Compare commits


89 Commits

Author SHA1 Message Date
Xe Iaso
63b8411220 Version 1.17.1: Asahi sas Brutus: Echo 1
- Added customization of authorization cookie expiration time with `--cookie-expiration-time` flag or envvar
- Updated `OG_PASSTHROUGH` to be true by default, so OpenGraph tags are now passed through out of the box
- Added the ability to [customize Anubis' HTTP status codes](./admin/configuration/custom-status-codes.mdx) ([#355](https://github.com/TecharoHQ/anubis/issues/355))

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-05-01 13:24:37 -04:00
Thomas Schuster
803aa35d66 Update known-instances.md (#407)
The FreeCAD forum is also using Anubis

Signed-off-by: Thomas Schuster <twihno@gmail.com>
2025-05-01 14:27:27 +00:00
polcak
cb523333a1 Update information on workarounds for JShelter (#399)
* Update information on workarounds for JShelter

The previous version unnecessarily lowered the protection that JShelter brings to its users. This commit provides three alternatives that users can apply; the recommended one is easier than the original and less invasive.

Signed-off-by: polcak <ipolcak@fit.vutbr.cz>

* docs(broken-extensions): amend wording, use an admonition, formatting

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: polcak <ipolcak@fit.vutbr.cz>
Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-05-01 13:20:39 +00:00
Jareth Gomes
91275c489f feat: make authorization cookie default expiration time customizable (#389) 2025-05-01 10:05:33 +00:00
Xe Iaso
feb3dd2bcb docs(known-instances): Comic Fanfiction Authors Archive
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-29 16:16:11 -04:00
Jason Cameron
06a762959f feat: enable Open Graph tag passthrough by default (#348)
* feat: enable Open Graph tag passthrough by default

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* docs(changelog): move opengraph passthrough on by default to unreleased

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Jason Cameron <git@jasoncameron.dev>
Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-04-29 19:19:46 +00:00
Xe Iaso
74d330cec5 feat(config): add ability to customize HTTP status codes Anubis returns (#393)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-29 15:13:44 -04:00
Xe Iaso
2935bd4aa7 docs(known-instances): add more Sourceware endpoints
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-29 15:08:37 -04:00
Xe Iaso
7d52e9ff5e docs(known-instances): add Sourceware
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-29 15:06:13 -04:00
Jason Cameron
4184b42282 feat(og): Forward host header (#370)
* feat(ogtags): enhance target URL handling for OGTagCache, support Unix sockets

Closes: #323 #319
Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* docs: update CHANGELOG.md to include Opengraph passthrough support for Unix sockets

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* docs: update CHANGELOG.md to include Opengraph passthrough support for Unix sockets

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* feat(ogtags): add option to consider host in Open Graph tag cache key

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* feat(ogtags): add option to consider host in OG tag cache key

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* test(ogtags): enhance tests for OGTagCache with host consideration scenarios

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* refactor(ogtags): extract constants for HTTP timeout and max content length

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* refactor(ogtags): restore fetchHTMLDocument method for cache key generation

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* refactor(ogtags): replace maxContentLength field with constant and ensure HTTP scheme is set correctly

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* fix(fetch): add proxy headers

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

---------

Signed-off-by: Jason Cameron <git@jasoncameron.dev>
2025-04-29 08:20:04 -04:00
Xe Iaso
7a20a46b0d docs(traefik): change title to Traefik
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-28 23:51:09 -04:00
dependabot[bot]
6daf08216e build(deps-dev): bump esbuild from 0.25.2 to 0.25.3 in the npm group (#388)
Bumps the npm group with 1 update: [esbuild](https://github.com/evanw/esbuild).


Updates `esbuild` from 0.25.2 to 0.25.3
- [Release notes](https://github.com/evanw/esbuild/releases)
- [Changelog](https://github.com/evanw/esbuild/blob/main/CHANGELOG.md)
- [Commits](https://github.com/evanw/esbuild/compare/v0.25.2...v0.25.3)

---
updated-dependencies:
- dependency-name: esbuild
  dependency-version: 0.25.3
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: npm
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-27 22:01:37 -04:00
dependabot[bot]
bd0e46dac3 build(deps): bump the github-actions group with 4 updates (#387)
Bumps the github-actions group with 4 updates: [docker/build-push-action](https://github.com/docker/build-push-action), [actions-hub/kubectl](https://github.com/actions-hub/kubectl), [astral-sh/setup-uv](https://github.com/astral-sh/setup-uv) and [github/codeql-action](https://github.com/github/codeql-action).


Updates `docker/build-push-action` from 6.15.0 to 6.16.0
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](471d1dc4e0...14487ce63c)

Updates `actions-hub/kubectl` from 1.32.3 to 1.33.0
- [Release notes](https://github.com/actions-hub/kubectl/releases)
- [Commits](9270913c29...e81783053d)

Updates `astral-sh/setup-uv` from 5.4.2 to 6.0.0
- [Release notes](https://github.com/astral-sh/setup-uv/releases)
- [Commits](d4b2f3b6ec...c7f87aa956)

Updates `github/codeql-action` from 3.28.15 to 3.28.16
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](45775bd823...28deaeda66)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-version: 6.16.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: github-actions
- dependency-name: actions-hub/kubectl
  dependency-version: 1.33.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: github-actions
- dependency-name: astral-sh/setup-uv
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
- dependency-name: github/codeql-action
  dependency-version: 3.28.16
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-28 01:40:38 +00:00
Dryusdan
76514f9f32 Bump AI-robots.txt rules to version 1.29 (#383) 2025-04-27 20:52:08 -04:00
Xe Iaso
b0f0913ea2 v1.17.0: Asahi sas Brutus
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-27 15:16:25 -04:00
Xe Iaso
5423ab013a ci(packages): final pre-release yeet bump (#384)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-27 16:54:03 +00:00
Jason Cameron
301c7a42bd refactor(lib): Split up anubis.go into some smaller files. (#379)
* refactor(logging): centralize logger creation in GetLogger function

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* refactor(logging): rename GetLogger to GetRequestLogger for clarity

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* refactor: streamline error handling and response methods

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* refactor(lib): Split anubis.go up into some smaller specialized methods

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* refactor(http): simplify error response handling by using respondWithStatus

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* chore(lib): run goimports

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Jason Cameron <git@jasoncameron.dev>
Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-04-27 13:36:39 +00:00
Kistaro Windrider
755c18a9a7 README: Fix broken link to policy definition docs. (#380) 2025-04-27 13:33:41 +00:00
Xe Iaso
0fa9906e3a test(config): add Xesite's old policy file to known good test cases (#382)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-27 13:32:50 +00:00
p0008874
b08580ca33 docs(known-instances): add Codeberg. (#381)
Signed-off-by: p0008874 <75534590+p0008874@users.noreply.github.com>
2025-04-27 12:17:27 +00:00
Xe Iaso
d8f923974e chore: blank commit to unbreak git
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-26 13:52:13 -04:00
Xe Iaso
ef52550e70 fix(config): remove trailing newlines in regexes (#373)
Closes #372

Fun YAML fact of the day:

What is the difference between how these two expressions are parsed?

```yaml
foo: >
  bar
```

```yaml
foo: >-
  bar
```

They are invisible in yaml, but when you evaluate them to JSON the
difference is obvious:

```json
{
  "foo": "bar\n"
}
```

```json
{
  "foo": "bar"
}
```

User-Agent strings, URL path values, and HTTP headers _do_ end in
newlines in HTTP/1.1 wire form, but that newline is usually stripped
before the server actually handles it. Also HTTP/2 is a thing and does
not terminate header values with newlines.

This change makes Anubis more aggressively detect mistaken uses of the
yaml `>` operator and nudges the user into using the yaml `>-` operator
which does not append the trailing newline.

I had honestly forgotten about this YAML behavior because it wasn't
relevant for so long. Oops! Glad I released a beta.

Whenever you get into this state, Anubis will throw a config parsing
error and then give you a message hinting at the folly of your ways.

```
config.Bot: regular expression ends with newline (try >- instead of > in yaml)
```
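A minimal sketch of what that detection can look like (illustrative only; the function and error-value names here are assumptions, not the actual Anubis config code — only the error text comes from the message above):

```go
package config

import (
	"errors"
	"strings"
)

// ErrRegexEndsWithNewline mirrors the error text shown above; the names in
// this sketch are illustrative, not the actual Anubis code.
var ErrRegexEndsWithNewline = errors.New("regular expression ends with newline (try >- instead of > in yaml)")

// validateRegex rejects regexes that end in a newline, which usually means
// ">" was used in YAML where ">-" was intended.
func validateRegex(expr string) error {
	if strings.HasSuffix(expr, "\n") {
		return ErrRegexEndsWithNewline
	}
	return nil
}
```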

Big thanks to https://yaml-multiline.info, this helped me realize my
folly instantly.

@aiverson, this is official permission to say "told you so".

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-26 14:01:15 +00:00
Xe Iaso
c669b47b57 fix(lib): make Anubis less paranoid (#365)
Previously Anubis would aggressively make sure that the client cookie
matched exactly what it should. This has turned out to be too paranoid
in practice and has caused problems with Happy Eyeballs et al.

This is a potential fix to #303 and #289.
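A hedged illustration of the general pattern, not the actual change: the commit does not say what the exact-match check compared, so the sketch below assumes (inferring from the Happy Eyeballs mention) that relaxing it means trusting only the cookie's signature and expiry rather than connection details.

```go
package auth

import (
	"net/http"
	"time"
)

// token is what a verified cookie decodes to. Assumption: the old strict
// check also compared connection details (e.g. the client address), which
// breaks when Happy Eyeballs flips between IPv4 and IPv6.
type token struct {
	Expiry time.Time
}

// cookieOK accepts any cookie whose signature verifies and which has not
// expired, without binding it to the current connection.
func cookieOK(r *http.Request, verify func(raw string) (*token, error)) bool {
	c, err := r.Cookie("within.website-x-cmd-anubis-auth")
	if err != nil {
		return false
	}
	tok, err := verify(c.Value)
	if err != nil {
		return false
	}
	return time.Now().Before(tok.Expiry)
}
```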
2025-04-25 15:02:55 -04:00
Jason Cameron
24f8ba729b feat: add support for a base prefix (#294)
* fix: rename variable for preventing collision in ED25519 private key handling

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* fix: remove unused import and debug print in xess.go

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* feat: introduce base path configuration for Anubis endpoints

Closes: #231
Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* hack(internal/test): skip these tests for now

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(yeet): unbreak package builds

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Jason Cameron <git@jasoncameron.dev>
Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-04-25 14:39:38 -04:00
Sandro
6858f66a62 Add check endpoint which can be used with nginx' auth_request function (#266)
* Add check endpoint which can be used with nginx' auth_request function

* feat(cmd): allow configuring redirect domains

* test: add test environment for the nginx_auth PR

This is a full local setup of the nginx_auth PR including HTTPS so that
it's easier to validate in isolation.

This requires an install of k3s (https://k3s.io) with traefik set to
listen on localhost. This will be amended in the future but for now this
works enough to ship it.

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(cmd|lib): allow empty redirect domains variable

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(test): add space to target variable in anubis container

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs(admin): rewrite subrequest auth docs, make generic

* docs(install): document REDIRECT_DOMAINS flag

Signed-off-by: Xe Iaso <me@xeiaso.net>

* feat(lib): clamp redirects to the same HTTP host

Only if REDIRECT_DOMAINS is not set.

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-04-25 17:38:02 +00:00
Xe Iaso
a5d796c679 docs(install): note that Anubis needs certain paths proxied (#363)
Closes #310

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-25 17:32:29 +00:00
Maher
4d3353fdc5 fix(docs): fix typos in Traefik integration docs (#361)
- Fix wording and typos in the `traefix.mdx` file
- Add rendering fix for the NOTE due to syntax
2025-04-25 08:47:48 -04:00
Aurelia
a420db8b8a feat: more elaborate XFF compute (#350)
* feat: more elaborate XFF compute

#328 followup

now featuring configuration and
defaults that shouldn't break most
setups.

fixes #344

* refactor: obvious condition eval order optimization

* feat: add StripLLU implementation

* chore: I'm sorry it's 7 AM

* test: add test environment for unix socket serving

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test(unix-socket-xff): comment out the shell script more

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(internal): fix logic bug in XFF computation, add tests

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(internal): prevent panic in local testing

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(internal): shuffle around return values to flow better

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-04-25 11:59:55 +00:00
Xe Iaso
5a4f68d384 docs(README): sponsor: Distrust
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-25 07:53:54 -04:00
Xe Iaso
bac942d2e8 sponsor: Distrust
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-25 00:25:03 -04:00
Xe Iaso
9fab74eb8a docs(README): enable dark mode for the star history view (#360)
Closes #340

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-25 03:26:35 +00:00
Diego E
e6a1c5309f docs: Fix nginx.mdx indentation (#359)
It would seem the file was originally edited for 2-space indentation but accidentally used tabs instead of actual spaces.

Signed-off-by: Diego E <diegoe@gnome.org>
2025-04-25 00:26:59 +00:00
Tristan Ross
5c29a66fcc docs(known-instances): add NixOS Hydra (#358) 2025-04-24 23:35:29 +00:00
Remy Zandwijk
b4f9269ae4 Fix Traegik but funny typos. (#356) 2025-04-24 18:54:53 +00:00
Igor Brai
54cd99c750 Fix: mojeekbot regex (#351)
* update mojeekbot UA regex

* add fix into changelog

* hack: empty commit to unbreak CI

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-04-24 14:24:41 +00:00
luzpaz
30b0ba8055 README: represent repology badge in 3 column format (#349)
Signed-off-by: luzpaz <luzpaz@users.noreply.github.com>
2025-04-24 02:17:26 +00:00
compilade
ce425a2c21 fix(lib): use correct URL for path checker in PassChallenge (#347)
Otherwise, `r.URL.Path` was always `/.within.website/x/cmd/anubis/api/pass-challenge`
and this didn't match the path checker rules correctly,
which caused a failure when the difficulty of these rules was non-default.
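A hedged sketch of the bug class: when a challenge-passing handler evaluates path-based rules, it has to use the path the visitor is being sent back to rather than its own endpoint path. The `redir` parameter below is an assumption for illustration, not necessarily how Anubis carries the target URL.

```go
package anubis

import (
	"net/http"
	"net/url"
)

// pathForRules returns the path that path-based rules should be evaluated
// against. Inside the pass-challenge handler, r.URL.Path is always the
// handler's own endpoint, so the redirect target (assumed here to arrive as
// a "redir" parameter) is used instead when present.
func pathForRules(r *http.Request) string {
	if redir := r.URL.Query().Get("redir"); redir != "" {
		if u, err := url.Parse(redir); err == nil && u.Path != "" {
			return u.Path
		}
	}
	return r.URL.Path
}
```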
2025-04-24 02:13:11 +00:00
Luciano Hillcoat - lucdev.net
2320ef4014 feat(docs): add documentation for default allow behavior (#346) 2025-04-24 01:13:21 +00:00
Xe Iaso
cfbe16f2d0 feat(xess): move CSS color definitions to CSS variables (#339)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-23 12:07:53 +00:00
dependabot[bot]
1b206175f8 build(deps): bump estree-util-value-to-estree in /docs (#336)
Bumps [estree-util-value-to-estree](https://github.com/remcohaszing/estree-util-value-to-estree) from 3.3.2 to 3.3.3.
- [Release notes](https://github.com/remcohaszing/estree-util-value-to-estree/releases)
- [Commits](https://github.com/remcohaszing/estree-util-value-to-estree/compare/v3.3.2...v3.3.3)

---
updated-dependencies:
- dependency-name: estree-util-value-to-estree
  dependency-version: 3.3.3
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-23 07:09:01 -04:00
dependabot[bot]
3135abd0ec build(deps): bump http-proxy-middleware from 2.0.7 to 2.0.9 in /docs (#335)
Bumps [http-proxy-middleware](https://github.com/chimurai/http-proxy-middleware) from 2.0.7 to 2.0.9.
- [Release notes](https://github.com/chimurai/http-proxy-middleware/releases)
- [Changelog](https://github.com/chimurai/http-proxy-middleware/blob/v2.0.9/CHANGELOG.md)
- [Commits](https://github.com/chimurai/http-proxy-middleware/compare/v2.0.7...v2.0.9)

---
updated-dependencies:
- dependency-name: http-proxy-middleware
  dependency-version: 2.0.9
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-23 07:08:53 -04:00
Xe Iaso
74e11505c6 feat: enable loading config fragments (#321)
* feat(config): support importing bot policy snippets

This changes the grammar of the Anubis bot policy config to allow
importing from internal shared rules or external rules on the
filesystem.

This lets you create a file at `/data/policies/block-evilbot.yaml` and
then import it with:

```yaml
bots:
- import: /data/policies/block-evilbot.yaml
```

This also explodes the default policy file into a bunch of composable
snippets.

Thank you @Aibrew for your example gitea Atom / RSS feed rules!

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(data): update botPolicies.json to use imports

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(cmd/anubis): extract bot policies with --extract-resources

This allows a user that doesn't have anything but the Anubis binary to
figure out what the default configuration does.

* docs(data/botPolices.yaml): document import syntax in-line

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(lib/policy): better test importing from JSON snippets

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs(admin): Add import syntax documentation

This documents the import syntax and is based on the block comment at
the top of the default bot policy file.

* docs(changelog): add note about importing snippets

Signed-off-by: Xe Iaso <me@xeiaso.net>

* style(lib/policy/config): use an error value instead of an inline error

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-23 07:01:28 -04:00
Aurelia
4e2c9de708 feat(cmd/anubis): compute full XFF header (#328)
* feat(cmd/anubis): compute full XFF header

this one is pretty important to not pass
through blindly, as many applications and
frameworks will trust them

* feat(cmd/anubis): skip XFF compute if remote address is loopback

* docs: update CHANGELOG
2025-04-23 04:06:47 +00:00
Xe Iaso
bec7199ab6 fix(docs): make the docs respect light/dark mode (#334)
Closes #333

I'm very bad at design so I just picked colors that looked reasonable
enough to me. Hopefully this will be enough to get us to the next stage!

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-23 04:01:02 +00:00
Jason Cameron
78bb67fbf7 fix: improve error handling and create the json encoder once #331 (#332)
* fix: improve error handling for resource closing and JSON encoding in MakeChallenge

* chore: update CHANGELOG with recent changes and improvements

* refactor: simplify RenderIndex function and improve error handling

---------

Signed-off-by: Jason Cameron <git@jasoncameron.dev>
2025-04-22 20:31:19 -04:00
Xe Iaso
2db4105479 Update known-instances.md (#324)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-22 13:25:05 +00:00
Xe Iaso
ac5a4bf58d chore(ci): migrate to TecharoHQ/yeet (#323)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-22 12:21:37 +00:00
Xe Iaso
3f1ce2d7ac data: disable generic-bot-catchall by default (#322)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-22 08:11:45 -04:00
Xe Iaso
84b28760b3 feat(lib): use Checker type instead of ad-hoc logic (#318)
This makes each check into its own type that has encapsulated check
logic, meaning that it's easier to add new checker implementations in
the future.
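A hedged sketch of the Checker idea; the interface and concrete types below are illustrative, not the actual lib API:

```go
package policy

import (
	"net/http"
	"regexp"
)

// Checker encapsulates the match logic for a single rule.
type Checker interface {
	Check(r *http.Request) (bool, error)
}

// UserAgentChecker matches the User-Agent header against a regexp.
type UserAgentChecker struct{ Re *regexp.Regexp }

func (c UserAgentChecker) Check(r *http.Request) (bool, error) {
	return c.Re.MatchString(r.Header.Get("User-Agent")), nil
}

// PathChecker matches the request path against a regexp. Adding a new kind
// of check means adding another small type like these two.
type PathChecker struct{ Re *regexp.Regexp }

func (c PathChecker) Check(r *http.Request) (bool, error) {
	return c.Re.MatchString(r.URL.Path), nil
}
```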

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-22 07:49:41 -04:00
Xe Iaso
9b7bf8ee06 docs: update default difficulty to 4
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-21 17:41:22 -04:00
Xe Iaso
1dae43f468 docs(known-instances): add Arch wiki
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-21 16:58:01 -04:00
dependabot[bot]
a14f917d68 build(deps): bump astral-sh/setup-uv in the github-actions group (#312)
Bumps the github-actions group with 1 update: [astral-sh/setup-uv](https://github.com/astral-sh/setup-uv).


Updates `astral-sh/setup-uv` from 5.4.1 to 5.4.2
- [Release notes](https://github.com/astral-sh/setup-uv/releases)
- [Commits](0c5e2b8115...d4b2f3b6ec)

---
updated-dependencies:
- dependency-name: astral-sh/setup-uv
  dependency-version: 5.4.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-20 21:16:38 -04:00
Jason Cameron
2ecb15adac Update CHANGELOG.md (#313)
Signed-off-by: Jason Cameron <git@jasoncameron.dev>
2025-04-20 21:16:21 -04:00
Xe Iaso
d40b5cfdab lib: move config to yaml (#307)
* lib: move config to yaml

Signed-off-by: Xe Iaso <me@xeiaso.net>

* web: run go generate

Signed-off-by: Xe Iaso <me@xeiaso.net>

* Add Haiku to known instances (#304)

Signed-off-by: Asmodeus <46908100+AsmodeumX@users.noreply.github.com>

* Add headers bot rule (#300)

* Closes #291: add headers support to bot policy rules

* Fix config validator

* update docs for JSON -> YAML

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: document http header based actions

Signed-off-by: Xe Iaso <me@xeiaso.net>

* lib: add missing test

Signed-off-by: Xe Iaso <me@xeiaso.net>

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: Asmodeus <46908100+AsmodeumX@users.noreply.github.com>
Co-authored-by: Asmodeus <46908100+AsmodeumX@users.noreply.github.com>
Co-authored-by: Neur0toxine <pashok9825@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-21 00:09:27 +00:00
Snoweuph
022eb59ff3 feat(docs): added info on how to configure traefik (#255)
* feat(docs): added info on how to configure traefik

* docs/admin/config/traefik: typo fixes

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-04-20 23:44:43 +00:00
Xe Iaso
65b533a014 Update known-instances.md (#309)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-20 22:31:45 +00:00
Thinkseal
2e3de07719 added git.lupancham.net to known instances (#296)
* Update CHANGELOG.md

Signed-off-by: Thinkseal <132022649+Thinkseal@users.noreply.github.com>

* Update known-instances.md to add git.lupancham.net

Signed-off-by: Thinkseal <132022649+Thinkseal@users.noreply.github.com>

---------

Signed-off-by: Thinkseal <132022649+Thinkseal@users.noreply.github.com>
Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-04-20 22:25:41 +00:00
Neur0toxine
7dc545cfa9 Add headers bot rule (#300)
* Closes #291: add headers support to bot policy rules

* Fix config validator
2025-04-20 22:18:21 +00:00
Asmodeus
1add24b907 Add Haiku to known instances (#304)
Signed-off-by: Asmodeus <46908100+AsmodeumX@users.noreply.github.com>
2025-04-20 22:02:03 +00:00
Xe Iaso
b15017d097 docs/admin/native-install: point people to the right places to get started easier
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-20 13:51:45 -04:00
Xe Iaso
2d22491e8c undo depot for now until I have the corp set up
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-20 09:07:54 -04:00
Xe Iaso
150523b9d3 docs/admin/environments/docker-compose: fix heading level
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-20 08:35:57 -04:00
Jason Cameron
6f652e711c Use outline shorthand (#293)
* fix(xess): suppress Go inspection warning for boolean expressions

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* feat: use outline shorthand

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

---------

Signed-off-by: Jason Cameron <git@jasoncameron.dev>
2025-04-19 16:03:11 -04:00
Xe Iaso
75b97eb03d docs/admin: break per-environment details into their own pages (#292)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-19 12:29:36 -04:00
Xe Iaso
f5827721c3 docs/admin/installation: Apache documentation (#290)
* docs/admin/installation: Apache documentation

Closes #277

This adds step by step documentation for setting up Anubis in Apache.

* docs/admin/installation: add selinux troubleshooting

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-19 14:23:56 +00:00
Dryusdan
a40c5e99fc Add more AI user agent in botPolicies.json (#249)
* Add more AI user agents in bot policies

* Update data/botPolicies.json

Signed-off-by: Xe Iaso <me@xeiaso.net>

* Fix trailing pipe that denies all requests

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-04-18 22:28:56 +00:00
Michael Jeanson
af831f0d7f Add 'Opera' to 'generic-browser' bot policy rule (#220)
After deploying Anubis, bot traffic is drastically reduced, but I still see a lot of requests from User-Agents that claim to be 'Opera', like so:

"Opera/9.90.(Windows NT 6.0; mt-MT) Presto/2.9.173 Version/10.00"
"Opera/8.46.(X11; Linux i686; fo-FO) Presto/2.9.161 Version/11.00"

Add 'Opera' to the generic-browser rule to also challenge them.

Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-04-18 17:57:02 +00:00
Remilia Da Costa Faro
095e18d0c8 Allow ranges from the Internet Archive (AS7941) (#276)
* Allow ranges from the Internet Archive (AS7941)

* Updated changelog

* Update data/botPolicies.json

Signed-off-by: Xe Iaso <me@xeiaso.net>

* Removed overlapping CIDR for internet-archive in botPolicies.json

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-04-18 04:13:01 +00:00
Ryan Cao
f844dba3dc perf: embed challenge data in HTML (#279) 2025-04-18 00:06:37 -04:00
Xe Iaso
736c3ade09 .github/funding: add GitHub sponsors
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-17 23:48:36 -04:00
Jeroen Massar
b20774d9a6 Docs: add nginx with Anubis in the middle configuration example (#282)
* Add documentation example for a NGINX configuration that demonstrates how to insert Anubis in the middle of a normal configuration.

Signed-off-by: Jeroen Massar <jeroen@massar.ch>

* Add changelog entry

Signed-off-by: Jeroen Massar <jeroen@massar.ch>

* docs/admin/installation: rephrasing and diagrams

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs/admin/installation: flatten down the nginx config

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs/admin/installation: other fixups and note the assumptions at play

Thanks @SuperSandro2000!

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Jeroen Massar <jeroen@massar.ch>
Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-04-18 03:28:35 +00:00
Xe Iaso
2c94090fde README: add contributor images
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-16 12:40:56 -04:00
fossdd
df3509ec99 docs/blog: remove (#273)
still leftovers from the docusaurus template
2025-04-15 23:23:26 -04:00
Paul Wilde
8689143214 Create Anubis FreeBSD rc.d script (#274)
* Create anubis.freebsd

add a FreeBSD rc.d script so Anubis can be run as a FreeBSD daemon

Signed-off-by: Paul Wilde <31094984+pswilde@users.noreply.github.com>

* Update CHANGELOG.md

Signed-off-by: Paul Wilde <31094984+pswilde@users.noreply.github.com>

---------

Signed-off-by: Paul Wilde <31094984+pswilde@users.noreply.github.com>
2025-04-15 12:05:13 +00:00
dependabot[bot]
5d4d2e3e2a build(deps): bump github/codeql-action in the github-actions group (#264)
Bumps the github-actions group with 1 update: [github/codeql-action](https://github.com/github/codeql-action).


Updates `github/codeql-action` from 3.28.13 to 3.28.15
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](1b549b9259...45775bd823)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 3.28.15
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-15 05:56:14 -04:00
dependabot[bot]
2ebce26709 build(deps): bump the gomod group with 3 updates (#265)
* build(deps): bump the gomod group with 3 updates

Bumps the gomod group with 3 updates: [github.com/playwright-community/playwright-go](https://github.com/playwright-community/playwright-go), [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) and [golang.org/x/net](https://github.com/golang/net).


Updates `github.com/playwright-community/playwright-go` from 0.5001.0 to 0.5101.0
- [Release notes](https://github.com/playwright-community/playwright-go/releases)
- [Commits](https://github.com/playwright-community/playwright-go/compare/v0.5001.0...v0.5101.0)

Updates `github.com/prometheus/client_golang` from 1.21.1 to 1.22.0
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.21.1...v1.22.0)

Updates `golang.org/x/net` from 0.38.0 to 0.39.0
- [Commits](https://github.com/golang/net/compare/v0.38.0...v0.39.0)

---
updated-dependencies:
- dependency-name: github.com/playwright-community/playwright-go
  dependency-version: 0.5101.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: gomod
- dependency-name: github.com/prometheus/client_golang
  dependency-version: 1.22.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: gomod
- dependency-name: golang.org/x/net
  dependency-version: 0.39.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: gomod
...

Signed-off-by: dependabot[bot] <support@github.com>

* internal/test: bump playwright version

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-04-15 05:55:50 -04:00
B4uti4github
ac273a8ad5 Update custom.css (#271) 2025-04-15 09:51:18 +00:00
Jason Cameron
9865e3ded8 fix(fetch): improve error handling for Content-Type parsing (#253)
* fix(fetch): improve error handling for Content-Type parsing

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* fix(fetch): rename OgHandledError to ErrOgHandled for staticcheck to like me

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

---------

Signed-off-by: Jason Cameron <git@jasoncameron.dev>
2025-04-13 15:59:58 -04:00
rayer
3438595f32 cmd/containerbuild/main.go: fix docker tag parsing (#260)
Change the parsing of repository and tag to match the last colon. This fixes container builds when the repository already contains an earlier colon.
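A minimal sketch of the last-colon split (illustrative only; not the actual cmd/containerbuild code):

```go
package main

import (
	"fmt"
	"strings"
)

// splitRepoTag splits an image reference on the last colon, so registries
// that include a port (an earlier colon) still parse correctly. A colon
// followed by a "/" belongs to the registry host, not the tag.
func splitRepoTag(ref string) (repo, tag string) {
	if i := strings.LastIndex(ref, ":"); i >= 0 && !strings.Contains(ref[i+1:], "/") {
		return ref[:i], ref[i+1:]
	}
	return ref, "latest"
}

func main() {
	fmt.Println(splitRepoTag("registry.example.com:5000/techarohq/anubis:v1.17.1"))
	// prints: registry.example.com:5000/techarohq/anubis v1.17.1
}
```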

Signed-off-by: rayer <70722312+rayes0@users.noreply.github.com>
2025-04-13 15:58:43 -04:00
Xe Iaso
62e20a213a use depot builders (#262)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-13 15:57:47 -04:00
hyperdefined
f2cb6ae121 feat(docs): known users cleanup (#257) 2025-04-13 18:26:42 +00:00
rayer
92dbc22db0 docs/docs/user/known-instances.md: remove duplicate scioly.org mention (#259)
Signed-off-by: rayer <70722312+rayes0@users.noreply.github.com>
2025-04-13 05:00:12 +00:00
Jason Cameron
971e781965 feat(docs): expand known instances list with new entries and collapsible sections (#254)
Signed-off-by: Jason Cameron <git@jasoncameron.dev>
2025-04-12 19:47:25 +00:00
Patrick Linnane
503f466ecf workflows: hash pin more Actions (#241)
Signed-off-by: Patrick Linnane <patrick@linnane.io>
2025-04-11 22:18:13 -04:00
Maher
81307bcb5c feat: update botPolicies for DuckDuckGo web crawler (#250)
- updates botPolicies with IPs from the website
- adds the updated information to the `CHANGELOG.md` file

Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-04-12 02:13:38 +00:00
fossdd
40d7b2ec55 docs/user/known-instances: add page (#214)
I've been keeping a list in my head for a while, but I think a canonical
location with most known instances could help others, e.g. for deciding
whether to use Anubis or not and to get in contact with Anubis operators.
2025-04-12 02:04:57 +00:00
Henri Vasserman
20f1d40b61 dev: Improvements to build scripts (#232)
* dev: make sure that stuff is building properly

* chore: changelog

* remove npx
2025-04-11 22:00:48 -04:00
Xe Iaso
51bd058f2d v1.16.0 (#244)
* v1.16.0

Signed-off-by: Xe Iaso <me@xeiaso.net>

* update packaging docs

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-04-10 00:07:14 +00:00
Patrick Linnane
1614504922 workflows: hash pin Actions (#203)
Signed-off-by: Patrick Linnane <patrick@linnane.io>
2025-04-08 00:45:06 -04:00
175 changed files with 6838 additions and 3821 deletions

3
.github/FUNDING.yml vendored
View File

@@ -1 +1,2 @@
patreon: cadey
patreon: cadey
github: xe

View File

@@ -12,10 +12,10 @@ permissions:
jobs:
build:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-tags: true
fetch-depth: 0
@@ -25,7 +25,7 @@ jobs:
uses: Homebrew/actions/setup-homebrew@master
- name: Setup Homebrew cellar cache
uses: actions/cache@v4
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: |
/home/linuxbrew/.linuxbrew/Cellar
@@ -45,11 +45,9 @@ jobs:
run: |
brew bundle
- uses: actions-rust-lang/setup-rust-toolchain@v1
- name: Docker meta
id: meta
uses: docker/metadata-action@v5
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
with:
images: ghcr.io/techarohq/anubis

View File

@@ -18,10 +18,10 @@ permissions:
jobs:
build:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-tags: true
fetch-depth: 0
@@ -31,7 +31,7 @@ jobs:
uses: Homebrew/actions/setup-homebrew@master
- name: Setup Homebrew cellar cache
uses: actions/cache@v4
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: |
/home/linuxbrew/.linuxbrew/Cellar
@@ -50,11 +50,9 @@ jobs:
- name: Install Brew dependencies
run: |
brew bundle
- uses: actions-rust-lang/setup-rust-toolchain@v1
- name: Log into registry
uses: docker/login-action@v3
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: techarohq
@@ -62,7 +60,7 @@ jobs:
- name: Docker meta
id: meta
uses: docker/metadata-action@v5
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
with:
images: ghcr.io/techarohq/anubis
@@ -76,8 +74,8 @@ jobs:
SLOG_LEVEL: debug
- name: Generate artifact attestation
uses: actions/attest-build-provenance@v2
uses: actions/attest-build-provenance@c074443f1aee8d4aeeae555aebba3282517141b2 # v2.2.3
with:
subject-name: ghcr.io/techarohq/anubis
subject-digest: ${{ steps.build.outputs.digest }}
push-to-registry: true
push-to-registry: true

View File

@@ -13,18 +13,18 @@ permissions:
jobs:
build:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- name: Log into registry
uses: docker/login-action@v3
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: techarohq
@@ -32,13 +32,13 @@ jobs:
- name: Docker meta
id: meta
uses: docker/metadata-action@v5
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
with:
images: ghcr.io/techarohq/anubis/docs
- name: Build and push
id: build
uses: docker/build-push-action@v6
uses: docker/build-push-action@14487ce63c7a62a4a324b0bfb37086795e31c6c1 # v6.16.0
with:
context: ./docs
cache-to: type=gha
@@ -49,14 +49,14 @@ jobs:
push: true
- name: Apply k8s manifests to aeacus
uses: actions-hub/kubectl@master
uses: actions-hub/kubectl@e81783053d902f50d752d21a6d99cf9689a652e1 # v1.33.0
env:
KUBE_CONFIG: ${{ secrets.AEACUS_KUBECONFIG }}
with:
args: apply -k docs/manifest
- name: Apply k8s manifests to aeacus
uses: actions-hub/kubectl@master
uses: actions-hub/kubectl@e81783053d902f50d752d21a6d99cf9689a652e1 # v1.33.0
env:
KUBE_CONFIG: ${{ secrets.AEACUS_KUBECONFIG }}
with:

39
.github/workflows/docs-test.yml vendored Normal file
View File

@@ -0,0 +1,39 @@
name: Docs test build
on:
pull_request:
branches: [ "main" ]
permissions:
contents: read
actions: write
jobs:
build:
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- name: Docker meta
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
with:
images: ghcr.io/techarohq/anubis/docs
- name: Build and push
id: build
uses: docker/build-push-action@14487ce63c7a62a4a324b0bfb37086795e31c6c1 # v6.16.0
with:
context: ./docs
cache-to: type=gha
cache-from: type=gha
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64
push: false

View File

@@ -13,9 +13,9 @@ permissions:
jobs:
go_tests:
#runs-on: alrest-techarohq
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
@@ -28,7 +28,7 @@ jobs:
uses: Homebrew/actions/setup-homebrew@master
- name: Setup Homebrew cellar cache
uses: actions/cache@v4
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: |
/home/linuxbrew/.linuxbrew/Cellar
@@ -48,10 +48,8 @@ jobs:
run: |
brew bundle
- uses: actions-rust-lang/setup-rust-toolchain@v1
- name: Setup Golang caches
uses: actions/cache@v4
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: |
~/.cache/go-build
@@ -61,7 +59,7 @@ jobs:
${{ runner.os }}-golang-
- name: Cache playwright binaries
uses: actions/cache@v4
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
id: playwright-cache
with:
path: |
@@ -70,8 +68,8 @@ jobs:
- name: install playwright browsers
run: |
npx --yes playwright@1.50.1 install --with-deps
npx --yes playwright@1.50.1 run-server --port 9001 &
npx --yes playwright@1.51.1 install --with-deps
npx --yes playwright@1.51.1 run-server --port 9001 &
- name: install node deps
run: |
@@ -84,6 +82,6 @@ jobs:
- name: Test
run: npm run test
- uses: dominikh/staticcheck-action@v1
- uses: dominikh/staticcheck-action@fe1dd0c3658873b46f8c9bb3291096a617310ca6 # v1.3.1
with:
version: "latest"

View File

@@ -11,9 +11,9 @@ permissions:
jobs:
package_builds:
#runs-on: alrest-techarohq
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
fetch-tags: true
@@ -28,7 +28,7 @@ jobs:
uses: Homebrew/actions/setup-homebrew@master
- name: Setup Homebrew cellar cache
uses: actions/cache@v4
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: |
/home/linuxbrew/.linuxbrew/Cellar
@@ -49,7 +49,7 @@ jobs:
brew bundle
- name: Setup Golang caches
uses: actions/cache@v4
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: |
~/.cache/go-build
@@ -64,8 +64,9 @@ jobs:
- name: Build Packages
run: |
wget https://github.com/Xe/x/releases/download/v1.13.4/yeet_1.13.4_amd64.deb -O var/yeet.deb
wget https://github.com/TecharoHQ/yeet/releases/download/v0.2.1/yeet_0.2.1_amd64.deb -O var/yeet.deb
sudo apt -y install -f ./var/yeet.deb
rm ./var/yeet.deb
yeet
- name: Upload released artifacts
@@ -78,4 +79,4 @@ jobs:
cd var
for file in *; do
gh release upload $RELEASE $file
done
done

View File

@@ -13,9 +13,9 @@ permissions:
jobs:
package_builds:
#runs-on: alrest-techarohq
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
fetch-tags: true
@@ -30,7 +30,7 @@ jobs:
uses: Homebrew/actions/setup-homebrew@master
- name: Setup Homebrew cellar cache
uses: actions/cache@v4
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: |
/home/linuxbrew/.linuxbrew/Cellar
@@ -51,7 +51,7 @@ jobs:
brew bundle
- name: Setup Golang caches
uses: actions/cache@v4
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: |
~/.cache/go-build
@@ -66,11 +66,12 @@ jobs:
- name: Build Packages
run: |
wget https://github.com/Xe/x/releases/download/v1.13.4/yeet_1.13.4_amd64.deb -O var/yeet.deb
wget https://github.com/TecharoHQ/yeet/releases/download/v0.2.1/yeet_0.2.1_amd64.deb -O var/yeet.deb
sudo apt -y install -f ./var/yeet.deb
rm ./var/yeet.deb
yeet
- uses: actions/upload-artifact@v4
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: packages
path: var/*
path: var/*

View File

@@ -11,17 +11,17 @@ on:
jobs:
zizmor:
name: zizmor latest via PyPI
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
permissions:
security-events: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Install the latest version of uv
uses: astral-sh/setup-uv@v5
uses: astral-sh/setup-uv@c7f87aa956e4c323abf06d5dec078e358f6b4d04 # v6.0.0
- name: Run zizmor 🌈
run: uvx zizmor --format sarif . > results.sarif
@@ -29,7 +29,7 @@ jobs:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Upload SARIF file
uses: github/codeql-action/upload-sarif@v3
uses: github/codeql-action/upload-sarif@28deaeda66b76a05916b6923827895f2b14ab387 # v3.28.16
with:
sarif_file: results.sarif
category: zizmor

4
.gitignore vendored
View File

@@ -20,7 +20,3 @@ node_modules
# how does this get here
doc/VERSION
*.wasm
target

482
Cargo.lock generated
View File

@@ -1,482 +0,0 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 4
[[package]]
name = "anubis"
version = "0.1.0"
dependencies = [
"wee_alloc",
]
[[package]]
name = "argon2"
version = "0.5.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3c3610892ee6e0cbce8ae2700349fcf8f98adb0dbfbee85aec3c9179d29cc072"
dependencies = [
"base64ct",
"blake2",
"cpufeatures",
"password-hash",
]
[[package]]
name = "argon2id"
version = "0.1.0"
dependencies = [
"anubis",
"argon2",
]
[[package]]
name = "arrayvec"
version = "0.7.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7c02d123df017efcdfbd739ef81735b36c5ba83ec3c59c80a9d7ecc718f92e50"
[[package]]
name = "autocfg"
version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ace50bade8e6234aa140d9a2f552bbee1db4d353f69b8217bc503490fc1a9f26"
[[package]]
name = "base64ct"
version = "1.7.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "89e25b6adfb930f02d1981565a6e5d9c547ac15a96606256d3b59040e5cd4ca3"
[[package]]
name = "bitflags"
version = "2.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5c8214115b7bf84099f1309324e63141d4c5d7cc26862f97a0a857dbefe165bd"
[[package]]
name = "blake2"
version = "0.10.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "46502ad458c9a52b69d4d4d32775c788b7a1b85e8bc9d482d92250fc0e3f8efe"
dependencies = [
"digest 0.10.7",
]
[[package]]
name = "block-buffer"
version = "0.10.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3078c7629b62d3f0439517fa394996acacc5cbc91c5a20d8c658e77abd503a71"
dependencies = [
"generic-array",
]
[[package]]
<<<<<<< HEAD
name = "block-buffer"
version = "0.11.0-rc.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a229bfd78e4827c91b9b95784f69492c1b77c1ab75a45a8a037b139215086f94"
dependencies = [
"hybrid-array",
]
[[package]]
name = "cfg-if"
version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4785bdd1c96b2a846b2bd7cc02e86b6b3dbf14e7e53446c4f54c92a361040822"
=======
name = "byteorder"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1fd0f2584146f6f2ef48085050886acf353beff7305ebd1ae69500e27c67f64b"
>>>>>>> 8793853 (feat(wasm): broken equi-x solver)
[[package]]
name = "cfg-if"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd"
[[package]]
name = "const-oid"
version = "0.10.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0dabb6555f92fb9ee4140454eb5dcd14c7960e1225c6d1a6cc361f032947713e"
[[package]]
name = "cpufeatures"
version = "0.2.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "59ed5838eebb26a2bb2e58f6d5b5316989ae9d08bab10e0e6d103e656d1b0280"
dependencies = [
"libc",
]
[[package]]
name = "crypto-common"
version = "0.1.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1bfb12502f3fc46cca1bb51ac28df9d618d813cdc3d2f25b9fe775a34af26bb3"
dependencies = [
"generic-array",
"typenum",
]
[[package]]
name = "crypto-common"
version = "0.2.0-rc.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "170d71b5b14dec99db7739f6fc7d6ec2db80b78c3acb77db48392ccc3d8a9ea0"
dependencies = [
"hybrid-array",
]
[[package]]
name = "digest"
version = "0.10.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9ed9a281f7bc9b7576e61468ba615a66a5c8cfdff42420a70aa82701a3b1e292"
dependencies = [
"block-buffer 0.10.4",
"crypto-common 0.1.6",
"subtle",
]
[[package]]
<<<<<<< HEAD
name = "digest"
version = "0.11.0-pre.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6c478574b20020306f98d61c8ca3322d762e1ff08117422ac6106438605ea516"
dependencies = [
"block-buffer 0.11.0-rc.4",
"const-oid",
"crypto-common 0.2.0-rc.2",
]
[[package]]
=======
name = "dynasm"
version = "3.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f0cecff24995c8a5a3c3169cff4c733fe7d91aedf5d8cc96238738bfe53186b8"
dependencies = [
"bitflags",
"byteorder",
"lazy_static",
"proc-macro-error2",
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "dynasmrt"
version = "3.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7f5eab96b8688bcbf1d2354bcfe0261005ac1dd0616747152ada34948d4e9582"
dependencies = [
"byteorder",
"dynasm",
"fnv",
"memmap2",
]
[[package]]
name = "equix"
version = "0.1.0"
dependencies = [
"anubis",
"equix 0.2.3",
]
[[package]]
name = "equix"
version = "0.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "194df1f219a987430956f20faaf702fd4d434b1b2f7300014119854184107ac7"
dependencies = [
"arrayvec",
"hashx",
"num-traits",
"thiserror",
"visibility",
]
[[package]]
name = "fixed-capacity-vec"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6b31a14f5ee08ed1a40e1252b35af18bed062e3f39b69aab34decde36bc43e40"
[[package]]
name = "fnv"
version = "1.0.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3f9eec918d3f24069decb9af1554cad7c880e2da24a9afd88aca000531ab82c1"
[[package]]
>>>>>>> 8793853 (feat(wasm): broken equi-x solver)
name = "generic-array"
version = "0.14.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "85649ca51fd72272d7821adaf274ad91c288277713d9c18820d8499a7ff69e9a"
dependencies = [
"typenum",
"version_check",
]
[[package]]
<<<<<<< HEAD
name = "hybrid-array"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4dab50e193aebe510fe0e40230145820e02f48dae0cf339ea4204e6e708ff7bd"
dependencies = [
"typenum",
]
[[package]]
=======
name = "hashx"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "572a61c460658ae7db71878dd2caa163f47ffe041cb40aeee1483d1ffbf5e84b"
dependencies = [
"arrayvec",
"blake2",
"dynasmrt",
"fixed-capacity-vec",
"hex",
"rand_core 0.9.3",
"thiserror",
]
[[package]]
name = "hex"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"
[[package]]
name = "lazy_static"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bbd2bcb4c963f2ddae06a2efc7e9f3591312473c50c6685e1f298068316e66fe"
[[package]]
>>>>>>> 8793853 (feat(wasm): broken equi-x solver)
name = "libc"
version = "0.2.171"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c19937216e9d3aa9956d9bb8dfc0b0c8beb6058fc4f7a4dc4d850edf86a237d6"
[[package]]
<<<<<<< HEAD
name = "memory_units"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8452105ba047068f40ff7093dd1d9da90898e63dd61736462e9cdda6a90ad3c3"
=======
name = "memmap2"
version = "0.9.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fd3f7eed9d3848f8b98834af67102b720745c4ec028fcd0aa0239277e7de374f"
dependencies = [
"libc",
]
[[package]]
name = "num-traits"
version = "0.2.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "071dfc062690e90b734c0b2273ce72ad0ffa95f0c74596bc250dcfd960262841"
dependencies = [
"autocfg",
]
>>>>>>> 8793853 (feat(wasm): broken equi-x solver)
[[package]]
name = "password-hash"
version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "346f04948ba92c43e8469c1ee6736c7563d71012b17d40745260fe106aac2166"
dependencies = [
"base64ct",
"rand_core 0.6.4",
"subtle",
]
[[package]]
name = "proc-macro-error-attr2"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "96de42df36bb9bba5542fe9f1a054b8cc87e172759a1868aa05c1f3acc89dfc5"
dependencies = [
"proc-macro2",
"quote",
]
[[package]]
name = "proc-macro-error2"
version = "2.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "11ec05c52be0a07b08061f7dd003e7d7092e0472bc731b4af7bb1ef876109802"
dependencies = [
"proc-macro-error-attr2",
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "proc-macro2"
version = "1.0.94"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a31971752e70b8b2686d7e46ec17fb38dad4051d94024c88df49b667caea9c84"
dependencies = [
"unicode-ident",
]
[[package]]
name = "quote"
version = "1.0.40"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1885c039570dc00dcb4ff087a89e185fd56bae234ddc7f056a945bf36467248d"
dependencies = [
"proc-macro2",
]
[[package]]
name = "rand_core"
version = "0.6.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ec0be4795e2f6a28069bec0b5ff3e2ac9bafc99e6a9a7dc3547996c5c816922c"
[[package]]
name = "rand_core"
version = "0.9.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "99d9a13982dcf210057a8a78572b2217b667c3beacbf3a0d8b454f6f82837d38"
[[package]]
name = "sha2"
version = "0.11.0-pre.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "19b4241d1a56954dce82cecda5c8e9c794eef6f53abe5e5216bac0a0ea71ffa7"
dependencies = [
"cfg-if 1.0.0",
"cpufeatures",
"digest 0.11.0-pre.10",
]
[[package]]
name = "sha256"
version = "0.1.0"
dependencies = [
"anubis",
"sha2",
]
[[package]]
name = "subtle"
version = "2.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "13c2bddecc57b384dee18652358fb23172facb8a2c51ccc10d74c157bdea3292"
[[package]]
name = "syn"
version = "2.0.100"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b09a44accad81e1ba1cd74a32461ba89dee89095ba17b32f5d03683b1b1fc2a0"
dependencies = [
"proc-macro2",
"quote",
"unicode-ident",
]
[[package]]
name = "thiserror"
version = "2.0.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "567b8a2dae586314f7be2a752ec7474332959c6460e02bde30d702a66d488708"
dependencies = [
"thiserror-impl",
]
[[package]]
name = "thiserror-impl"
version = "2.0.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7f7cf42b4507d8ea322120659672cf1b9dbb93f8f2d4ecfd6e51350ff5b17a1d"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "typenum"
version = "1.18.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1dccffe3ce07af9386bfd29e80c0ab1a8205a2fc34e4bcd40364df902cfa8f3f"
[[package]]
name = "unicode-ident"
version = "1.0.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5a5f39404a5da50712a4c1eecf25e90dd62b613502b7e925fd4e4d19b5c96512"
[[package]]
name = "version_check"
version = "0.9.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0b928f33d975fc6ad9f86c8f283853ad26bdd5b10b7f1542aa2fa15e2289105a"
[[package]]
<<<<<<< HEAD
name = "wee_alloc"
version = "0.4.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dbb3b5a6b2bb17cb6ad44a2e68a43e8d2722c997da10e928665c72ec6c0a0b8e"
dependencies = [
"cfg-if 0.1.10",
"libc",
"memory_units",
"winapi",
]
[[package]]
name = "winapi"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419"
dependencies = [
"winapi-i686-pc-windows-gnu",
"winapi-x86_64-pc-windows-gnu",
]
[[package]]
name = "winapi-i686-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
[[package]]
name = "winapi-x86_64-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
=======
name = "visibility"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d674d135b4a8c1d7e813e2f8d1c9a58308aee4a680323066025e53132218bd91"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
>>>>>>> 8793853 (feat(wasm): broken equi-x solver)

View File

@@ -1,10 +0,0 @@
[workspace]
resolver = "2"
members = ["wasm/anubis", "wasm/pow/*"]
[profile.release]
#strip = true
opt-level = "s"
lto = "thin"
codegen-units = 1
panic = "abort"

View File

@@ -1,33 +1,31 @@
NODE_MODULES = node_modules
VERSION := $(shell cat ./VERSION)
VERSION= $(shell cat ./VERSION)
GO?= go
NPM?= npm
export RUSTFLAGS=-Ctarget-feature=+simd128
.PHONY: build assets deps lint prebaked-build test wasm
assets:
npm run assets
deps:
npm ci
go mod download
build: deps
npm run build
@echo "Anubis is now built to ./var/anubis"
.PHONY: build assets deps lint prebaked-build test
all: build
lint:
go vet ./...
go tool staticcheck ./...
deps:
$(NPM) ci
$(GO) mod download
assets: PATH:=$(PWD)/node_modules/.bin:$(PATH)
assets: deps
$(GO) generate ./...
./web/build.sh
./xess/build.sh
build: assets
$(GO) build -o ./var/anubis ./cmd/anubis
@echo "Anubis is now built to ./var/anubis"
lint: assets
$(GO) vet ./...
$(GO) tool staticcheck ./...
prebaked-build:
go build -o ./var/anubis -ldflags "-X 'github.com/TecharoHQ/anubis.Version=$(VERSION)'" ./cmd/anubis
$(GO) build -o ./var/anubis -ldflags "-X 'github.com/TecharoHQ/anubis.Version=$(VERSION)'" ./cmd/anubis
test:
npm run test
wasm:
cargo build --release --target wasm32-unknown-unknown
cp -vf ./target/wasm32-unknown-unknown/release/*.wasm ./web/static/wasm
test: assets
$(GO) test ./...

View File

@@ -10,11 +10,19 @@
![language count](https://img.shields.io/github/languages/count/TecharoHQ/anubis)
![repo size](https://img.shields.io/github/repo-size/TecharoHQ/anubis)
Anubis [weighs the soul of your connection](https://en.wikipedia.org/wiki/Weighing_of_souls) using a sha256 proof-of-work challenge in order to protect upstream resources from scraper bots.
## Sponsors
Installing and using this will likely result in your website not being indexed by some search engines. This is considered a feature of Anubis, not a bug.
Anubis is brought to you by sponsors and donors like:
This is a bit of a nuclear response, but AI scraper bots scraping so aggressively have forced my hand. I hate that I have to do this, but this is what we get for the modern Internet because bots don't conform to standards like robots.txt, even when they claim to.
[![Distrust](./docs/static/img/sponsors/distrust-logo.webp)](https://distrust.co)
## Overview
Anubis [weighs the soul of your connection](https://en.wikipedia.org/wiki/Weighing_of_souls) using a proof-of-work challenge in order to protect upstream resources from scraper bots.
This program is designed to help protect the small internet from the endless storm of requests that flood in from AI companies. Anubis is as lightweight as possible to ensure that everyone can afford to protect the communities closest to them.
Anubis is a bit of a nuclear response. This will result in your website being blocked from smaller scrapers and may inhibit "good bots" like the Internet Archive. You can configure [bot policy definitions](./docs/docs/admin/policies.mdx) to explicitly allowlist them and we are working on a curated set of "known good" bots to allow for a compromise between discoverability and uptime.
In most cases, you should not need this and can probably get by using Cloudflare to protect a given origin. However, for circumstances where you can't or won't use Cloudflare, Anubis is there for you.
@@ -28,11 +36,17 @@ For live chat, please join the [Patreon](https://patreon.com/cadey) and ask in t
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=TecharoHQ/anubis&type=Date)](https://www.star-history.com/#TecharoHQ/anubis&Date)
<a href="https://www.star-history.com/#TecharoHQ/anubis&Date">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=TecharoHQ/anubis&type=Date&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=TecharoHQ/anubis&type=Date" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=TecharoHQ/anubis&type=Date" />
</picture>
</a>
## Packaging Status
[![Packaging status](https://repology.org/badge/vertical-allrepos/anubis-anti-crawler.svg)](https://repology.org/project/anubis-anti-crawler/versions)
[![Packaging status](https://repology.org/badge/vertical-allrepos/anubis-anti-crawler.svg?columns=3)](https://repology.org/project/anubis-anti-crawler/versions)
## Contributors

View File

@@ -1 +1 @@
1.15.1
1.17.1

View File

@@ -1,6 +1,8 @@
// Package Anubis contains the version number of Anubis.
// Package anubis contains the version number of Anubis.
package anubis
import "time"
// Version is the current version of Anubis.
//
// This variable is set at build time using the -X linker flag. If not set,
@@ -11,9 +13,18 @@ var Version = "devel"
// access.
const CookieName = "within.website-x-cmd-anubis-auth"
// CookieDefaultExpirationTime is the amount of time before the cookie/JWT expires.
const CookieDefaultExpirationTime = 7 * 24 * time.Hour
// BasePrefix is a global prefix for all Anubis endpoints. Can be emptied to remove the prefix entirely.
var BasePrefix = ""
// StaticPath is the location where all static Anubis assets are located.
const StaticPath = "/.within.website/x/cmd/anubis/"
// APIPrefix is the location where all Anubis API endpoints are located.
const APIPrefix = "/.within.website/x/cmd/anubis/api/"
// DefaultDifficulty is the default "difficulty" (number of leading zeroes)
// that must be met by the client in order to pass the challenge.
const DefaultDifficulty uint32 = 4
const DefaultDifficulty = 4

View File

@@ -20,7 +20,6 @@ import (
"os"
"os/signal"
"path/filepath"
"regexp"
"strconv"
"strings"
"sync"
@@ -28,6 +27,7 @@ import (
"time"
"github.com/TecharoHQ/anubis"
"github.com/TecharoHQ/anubis/data"
"github.com/TecharoHQ/anubis/internal"
libanubis "github.com/TecharoHQ/anubis/lib"
botPolicy "github.com/TecharoHQ/anubis/lib/policy"
@@ -38,10 +38,12 @@ import (
)
var (
basePrefix = flag.String("base-prefix", "", "base prefix (root URL) the application is served under e.g. /myapp")
bind = flag.String("bind", ":8923", "network address to bind HTTP to")
bindNetwork = flag.String("bind-network", "tcp", "network family to bind HTTP to, e.g. unix, tcp")
challengeDifficulty = flag.Int("difficulty", int(anubis.DefaultDifficulty), "difficulty of the challenge")
challengeDifficulty = flag.Int("difficulty", anubis.DefaultDifficulty, "difficulty of the challenge")
cookieDomain = flag.String("cookie-domain", "", "if set, the top-level domain that the Anubis cookie will be valid for")
cookieExpiration = flag.Duration("cookie-expiration-time", anubis.CookieDefaultExpirationTime, "The amount of time the authorization cookie is valid for")
cookiePartitioned = flag.Bool("cookie-partitioned", false, "if true, sets the partitioned flag on Anubis cookies, enabling CHIPS support")
ed25519PrivateKeyHex = flag.String("ed25519-private-key-hex", "", "private key used to sign JWTs, if not set a random one will be assigned")
ed25519PrivateKeyHexFile = flag.String("ed25519-private-key-hex-file", "", "file name containing value for ed25519-private-key-hex")
@@ -50,13 +52,15 @@ var (
socketMode = flag.String("socket-mode", "0770", "socket mode (permissions) for unix domain sockets.")
robotsTxt = flag.Bool("serve-robots-txt", false, "serve a robots.txt file that disallows all robots")
policyFname = flag.String("policy-fname", "", "full path to anubis policy document (defaults to a sensible built-in policy)")
redirectDomains = flag.String("redirect-domains", "", "list of domains separated by commas which anubis is allowed to redirect to. Leaving this unset allows any domain.")
slogLevel = flag.String("slog-level", "INFO", "logging level (see https://pkg.go.dev/log/slog#hdr-Levels)")
target = flag.String("target", "http://localhost:3923", "target to reverse proxy to")
target = flag.String("target", "http://localhost:3923", "target to reverse proxy to, set to an empty string to disable proxying when only using auth request")
healthcheck = flag.Bool("healthcheck", false, "run a health check against Anubis")
useRemoteAddress = flag.Bool("use-remote-address", false, "read the client's IP address from the network request, useful for debugging and running Anubis on bare metal")
debugBenchmarkJS = flag.Bool("debug-benchmark-js", false, "respond to every request with a challenge for benchmarking hashrate")
ogPassthrough = flag.Bool("og-passthrough", false, "enable Open Graph tag passthrough")
ogPassthrough = flag.Bool("og-passthrough", true, "enable Open Graph tag passthrough")
ogTimeToLive = flag.Duration("og-expiry-time", 24*time.Hour, "Open Graph tag cache expiration time")
ogCacheConsiderHost = flag.Bool("og-cache-consider-host", false, "enable or disable the use of the host in the Open Graph tag cache")
extractResources = flag.String("extract-resources", "", "if set, extract the static resources to the specified folder")
webmasterEmail = flag.String("webmaster-email", "", "if set, displays webmaster's email on the reject page for appeals")
)
@@ -75,7 +79,7 @@ func keyFromHex(value string) (ed25519.PrivateKey, error) {
}
func doHealthCheck() error {
resp, err := http.Get("http://localhost" + *metricsBind + "/metrics")
resp, err := http.Get("http://localhost" + *metricsBind + anubis.BasePrefix + "/metrics")
if err != nil {
return fmt.Errorf("failed to fetch metrics: %w", err)
}
@@ -118,7 +122,10 @@ func setupListener(network string, address string) (net.Listener, string) {
err = os.Chmod(address, os.FileMode(mode))
if err != nil {
listener.Close()
err := listener.Close()
if err != nil {
log.Printf("failed to close listener: %v", err)
}
log.Fatal(fmt.Errorf("could not change socket mode: %w", err))
}
}
@@ -174,14 +181,10 @@ func main() {
internal.InitSlog(*slogLevel)
if *healthcheck {
if err := doHealthCheck(); err != nil {
if *extractResources != "" {
if err := extractEmbedFS(data.BotPolicies, ".", *extractResources); err != nil {
log.Fatal(err)
}
return
}
if *extractResources != "" {
if err := extractEmbedFS(web.Static, "static", *extractResources); err != nil {
log.Fatal(err)
}
@@ -189,12 +192,17 @@ func main() {
return
}
rp, err := makeReverseProxy(*target)
if err != nil {
log.Fatalf("can't make reverse proxy: %v", err)
var rp http.Handler
// when using anubis via systemd and environment variables, it is not possible to set target to an empty string, only to a space
if strings.TrimSpace(*target) != "" {
var err error
rp, err = makeReverseProxy(*target)
if err != nil {
log.Fatalf("can't make reverse proxy: %v", err)
}
}
policy, err := libanubis.LoadPoliciesOrDefault(*policyFname, uint32(*challengeDifficulty))
policy, err := libanubis.LoadPoliciesOrDefault(*policyFname, *challengeDifficulty)
if err != nil {
log.Fatalf("can't parse policy file: %v", err)
}
@@ -205,24 +213,24 @@ func main() {
continue
}
hash, err := rule.Hash()
if err != nil {
log.Fatalf("can't calculate checksum of rule %s: %v", rule.Name, err)
}
hash := rule.Hash()
fmt.Printf("* %s: %s\n", rule.Name, hash)
}
fmt.Println()
// replace the bot policy rules with a single rule that always benchmarks
if *debugBenchmarkJS {
userAgent := regexp.MustCompile(".")
policy.Bots = []botPolicy.Bot{{
Name: "",
UserAgent: userAgent,
Action: config.RuleBenchmark,
Name: "",
Rules: botPolicy.NewHeaderExistsChecker("User-Agent"),
Action: config.RuleBenchmark,
}}
}
if *basePrefix != "" && !strings.HasPrefix(*basePrefix, "/") {
log.Fatalf("[misconfiguration] base-prefix must start with a slash, eg: /%s", *basePrefix)
} else if strings.HasSuffix(*basePrefix, "/") {
log.Fatalf("[misconfiguration] base-prefix must not end with a slash")
}
var priv ed25519.PrivateKey
if *ed25519PrivateKeyHex != "" && *ed25519PrivateKeyHexFile != "" {
@@ -233,12 +241,12 @@ func main() {
log.Fatalf("failed to parse and validate ED25519_PRIVATE_KEY_HEX: %v", err)
}
} else if *ed25519PrivateKeyHexFile != "" {
hex, err := os.ReadFile(*ed25519PrivateKeyHexFile)
hexFile, err := os.ReadFile(*ed25519PrivateKeyHexFile)
if err != nil {
log.Fatalf("failed to read ED25519_PRIVATE_KEY_HEX_FILE %s: %v", *ed25519PrivateKeyHexFile, err)
}
priv, err = keyFromHex(string(bytes.TrimSpace(hex)))
priv, err = keyFromHex(string(bytes.TrimSpace(hexFile)))
if err != nil {
log.Fatalf("failed to parse and validate content of ED25519_PRIVATE_KEY_HEX_FILE: %v", err)
}
@@ -251,17 +259,35 @@ func main() {
slog.Warn("generating random key, Anubis will have strange behavior when multiple instances are behind the same load balancer target, for more information: see https://anubis.techaro.lol/docs/admin/installation#key-generation")
}
var redirectDomainsList []string
if *redirectDomains != "" {
domains := strings.Split(*redirectDomains, ",")
for _, domain := range domains {
_, err = url.Parse(domain)
if err != nil {
log.Fatalf("cannot parse redirect-domain %q: %s", domain, err.Error())
}
redirectDomainsList = append(redirectDomainsList, strings.TrimSpace(domain))
}
} else {
slog.Warn("REDIRECT_DOMAINS is not set, Anubis will only redirect to the same domain a request is coming from, see https://anubis.techaro.lol/docs/admin/configuration/redirect-domains")
}
s, err := libanubis.New(libanubis.Options{
Next: rp,
Policy: policy,
ServeRobotsTXT: *robotsTxt,
PrivateKey: priv,
CookieDomain: *cookieDomain,
CookiePartitioned: *cookiePartitioned,
OGPassthrough: *ogPassthrough,
OGTimeToLive: *ogTimeToLive,
Target: *target,
WebmasterEmail: *webmasterEmail,
BasePrefix: *basePrefix,
Next: rp,
Policy: policy,
ServeRobotsTXT: *robotsTxt,
PrivateKey: priv,
CookieDomain: *cookieDomain,
CookieExpiration: *cookieExpiration,
CookiePartitioned: *cookiePartitioned,
OGPassthrough: *ogPassthrough,
OGTimeToLive: *ogTimeToLive,
RedirectDomains: redirectDomainsList,
Target: *target,
WebmasterEmail: *webmasterEmail,
OGCacheConsidersHost: *ogCacheConsiderHost,
})
if err != nil {
log.Fatalf("can't construct libanubis.Server: %v", err)
@@ -276,13 +302,13 @@ func main() {
wg.Add(1)
go metricsServer(ctx, wg.Done)
}
go startDecayMapCleanup(ctx, s)
var h http.Handler
h = s
h = internal.RemoteXRealIP(*useRemoteAddress, *bindNetwork, h)
h = internal.XForwardedForToXRealIP(h)
h = internal.XForwardedForUpdate(h)
srv := http.Server{Handler: h}
listener, listenerUrl := setupListener(*bindNetwork, *bind)
@@ -297,6 +323,8 @@ func main() {
"debug-benchmark-js", *debugBenchmarkJS,
"og-passthrough", *ogPassthrough,
"og-expiry-time", *ogTimeToLive,
"base-prefix", *basePrefix,
"cookie-expiration-time", *cookieExpiration,
)
go func() {
@@ -318,12 +346,20 @@ func metricsServer(ctx context.Context, done func()) {
defer done()
mux := http.NewServeMux()
mux.Handle("/metrics", promhttp.Handler())
mux.Handle(anubis.BasePrefix+"/metrics", promhttp.Handler())
srv := http.Server{Handler: mux}
listener, metricsUrl := setupListener(*metricsBindNetwork, *metricsBind)
slog.Debug("listening for metrics", "url", metricsUrl)
if *healthcheck {
log.Println("running healthcheck")
if err := doHealthCheck(); err != nil {
log.Fatal(err)
}
return
}
go func() {
<-ctx.Done()
c, cancel := context.WithTimeout(context.Background(), 5*time.Second)
@@ -349,7 +385,7 @@ func extractEmbedFS(fsys embed.FS, root string, destDir string) error {
return err
}
destPath := filepath.Join(destDir, relPath)
destPath := filepath.Join(destDir, root, relPath)
if d.IsDir() {
return os.MkdirAll(destPath, 0o700)

View File

@@ -123,15 +123,15 @@ func parseImageList(imageList string) ([]image, error) {
// reg.xeiaso.net/techaro/anubis:latest
// repository: reg.xeiaso.net/techaro/anubis
// tag: latest
parts := strings.SplitN(img, ":", 2)
index := strings.LastIndex(img, ":")
result = append(result, image{
repository: parts[0],
tag: parts[1],
repository: img[:index],
tag: img[index+1:],
})
}
if len(result) == 0 {
return nil, fmt.Errorf("no images provided, bad flags??")
return nil, fmt.Errorf("no images provided, bad flags")
}
return result, nil

View File

@@ -0,0 +1,7 @@
# By Aibrew: https://github.com/TecharoHQ/anubis/discussions/261#discussioncomment-12821065
- name: gitea-feed-atom
action: ALLOW
path_regex: ^/[.A-Za-z0-9_-]{1,256}?[./A-Za-z0-9_-]*\.atom$
- name: gitea-feed-rss
action: ALLOW
path_regex: ^/[.A-Za-z0-9_-]{1,256}?[./A-Za-z0-9_-]*\.rss$

View File

@@ -1,396 +1,47 @@
{
"bots": [
{
"name": "amazonbot",
"user_agent_regex": "Amazonbot",
"action": "DENY"
"import": "(data)/bots/ai-robots-txt.yaml"
},
{
"name": "googlebot",
"user_agent_regex": "\\+http\\://www\\.google\\.com/bot\\.html",
"action": "ALLOW",
"remote_addresses": [
"2001:4860:4801:10::/64",
"2001:4860:4801:11::/64",
"2001:4860:4801:12::/64",
"2001:4860:4801:13::/64",
"2001:4860:4801:14::/64",
"2001:4860:4801:15::/64",
"2001:4860:4801:16::/64",
"2001:4860:4801:17::/64",
"2001:4860:4801:18::/64",
"2001:4860:4801:19::/64",
"2001:4860:4801:1a::/64",
"2001:4860:4801:1b::/64",
"2001:4860:4801:1c::/64",
"2001:4860:4801:1d::/64",
"2001:4860:4801:1e::/64",
"2001:4860:4801:1f::/64",
"2001:4860:4801:20::/64",
"2001:4860:4801:21::/64",
"2001:4860:4801:22::/64",
"2001:4860:4801:23::/64",
"2001:4860:4801:24::/64",
"2001:4860:4801:25::/64",
"2001:4860:4801:26::/64",
"2001:4860:4801:27::/64",
"2001:4860:4801:28::/64",
"2001:4860:4801:29::/64",
"2001:4860:4801:2::/64",
"2001:4860:4801:2a::/64",
"2001:4860:4801:2b::/64",
"2001:4860:4801:2c::/64",
"2001:4860:4801:2d::/64",
"2001:4860:4801:2e::/64",
"2001:4860:4801:2f::/64",
"2001:4860:4801:31::/64",
"2001:4860:4801:32::/64",
"2001:4860:4801:33::/64",
"2001:4860:4801:34::/64",
"2001:4860:4801:35::/64",
"2001:4860:4801:36::/64",
"2001:4860:4801:37::/64",
"2001:4860:4801:38::/64",
"2001:4860:4801:39::/64",
"2001:4860:4801:3a::/64",
"2001:4860:4801:3b::/64",
"2001:4860:4801:3c::/64",
"2001:4860:4801:3d::/64",
"2001:4860:4801:3e::/64",
"2001:4860:4801:40::/64",
"2001:4860:4801:41::/64",
"2001:4860:4801:42::/64",
"2001:4860:4801:43::/64",
"2001:4860:4801:44::/64",
"2001:4860:4801:45::/64",
"2001:4860:4801:46::/64",
"2001:4860:4801:47::/64",
"2001:4860:4801:48::/64",
"2001:4860:4801:49::/64",
"2001:4860:4801:4a::/64",
"2001:4860:4801:4b::/64",
"2001:4860:4801:4c::/64",
"2001:4860:4801:50::/64",
"2001:4860:4801:51::/64",
"2001:4860:4801:52::/64",
"2001:4860:4801:53::/64",
"2001:4860:4801:54::/64",
"2001:4860:4801:55::/64",
"2001:4860:4801:56::/64",
"2001:4860:4801:60::/64",
"2001:4860:4801:61::/64",
"2001:4860:4801:62::/64",
"2001:4860:4801:63::/64",
"2001:4860:4801:64::/64",
"2001:4860:4801:65::/64",
"2001:4860:4801:66::/64",
"2001:4860:4801:67::/64",
"2001:4860:4801:68::/64",
"2001:4860:4801:69::/64",
"2001:4860:4801:6a::/64",
"2001:4860:4801:6b::/64",
"2001:4860:4801:6c::/64",
"2001:4860:4801:6d::/64",
"2001:4860:4801:6e::/64",
"2001:4860:4801:6f::/64",
"2001:4860:4801:70::/64",
"2001:4860:4801:71::/64",
"2001:4860:4801:72::/64",
"2001:4860:4801:73::/64",
"2001:4860:4801:74::/64",
"2001:4860:4801:75::/64",
"2001:4860:4801:76::/64",
"2001:4860:4801:77::/64",
"2001:4860:4801:78::/64",
"2001:4860:4801:79::/64",
"2001:4860:4801:80::/64",
"2001:4860:4801:81::/64",
"2001:4860:4801:82::/64",
"2001:4860:4801:83::/64",
"2001:4860:4801:84::/64",
"2001:4860:4801:85::/64",
"2001:4860:4801:86::/64",
"2001:4860:4801:87::/64",
"2001:4860:4801:88::/64",
"2001:4860:4801:90::/64",
"2001:4860:4801:91::/64",
"2001:4860:4801:92::/64",
"2001:4860:4801:93::/64",
"2001:4860:4801:94::/64",
"2001:4860:4801:95::/64",
"2001:4860:4801:96::/64",
"2001:4860:4801:a0::/64",
"2001:4860:4801:a1::/64",
"2001:4860:4801:a2::/64",
"2001:4860:4801:a3::/64",
"2001:4860:4801:a4::/64",
"2001:4860:4801:a5::/64",
"2001:4860:4801:c::/64",
"2001:4860:4801:f::/64",
"192.178.5.0/27",
"192.178.6.0/27",
"192.178.6.128/27",
"192.178.6.160/27",
"192.178.6.192/27",
"192.178.6.32/27",
"192.178.6.64/27",
"192.178.6.96/27",
"34.100.182.96/28",
"34.101.50.144/28",
"34.118.254.0/28",
"34.118.66.0/28",
"34.126.178.96/28",
"34.146.150.144/28",
"34.147.110.144/28",
"34.151.74.144/28",
"34.152.50.64/28",
"34.154.114.144/28",
"34.155.98.32/28",
"34.165.18.176/28",
"34.175.160.64/28",
"34.176.130.16/28",
"34.22.85.0/27",
"34.64.82.64/28",
"34.65.242.112/28",
"34.80.50.80/28",
"34.88.194.0/28",
"34.89.10.80/28",
"34.89.198.80/28",
"34.96.162.48/28",
"35.247.243.240/28",
"66.249.64.0/27",
"66.249.64.128/27",
"66.249.64.160/27",
"66.249.64.224/27",
"66.249.64.32/27",
"66.249.64.64/27",
"66.249.64.96/27",
"66.249.65.0/27",
"66.249.65.128/27",
"66.249.65.160/27",
"66.249.65.192/27",
"66.249.65.224/27",
"66.249.65.32/27",
"66.249.65.64/27",
"66.249.65.96/27",
"66.249.66.0/27",
"66.249.66.128/27",
"66.249.66.160/27",
"66.249.66.192/27",
"66.249.66.224/27",
"66.249.66.32/27",
"66.249.66.64/27",
"66.249.66.96/27",
"66.249.68.0/27",
"66.249.68.128/27",
"66.249.68.32/27",
"66.249.68.64/27",
"66.249.68.96/27",
"66.249.69.0/27",
"66.249.69.128/27",
"66.249.69.160/27",
"66.249.69.192/27",
"66.249.69.224/27",
"66.249.69.32/27",
"66.249.69.64/27",
"66.249.69.96/27",
"66.249.70.0/27",
"66.249.70.128/27",
"66.249.70.160/27",
"66.249.70.192/27",
"66.249.70.224/27",
"66.249.70.32/27",
"66.249.70.64/27",
"66.249.70.96/27",
"66.249.71.0/27",
"66.249.71.128/27",
"66.249.71.160/27",
"66.249.71.192/27",
"66.249.71.224/27",
"66.249.71.32/27",
"66.249.71.64/27",
"66.249.71.96/27",
"66.249.72.0/27",
"66.249.72.128/27",
"66.249.72.160/27",
"66.249.72.192/27",
"66.249.72.224/27",
"66.249.72.32/27",
"66.249.72.64/27",
"66.249.72.96/27",
"66.249.73.0/27",
"66.249.73.128/27",
"66.249.73.160/27",
"66.249.73.192/27",
"66.249.73.224/27",
"66.249.73.32/27",
"66.249.73.64/27",
"66.249.73.96/27",
"66.249.74.0/27",
"66.249.74.128/27",
"66.249.74.160/27",
"66.249.74.192/27",
"66.249.74.32/27",
"66.249.74.64/27",
"66.249.74.96/27",
"66.249.75.0/27",
"66.249.75.128/27",
"66.249.75.160/27",
"66.249.75.192/27",
"66.249.75.224/27",
"66.249.75.32/27",
"66.249.75.64/27",
"66.249.75.96/27",
"66.249.76.0/27",
"66.249.76.128/27",
"66.249.76.160/27",
"66.249.76.192/27",
"66.249.76.224/27",
"66.249.76.32/27",
"66.249.76.64/27",
"66.249.76.96/27",
"66.249.77.0/27",
"66.249.77.128/27",
"66.249.77.160/27",
"66.249.77.192/27",
"66.249.77.224/27",
"66.249.77.32/27",
"66.249.77.64/27",
"66.249.77.96/27",
"66.249.78.0/27",
"66.249.78.32/27",
"66.249.79.0/27",
"66.249.79.128/27",
"66.249.79.160/27",
"66.249.79.192/27",
"66.249.79.224/27",
"66.249.79.32/27",
"66.249.79.64/27",
"66.249.79.96/27"
]
"import": "(data)/bots/cloudflare-workers.yaml"
},
{
"name": "bingbot",
"user_agent_regex": "\\+http\\://www\\.bing\\.com/bingbot\\.htm",
"action": "ALLOW",
"remote_addresses": [
"157.55.39.0/24",
"207.46.13.0/24",
"40.77.167.0/24",
"13.66.139.0/24",
"13.66.144.0/24",
"52.167.144.0/24",
"13.67.10.16/28",
"13.69.66.240/28",
"13.71.172.224/28",
"139.217.52.0/28",
"191.233.204.224/28",
"20.36.108.32/28",
"20.43.120.16/28",
"40.79.131.208/28",
"40.79.186.176/28",
"52.231.148.0/28",
"20.79.107.240/28",
"51.105.67.0/28",
"20.125.163.80/28",
"40.77.188.0/22",
"65.55.210.0/24",
"199.30.24.0/23",
"40.77.202.0/24",
"40.77.139.0/25",
"20.74.197.0/28",
"20.15.133.160/27",
"40.77.177.0/24",
"40.77.178.0/23"
]
"import": "(data)/bots/headless-browsers.yaml"
},
{
"name": "qwantbot",
"user_agent_regex": "\\+https\\://help\\.qwant\\.com/bot/",
"action": "ALLOW",
"remote_addresses": [
"91.242.162.0/24"
]
"import": "(data)/bots/us-ai-scraper.yaml"
},
{
"name": "kagibot",
"user_agent_regex": "\\+https\\://kagi\\.com/bot",
"action": "ALLOW",
"remote_addresses": [
"216.18.205.234/32",
"35.212.27.76/32",
"104.254.65.50/32",
"209.151.156.194/32"
]
"import": "(data)/crawlers/googlebot.yaml"
},
{
"name": "marginalia",
"user_agent_regex": "search\\.marginalia\\.nu",
"action": "ALLOW",
"remote_addresses": [
"193.183.0.162/31",
"193.183.0.164/30",
"193.183.0.168/30",
"193.183.0.172/31",
"193.183.0.174/32"
]
"import": "(data)/crawlers/bingbot.yaml"
},
{
"name": "mojeekbot",
"user_agent_regex": "http\\://www\\.mojeek\\.com/bot\\.html",
"action": "ALLOW",
"remote_addresses": [
"5.102.173.71/32"
]
"import": "(data)/crawlers/duckduckbot.yaml"
},
{
"name": "us-artificial-intelligence-scraper",
"user_agent_regex": "\\+https\\://github\\.com/US-Artificial-Intelligence/scraper",
"action": "DENY"
"import": "(data)/crawlers/qwantbot.yaml"
},
{
"name": "well-known",
"path_regex": "^/.well-known/.*$",
"action": "ALLOW"
"import": "(data)/crawlers/internet-archive.yaml"
},
{
"name": "favicon",
"path_regex": "^/favicon.ico$",
"action": "ALLOW"
"import": "(data)/crawlers/kagibot.yaml"
},
{
"name": "robots-txt",
"path_regex": "^/robots.txt$",
"action": "ALLOW"
"import": "(data)/crawlers/marginalia.yaml"
},
{
"name": "lightpanda",
"user_agent_regex": "^Lightpanda/.*$",
"action": "DENY"
"import": "(data)/crawlers/mojeekbot.yaml"
},
{
"name": "headless-chrome",
"user_agent_regex": "HeadlessChrome",
"action": "DENY"
},
{
"name": "headless-chromium",
"user_agent_regex": "HeadlessChromium",
"action": "DENY"
},
{
"name": "generic-bot-catchall",
"user_agent_regex": "(?i:bot|crawler)",
"action": "CHALLENGE",
"challenge": {
"difficulty": 16,
"report_as": 4,
"algorithm": "slow"
}
"import": "(data)/common/keep-internet-working.yaml"
},
{
"name": "generic-browser",
"user_agent_regex": "Mozilla",
"user_agent_regex": "Mozilla|Opera",
"action": "CHALLENGE"
}
],

data/botPolicies.yaml
View File

@@ -0,0 +1,58 @@
## Anubis has the ability to let you import snippets of configuration into the main
## configuration file. This allows you to break up your config into smaller parts
## that get logically assembled into one big file.
##
## Of note, a bot rule can either have inline bot configuration or import a
## bot config snippet. You cannot do both in a single bot rule.
##
## Import paths can either be prefixed with (data) to import from the common/shared
## rules in the data folder in the Anubis source tree or will point to absolute/relative
## paths in your filesystem. If you don't have access to the Anubis source tree, check
## /usr/share/docs/anubis/data or in the tarball you extracted Anubis from.
bots:
# Pathological bots to deny
- # This correlates to data/bots/ai-robots-txt.yaml in the source tree
import: (data)/bots/ai-robots-txt.yaml
- import: (data)/bots/cloudflare-workers.yaml
- import: (data)/bots/headless-browsers.yaml
- import: (data)/bots/us-ai-scraper.yaml
# Search engines to allow
- import: (data)/crawlers/googlebot.yaml
- import: (data)/crawlers/bingbot.yaml
- import: (data)/crawlers/duckduckbot.yaml
- import: (data)/crawlers/qwantbot.yaml
- import: (data)/crawlers/internet-archive.yaml
- import: (data)/crawlers/kagibot.yaml
- import: (data)/crawlers/marginalia.yaml
- import: (data)/crawlers/mojeekbot.yaml
# Allow common "keeping the internet working" routes (well-known, favicon, robots.txt)
- import: (data)/common/keep-internet-working.yaml
# # Punish any bot with "bot" in the user-agent string
# # This is known to have a high false-positive rate, use at your own risk
# - name: generic-bot-catchall
# user_agent_regex: (?i:bot|crawler)
# action: CHALLENGE
# challenge:
# difficulty: 16 # impossible
# report_as: 4 # lie to the operator
# algorithm: slow # intentionally waste CPU cycles and time
# Generic catchall rule
- name: generic-browser
user_agent_regex: >-
Mozilla|Opera
action: CHALLENGE
dnsbl: false
# By default, send HTTP 200 back to clients that either get issued a challenge
# or a denial. This seems weird, but this is load-bearing due to the fact that
# the most aggressive scraper bots seem to really really want an HTTP 200 and
# will stop sending requests once they get it.
status_codes:
CHALLENGE: 200
DENY: 200

View File

@@ -0,0 +1,4 @@
- name: "ai-robots-txt"
user_agent_regex: >-
AI2Bot|Ai2Bot-Dolma|aiHitBot|Amazonbot|anthropic-ai|Applebot|Applebot-Extended|Brightbot 1.0|Bytespider|CCBot|ChatGPT-User|Claude-Web|ClaudeBot|cohere-ai|cohere-training-data-crawler|Cotoyogi|Crawlspace|Diffbot|DuckAssistBot|FacebookBot|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Google-Extended|GoogleOther|GoogleOther-Image|GoogleOther-Video|GPTBot|iaskspider/2.0|ICC-Crawler|ImagesiftBot|img2dataset|imgproxy|ISSCyberRiskCrawler|Kangaroo Bot|meta-externalagent|Meta-ExternalAgent|meta-externalfetcher|Meta-ExternalFetcher|NovaAct|OAI-SearchBot|omgili|omgilibot|Operator|PanguBot|Perplexity-User|PerplexityBot|PetalBot|Scrapy|SemrushBot-OCOB|SemrushBot-SWA|Sidetrade indexer bot|TikTokSpider|Timpibot|VelenPublicWebCrawler|Webzio-Extended|YouBot
action: DENY

View File

@@ -0,0 +1,4 @@
- name: cloudflare-workers
headers_regex:
CF-Worker: .*
action: DENY

View File

@@ -0,0 +1,9 @@
- name: lightpanda
user_agent_regex: ^LightPanda/.*$
action: DENY
- name: headless-chrome
user_agent_regex: HeadlessChrome
action: DENY
- name: headless-chromium
user_agent_regex: HeadlessChromium
action: DENY

View File

@@ -0,0 +1,3 @@
- name: us-artificial-intelligence-scraper
user_agent_regex: \+https\://github\.com/US-Artificial-Intelligence/scraper
action: DENY

View File

@@ -0,0 +1,15 @@
- name: ipv4-rfc-1918
action: ALLOW
remote_addresses:
- 10.0.0.0/8
- 172.16.0.0/12
- 192.168.0.0/16
- 100.64.0.0/10
- name: ipv6-ula
action: ALLOW
remote_addresses:
- fc00::/7
- name: ipv6-link-local
action: ALLOW
remote_addresses:
- fe80::/10

View File

@@ -0,0 +1,10 @@
# Common "keeping the internet working" routes
- name: well-known
path_regex: ^/.well-known/.*$
action: ALLOW
- name: favicon
path_regex: ^/favicon.ico$
action: ALLOW
- name: robots-txt
path_regex: ^/robots.txt$
action: ALLOW

View File

@@ -0,0 +1,34 @@
- name: bingbot
user_agent_regex: \+http\://www\.bing\.com/bingbot\.htm
action: ALLOW
# https://www.bing.com/toolbox/bingbot.json
remote_addresses: [
"157.55.39.0/24",
"207.46.13.0/24",
"40.77.167.0/24",
"13.66.139.0/24",
"13.66.144.0/24",
"52.167.144.0/24",
"13.67.10.16/28",
"13.69.66.240/28",
"13.71.172.224/28",
"139.217.52.0/28",
"191.233.204.224/28",
"20.36.108.32/28",
"20.43.120.16/28",
"40.79.131.208/28",
"40.79.186.176/28",
"52.231.148.0/28",
"20.79.107.240/28",
"51.105.67.0/28",
"20.125.163.80/28",
"40.77.188.0/22",
"65.55.210.0/24",
"199.30.24.0/23",
"40.77.202.0/24",
"40.77.139.0/25",
"20.74.197.0/28",
"20.15.133.160/27",
"40.77.177.0/24",
"40.77.178.0/23"
]

View File

@@ -0,0 +1,275 @@
- name: duckduckbot
user_agent_regex: DuckDuckBot/1\.1; \(\+http\://duckduckgo\.com/duckduckbot\.html\)
action: ALLOW
# https://duckduckgo.com/duckduckgo-help-pages/results/duckduckbot
remote_addresses: [
"57.152.72.128/32",
"51.8.253.152/32",
"40.80.242.63/32",
"20.12.141.99/32",
"20.49.136.28/32",
"51.116.131.221/32",
"51.107.40.209/32",
"20.40.133.240/32",
"20.50.168.91/32",
"51.120.48.122/32",
"20.193.45.113/32",
"40.76.173.151/32",
"40.76.163.7/32",
"20.185.79.47/32",
"52.142.26.175/32",
"20.185.79.15/32",
"52.142.24.149/32",
"40.76.162.208/32",
"40.76.163.23/32",
"40.76.162.191/32",
"40.76.162.247/32",
"40.88.21.235/32",
"20.191.45.212/32",
"52.146.59.12/32",
"52.146.59.156/32",
"52.146.59.154/32",
"52.146.58.236/32",
"20.62.224.44/32",
"51.104.180.53/32",
"51.104.180.47/32",
"51.104.180.26/32",
"51.104.146.225/32",
"51.104.146.235/32",
"20.73.202.147/32",
"20.73.132.240/32",
"20.71.12.143/32",
"20.56.197.58/32",
"20.56.197.63/32",
"20.43.150.93/32",
"20.43.150.85/32",
"20.44.222.1/32",
"40.89.243.175/32",
"13.89.106.77/32",
"52.143.242.6/32",
"52.143.241.111/32",
"52.154.60.82/32",
"20.197.209.11/32",
"20.197.209.27/32",
"20.226.133.105/32",
"191.234.216.4/32",
"191.234.216.178/32",
"20.53.92.211/32",
"20.53.91.2/32",
"20.207.99.197/32",
"20.207.97.190/32",
"40.81.250.205/32",
"40.64.106.11/32",
"40.64.105.247/32",
"20.72.242.93/32",
"20.99.255.235/32",
"20.113.3.121/32",
"52.224.16.221/32",
"52.224.21.53/32",
"52.224.20.204/32",
"52.224.21.19/32",
"52.224.20.249/32",
"52.224.20.203/32",
"52.224.20.190/32",
"52.224.16.229/32",
"52.224.21.20/32",
"52.146.63.80/32",
"52.224.20.227/32",
"52.224.20.193/32",
"52.190.37.160/32",
"52.224.21.23/32",
"52.224.20.223/32",
"52.224.20.181/32",
"52.224.21.49/32",
"52.224.21.55/32",
"52.224.21.61/32",
"52.224.19.152/32",
"52.224.20.186/32",
"52.224.21.27/32",
"52.224.21.51/32",
"52.224.20.174/32",
"52.224.21.4/32",
"51.104.164.109/32",
"51.104.167.71/32",
"51.104.160.177/32",
"51.104.162.149/32",
"51.104.167.95/32",
"51.104.167.54/32",
"51.104.166.111/32",
"51.104.167.88/32",
"51.104.161.32/32",
"51.104.163.250/32",
"51.104.164.189/32",
"51.104.167.19/32",
"51.104.160.167/32",
"51.104.167.110/32",
"20.191.44.119/32",
"51.104.167.104/32",
"20.191.44.234/32",
"51.104.164.215/32",
"51.104.167.52/32",
"20.191.44.22/32",
"51.104.167.87/32",
"51.104.167.96/32",
"20.191.44.16/32",
"51.104.167.61/32",
"51.104.164.147/32",
"20.50.48.159/32",
"40.114.182.172/32",
"20.50.50.130/32",
"20.50.50.163/32",
"20.50.50.46/32",
"40.114.182.153/32",
"20.50.50.118/32",
"20.50.49.55/32",
"20.50.49.25/32",
"40.114.183.251/32",
"20.50.50.123/32",
"20.50.49.237/32",
"20.50.48.192/32",
"20.50.50.134/32",
"51.138.90.233/32",
"40.114.183.196/32",
"20.50.50.146/32",
"40.114.183.88/32",
"20.50.50.145/32",
"20.50.50.121/32",
"20.50.49.40/32",
"51.138.90.206/32",
"40.114.182.45/32",
"51.138.90.161/32",
"20.50.49.0/32",
"40.119.232.215/32",
"104.43.55.167/32",
"40.119.232.251/32",
"40.119.232.50/32",
"40.119.232.146/32",
"40.119.232.218/32",
"104.43.54.127/32",
"104.43.55.117/32",
"104.43.55.116/32",
"104.43.55.166/32",
"52.154.169.50/32",
"52.154.171.70/32",
"52.154.170.229/32",
"52.154.170.113/32",
"52.154.171.44/32",
"52.154.172.2/32",
"52.143.244.81/32",
"52.154.171.87/32",
"52.154.171.250/32",
"52.154.170.28/32",
"52.154.170.122/32",
"52.143.243.117/32",
"52.143.247.235/32",
"52.154.171.235/32",
"52.154.171.196/32",
"52.154.171.0/32",
"52.154.170.243/32",
"52.154.170.26/32",
"52.154.169.200/32",
"52.154.170.96/32",
"52.154.170.88/32",
"52.154.171.150/32",
"52.154.171.205/32",
"52.154.170.117/32",
"52.154.170.209/32",
"191.235.202.48/32",
"191.233.3.202/32",
"191.235.201.214/32",
"191.233.3.197/32",
"191.235.202.38/32",
"20.53.78.144/32",
"20.193.24.10/32",
"20.53.78.236/32",
"20.53.78.138/32",
"20.53.78.123/32",
"20.53.78.106/32",
"20.193.27.215/32",
"20.193.25.197/32",
"20.193.12.126/32",
"20.193.24.251/32",
"20.204.242.101/32",
"20.207.72.113/32",
"20.204.242.19/32",
"20.219.45.67/32",
"20.207.72.11/32",
"20.219.45.190/32",
"20.204.243.55/32",
"20.204.241.148/32",
"20.207.72.110/32",
"20.204.240.172/32",
"20.207.72.21/32",
"20.204.246.81/32",
"20.207.107.181/32",
"20.204.246.254/32",
"20.219.43.246/32",
"52.149.25.43/32",
"52.149.61.51/32",
"52.149.58.139/32",
"52.149.60.38/32",
"52.148.165.38/32",
"52.143.95.162/32",
"52.149.56.151/32",
"52.149.30.45/32",
"52.149.58.173/32",
"52.143.95.204/32",
"52.149.28.83/32",
"52.149.58.69/32",
"52.148.161.87/32",
"52.149.58.27/32",
"52.149.28.18/32",
"20.79.226.26/32",
"20.79.239.66/32",
"20.79.238.198/32",
"20.113.14.159/32",
"20.75.144.152/32",
"20.43.172.120/32",
"20.53.134.160/32",
"20.201.15.208/32",
"20.93.28.24/32",
"20.61.34.40/32",
"52.242.224.168/32",
"20.80.129.80/32",
"20.195.108.47/32",
"4.195.133.120/32",
"4.228.76.163/32",
"4.182.131.108/32",
"4.209.224.56/32",
"108.141.83.74/32",
"4.213.46.14/32",
"172.169.17.165/32",
"51.8.71.117/32",
"20.3.1.178/32",
"52.149.56.151/32",
"52.149.30.45/32",
"52.149.58.173/32",
"52.143.95.204/32",
"52.149.28.83/32",
"52.149.58.69/32",
"52.148.161.87/32",
"52.149.58.27/32",
"52.149.28.18/32",
"20.79.226.26/32",
"20.79.239.66/32",
"20.79.238.198/32",
"20.113.14.159/32",
"20.75.144.152/32",
"20.43.172.120/32",
"20.53.134.160/32",
"20.201.15.208/32",
"20.93.28.24/32",
"20.61.34.40/32",
"52.242.224.168/32",
"20.80.129.80/32",
"20.195.108.47/32",
"4.195.133.120/32",
"4.228.76.163/32",
"4.182.131.108/32",
"4.209.224.56/32",
"108.141.83.74/32",
"4.213.46.14/32",
"172.169.17.165/32",
"51.8.71.117/32",
"20.3.1.178/32"
]

View File

@@ -0,0 +1,263 @@
- name: googlebot
user_agent_regex: \+http\://www\.google\.com/bot\.html
action: ALLOW
# https://developers.google.com/static/search/apis/ipranges/googlebot.json
remote_addresses: [
"2001:4860:4801:10::/64",
"2001:4860:4801:11::/64",
"2001:4860:4801:12::/64",
"2001:4860:4801:13::/64",
"2001:4860:4801:14::/64",
"2001:4860:4801:15::/64",
"2001:4860:4801:16::/64",
"2001:4860:4801:17::/64",
"2001:4860:4801:18::/64",
"2001:4860:4801:19::/64",
"2001:4860:4801:1a::/64",
"2001:4860:4801:1b::/64",
"2001:4860:4801:1c::/64",
"2001:4860:4801:1d::/64",
"2001:4860:4801:1e::/64",
"2001:4860:4801:1f::/64",
"2001:4860:4801:20::/64",
"2001:4860:4801:21::/64",
"2001:4860:4801:22::/64",
"2001:4860:4801:23::/64",
"2001:4860:4801:24::/64",
"2001:4860:4801:25::/64",
"2001:4860:4801:26::/64",
"2001:4860:4801:27::/64",
"2001:4860:4801:28::/64",
"2001:4860:4801:29::/64",
"2001:4860:4801:2::/64",
"2001:4860:4801:2a::/64",
"2001:4860:4801:2b::/64",
"2001:4860:4801:2c::/64",
"2001:4860:4801:2d::/64",
"2001:4860:4801:2e::/64",
"2001:4860:4801:2f::/64",
"2001:4860:4801:31::/64",
"2001:4860:4801:32::/64",
"2001:4860:4801:33::/64",
"2001:4860:4801:34::/64",
"2001:4860:4801:35::/64",
"2001:4860:4801:36::/64",
"2001:4860:4801:37::/64",
"2001:4860:4801:38::/64",
"2001:4860:4801:39::/64",
"2001:4860:4801:3a::/64",
"2001:4860:4801:3b::/64",
"2001:4860:4801:3c::/64",
"2001:4860:4801:3d::/64",
"2001:4860:4801:3e::/64",
"2001:4860:4801:40::/64",
"2001:4860:4801:41::/64",
"2001:4860:4801:42::/64",
"2001:4860:4801:43::/64",
"2001:4860:4801:44::/64",
"2001:4860:4801:45::/64",
"2001:4860:4801:46::/64",
"2001:4860:4801:47::/64",
"2001:4860:4801:48::/64",
"2001:4860:4801:49::/64",
"2001:4860:4801:4a::/64",
"2001:4860:4801:4b::/64",
"2001:4860:4801:4c::/64",
"2001:4860:4801:50::/64",
"2001:4860:4801:51::/64",
"2001:4860:4801:52::/64",
"2001:4860:4801:53::/64",
"2001:4860:4801:54::/64",
"2001:4860:4801:55::/64",
"2001:4860:4801:56::/64",
"2001:4860:4801:60::/64",
"2001:4860:4801:61::/64",
"2001:4860:4801:62::/64",
"2001:4860:4801:63::/64",
"2001:4860:4801:64::/64",
"2001:4860:4801:65::/64",
"2001:4860:4801:66::/64",
"2001:4860:4801:67::/64",
"2001:4860:4801:68::/64",
"2001:4860:4801:69::/64",
"2001:4860:4801:6a::/64",
"2001:4860:4801:6b::/64",
"2001:4860:4801:6c::/64",
"2001:4860:4801:6d::/64",
"2001:4860:4801:6e::/64",
"2001:4860:4801:6f::/64",
"2001:4860:4801:70::/64",
"2001:4860:4801:71::/64",
"2001:4860:4801:72::/64",
"2001:4860:4801:73::/64",
"2001:4860:4801:74::/64",
"2001:4860:4801:75::/64",
"2001:4860:4801:76::/64",
"2001:4860:4801:77::/64",
"2001:4860:4801:78::/64",
"2001:4860:4801:79::/64",
"2001:4860:4801:80::/64",
"2001:4860:4801:81::/64",
"2001:4860:4801:82::/64",
"2001:4860:4801:83::/64",
"2001:4860:4801:84::/64",
"2001:4860:4801:85::/64",
"2001:4860:4801:86::/64",
"2001:4860:4801:87::/64",
"2001:4860:4801:88::/64",
"2001:4860:4801:90::/64",
"2001:4860:4801:91::/64",
"2001:4860:4801:92::/64",
"2001:4860:4801:93::/64",
"2001:4860:4801:94::/64",
"2001:4860:4801:95::/64",
"2001:4860:4801:96::/64",
"2001:4860:4801:a0::/64",
"2001:4860:4801:a1::/64",
"2001:4860:4801:a2::/64",
"2001:4860:4801:a3::/64",
"2001:4860:4801:a4::/64",
"2001:4860:4801:a5::/64",
"2001:4860:4801:c::/64",
"2001:4860:4801:f::/64",
"192.178.5.0/27",
"192.178.6.0/27",
"192.178.6.128/27",
"192.178.6.160/27",
"192.178.6.192/27",
"192.178.6.32/27",
"192.178.6.64/27",
"192.178.6.96/27",
"34.100.182.96/28",
"34.101.50.144/28",
"34.118.254.0/28",
"34.118.66.0/28",
"34.126.178.96/28",
"34.146.150.144/28",
"34.147.110.144/28",
"34.151.74.144/28",
"34.152.50.64/28",
"34.154.114.144/28",
"34.155.98.32/28",
"34.165.18.176/28",
"34.175.160.64/28",
"34.176.130.16/28",
"34.22.85.0/27",
"34.64.82.64/28",
"34.65.242.112/28",
"34.80.50.80/28",
"34.88.194.0/28",
"34.89.10.80/28",
"34.89.198.80/28",
"34.96.162.48/28",
"35.247.243.240/28",
"66.249.64.0/27",
"66.249.64.128/27",
"66.249.64.160/27",
"66.249.64.224/27",
"66.249.64.32/27",
"66.249.64.64/27",
"66.249.64.96/27",
"66.249.65.0/27",
"66.249.65.128/27",
"66.249.65.160/27",
"66.249.65.192/27",
"66.249.65.224/27",
"66.249.65.32/27",
"66.249.65.64/27",
"66.249.65.96/27",
"66.249.66.0/27",
"66.249.66.128/27",
"66.249.66.160/27",
"66.249.66.192/27",
"66.249.66.224/27",
"66.249.66.32/27",
"66.249.66.64/27",
"66.249.66.96/27",
"66.249.68.0/27",
"66.249.68.128/27",
"66.249.68.32/27",
"66.249.68.64/27",
"66.249.68.96/27",
"66.249.69.0/27",
"66.249.69.128/27",
"66.249.69.160/27",
"66.249.69.192/27",
"66.249.69.224/27",
"66.249.69.32/27",
"66.249.69.64/27",
"66.249.69.96/27",
"66.249.70.0/27",
"66.249.70.128/27",
"66.249.70.160/27",
"66.249.70.192/27",
"66.249.70.224/27",
"66.249.70.32/27",
"66.249.70.64/27",
"66.249.70.96/27",
"66.249.71.0/27",
"66.249.71.128/27",
"66.249.71.160/27",
"66.249.71.192/27",
"66.249.71.224/27",
"66.249.71.32/27",
"66.249.71.64/27",
"66.249.71.96/27",
"66.249.72.0/27",
"66.249.72.128/27",
"66.249.72.160/27",
"66.249.72.192/27",
"66.249.72.224/27",
"66.249.72.32/27",
"66.249.72.64/27",
"66.249.72.96/27",
"66.249.73.0/27",
"66.249.73.128/27",
"66.249.73.160/27",
"66.249.73.192/27",
"66.249.73.224/27",
"66.249.73.32/27",
"66.249.73.64/27",
"66.249.73.96/27",
"66.249.74.0/27",
"66.249.74.128/27",
"66.249.74.160/27",
"66.249.74.192/27",
"66.249.74.32/27",
"66.249.74.64/27",
"66.249.74.96/27",
"66.249.75.0/27",
"66.249.75.128/27",
"66.249.75.160/27",
"66.249.75.192/27",
"66.249.75.224/27",
"66.249.75.32/27",
"66.249.75.64/27",
"66.249.75.96/27",
"66.249.76.0/27",
"66.249.76.128/27",
"66.249.76.160/27",
"66.249.76.192/27",
"66.249.76.224/27",
"66.249.76.32/27",
"66.249.76.64/27",
"66.249.76.96/27",
"66.249.77.0/27",
"66.249.77.128/27",
"66.249.77.160/27",
"66.249.77.192/27",
"66.249.77.224/27",
"66.249.77.32/27",
"66.249.77.64/27",
"66.249.77.96/27",
"66.249.78.0/27",
"66.249.78.32/27",
"66.249.79.0/27",
"66.249.79.128/27",
"66.249.79.160/27",
"66.249.79.192/27",
"66.249.79.224/27",
"66.249.79.32/27",
"66.249.79.64/27",
"66.249.79.96/27"
]

View File

@@ -0,0 +1,8 @@
- name: internet-archive
action: ALLOW
# https://ipinfo.io/AS7941
remote_addresses: [
"207.241.224.0/20",
"208.70.24.0/21",
"2620:0:9c0::/48"
]

View File

@@ -0,0 +1,10 @@
- name: kagibot
user_agent_regex: \+https\://kagi\.com/bot
action: ALLOW
# https://kagi.com/bot
remote_addresses: [
"216.18.205.234/32",
"35.212.27.76/32",
"104.254.65.50/32",
"209.151.156.194/32"
]

View File

@@ -0,0 +1,11 @@
- name: marginalia
user_agent_regex: search\.marginalia\.nu
action: ALLOW
# Received directly over email
remote_addresses: [
"193.183.0.162/31",
"193.183.0.164/30",
"193.183.0.168/30",
"193.183.0.172/31",
"193.183.0.174/32"
]

View File

@@ -0,0 +1,5 @@
- name: mojeekbot
user_agent_regex: \+https\://www\.mojeek\.com/bot\.html
action: ALLOW
# https://www.mojeek.com/bot.html
remote_addresses: [ "5.102.173.71/32" ]

View File

@@ -0,0 +1,5 @@
- name: qwantbot
user_agent_regex: \+https\://help\.qwant\.com/bot/
action: ALLOW
# https://help.qwant.com/wp-content/uploads/sites/2/2025/01/qwantbot.json
remote_addresses: [ "91.242.162.0/24" ]

View File

@@ -3,6 +3,6 @@ package data
import "embed"
var (
//go:embed botPolicies.json
//go:embed botPolicies.yaml botPolicies.json apps bots common crawlers
BotPolicies embed.FS
)

View File

@@ -1,12 +0,0 @@
---
slug: first-blog-post
title: First Blog Post
authors: [slorber, yangshun]
tags: [hola, docusaurus]
---
Lorem ipsum dolor sit amet...
<!-- truncate -->
...consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet

View File

@@ -1,44 +0,0 @@
---
slug: long-blog-post
title: Long Blog Post
authors: yangshun
tags: [hello, docusaurus]
---
This is the summary of a very long blog post,
Use a `<!--` `truncate` `-->` comment to limit blog post size in the list view.
<!-- truncate -->
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet

View File

@@ -1,24 +0,0 @@
---
slug: mdx-blog-post
title: MDX Blog Post
authors: [slorber]
tags: [docusaurus]
---
Blog posts support [Docusaurus Markdown features](https://docusaurus.io/docs/markdown-features), such as [MDX](https://mdxjs.com/).
:::tip
Use the power of React to create interactive blog posts.
:::
{/* truncate */}
For example, use JSX to create an interactive button:
```js
<button onClick={() => alert('button clicked!')}>Click me!</button>
```
<button onClick={() => alert('button clicked!')}>Click me!</button>

Binary file not shown.


View File

@@ -1,29 +0,0 @@
---
slug: welcome
title: Welcome
authors: [slorber, yangshun]
tags: [facebook, hello, docusaurus]
---
[Docusaurus blogging features](https://docusaurus.io/docs/blog) are powered by the [blog plugin](https://docusaurus.io/docs/api/plugins/@docusaurus/plugin-content-blog).
Here are a few tips you might find useful.
<!-- truncate -->
Simply add Markdown files (or folders) to the `blog` directory.
Regular blog authors can be added to `authors.yml`.
The blog post date can be extracted from filenames, such as:
- `2019-05-30-welcome.md`
- `2019-05-30-welcome/index.md`
A blog post folder can be convenient to co-locate blog post images:
![Docusaurus Plushie](./docusaurus-plushie-banner.jpeg)
The blog supports tags as well!
**And if you don't want a blog**: just delete this directory, and use `blog: false` in your Docusaurus config.

View File

@@ -1,23 +0,0 @@
yangshun:
name: Yangshun Tay
title: Front End Engineer @ Facebook
url: https://github.com/yangshun
image_url: https://github.com/yangshun.png
page: true
socials:
x: yangshunz
github: yangshun
slorber:
name: Sébastien Lorber
title: Docusaurus maintainer
url: https://sebastienlorber.com
image_url: https://github.com/slorber.png
page:
# customize the url of the author page at /blog/authors/<permalink>
permalink: '/all-sebastien-lorber-articles'
socials:
x: sebastienlorber
linkedin: sebastienlorber
github: slorber
newsletter: https://thisweekinreact.com

View File

@@ -1,19 +0,0 @@
facebook:
label: Facebook
permalink: /facebook
description: Facebook tag description
hello:
label: Hello
permalink: /hello
description: Hello tag description
docusaurus:
label: Docusaurus
permalink: /docusaurus
description: Docusaurus tag description
hola:
label: Hola
permalink: /hola
description: Hola tag description

View File

@@ -11,37 +11,89 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
- Added support for native Debian, Red Hat, and tarball packaging strategies including installation and use directions.
- A prebaked tarball has been added, allowing distros to build Anubis like they could in v1.15.x.
- The placeholder Anubis mascot has been replaced with a design by [CELPHASE](https://bsky.app/profile/celphase.bsky.social).
- Added a periodic cleanup routine for the decaymap that removes expired entries, ensuring stale data is properly pruned.
## v1.17.1: Asahi sas Brutus: Echo 1
- Added customization of authorization cookie expiration time with `--cookie-expiration-time` flag or envvar
- Updated the `OG_PASSTHROUGH` to be true by default, thereby allowing OpenGraph tags to be passed through by default
- Added the ability to [customize Anubis' HTTP status codes](./admin/configuration/custom-status-codes.mdx) ([#355](https://github.com/TecharoHQ/anubis/issues/355))
## v1.17.0: Asahi sas Brutus
- Ensure regexes can't end in newlines ([#372](https://github.com/TecharoHQ/anubis/issues/372))
- Add documentation for default allow behavior (implicit rule)
- Enable [importing configuration snippets](./admin/configuration/import.mdx) ([#321](https://github.com/TecharoHQ/anubis/pull/321))
- Refactor check logic to be more generic and work on a Checker type
- Add more AI user agents based on the [ai.robots.txt](https://github.com/ai-robots-txt/ai.robots.txt) project
- Embedded challenge data in initial HTML response to improve performance
- Added support to use Nginx' `auth_request` directive with Anubis
- Added support for restricting the allowed redirect domains
- Whitelisted [DuckDuckBot](https://duckduckgo.com/duckduckgo-help-pages/results/duckduckbot/) in botPolicies
- Improvements to build scripts to make them less dependent on the build host
- Improved the OpenGraph error logging
- Added `Opera` to the `generic-browser` bot policy rule
- Added a FreeBSD rc.d script so Anubis can be run as a FreeBSD daemon
- Allow requests from the Internet Archive
- Added example nginx configuration to documentation
- Added example Apache configuration to the documentation [#277](https://github.com/TecharoHQ/anubis/issues/277)
- Move per-environment configuration details into their own pages
- Added support for running anubis behind a prefix (e.g. `/myapp`)
- Added headers support to bot policy rules
- Moved configuration file from JSON to YAML by default
- Added documentation on how to use Anubis with Traefik in Docker
- Improved error handling in some edge cases
- Disable `generic-bot-catchall` rule because of its high false positive rate in real-world scenarios
- Moved all CSS inline to the Xess package, changed colors to be CSS variables
- Set or append to `X-Forwarded-For` header unless the remote connects over a loopback address [#328](https://github.com/TecharoHQ/anubis/issues/328)
- Fixed mojeekbot user agent regex
- Added support for running anubis behind a base path (e.g. `/myapp`)
- Reduce Anubis' paranoia with user cookies ([#365](https://github.com/TecharoHQ/anubis/pull/365))
- Added support for Opengraph passthrough while using unix sockets
- The opengraph subsystem now passes the HTTP `HOST` header through to the origin
- Updated the `OG_PASSTHROUGH` to be true by default, thereby allowing OpenGraph tags to be passed through by default
## v1.16.0
Fordola rem Lupis
> I want to make them pay! All of them! Everyone who ever mocked or looked down on me -- I want the power to make them pay!
The following features are the "big ticket" items:
- Added support for native Debian, Red Hat, and tarball packaging strategies including installation and use directions
- A prebaked tarball has been added, allowing distros to build Anubis like they could in v1.15.x
- The placeholder Anubis mascot has been replaced with a design by [CELPHASE](https://bsky.app/profile/celphase.bsky.social)
- Verification page now shows hash rate and a progress bar for completion probability
- Added support for [OpenGraph tags](https://ogp.me/) when rendering the challenge page. This allows for social previews to be generated when sharing the challenge page on social media platforms ([#195](https://github.com/TecharoHQ/anubis/pull/195))
- Added support for passing the ed25519 signing key in a file with `-ed25519-private-key-hex-file` or `ED25519_PRIVATE_KEY_HEX_FILE`
Other small fixes have been made:
- Added a periodic cleanup routine for the decaymap that removes expired entries, ensuring stale data is properly pruned
- Added a no-store Cache-Control header to the challenge page
- Hide the directory listings for Anubis' internal static content
- Changed `--debug-x-real-ip-default` to `--use-remote-address`, getting the IP address from the request's socket address instead.
- Changed `--debug-x-real-ip-default` to `--use-remote-address`, getting the IP address from the request's socket address instead
- DroneBL lookups have been disabled by default
- Static asset builds are now done on demand instead of the results being committed to source control
- The Dockerfile has been removed as it is no longer in use
- Developer documentation has been added to the docs site
- Show more errors when some predictable challenge page errors happen ([#150](https://github.com/TecharoHQ/anubis/issues/150))
- Verification page now shows hash rate and a progress bar for completion probability.
- Added the `--debug-benchmark-js` flag for testing proof-of-work performance during development.
- Added the `--debug-benchmark-js` flag for testing proof-of-work performance during development
- Use `TrimSuffix` instead of `TrimRight` on containerbuild
- Fix the startup logs to correctly show the address and port the server is listening on
- Add [LibreJS](https://www.gnu.org/software/librejs/) banner to Anubis JavaScript to allow LibreJS users to run the challenge
- Added a wait with a continue button and an automatic continue after 30 seconds when you click "Why am I seeing this?"
- Fixed a typo in the challenge page title.
- Disabled running integration tests on Windows hosts due to its reliance on POSIX features (see [#133](https://github.com/TecharoHQ/anubis/pull/133#issuecomment-2764732309)).
- Added support for passing the ed25519 signing key in a file with `-ed25519-private-key-hex-file` or `ED25519_PRIVATE_KEY_HEX_FILE`.
- Fixed a typo in the challenge page title
- Disabled running integration tests on Windows hosts due to its reliance on POSIX features (see [#133](https://github.com/TecharoHQ/anubis/pull/133#issuecomment-2764732309))
- Fixed minor typos
- Added a Makefile to enable comfortable workflows for downstream packagers.
- Added a Makefile to enable comfortable workflows for downstream packagers
- Added `zizmor` for GitHub Actions static analysis
- Fixed most `zizmor` findings
- Enabled Dependabot
- Added an air config for autoreload support in development ([#195](https://github.com/TecharoHQ/anubis/pull/195))
- Added support for [OpenGraph tags](https://ogp.me/) when rendering the challenge page. This allows for social previews to be generated when sharing the challenge page on social media platforms ([#195](https://github.com/TecharoHQ/anubis/pull/195))
- Added an `--extract-resources` flag to extract static resources to a local folder.
- Add noindex flag to all Anubis pages ([#227](https://github.com/TecharoHQ/anubis/issues/227)).
- Added an `--extract-resources` flag to extract static resources to a local folder
- Add noindex flag to all Anubis pages ([#227](https://github.com/TecharoHQ/anubis/issues/227))
- Added `WEBMASTER_EMAIL` variable, if it is present then display that email address on error pages ([#235](https://github.com/TecharoHQ/anubis/pull/235), [#115](https://github.com/TecharoHQ/anubis/issues/115))
- Hash pinned all GitHub Actions
## v1.15.1
@@ -124,7 +176,7 @@ Livia sas Junius
[#21](https://github.com/TecharoHQ/anubis/pull/21)
- Don't overflow the image when browser windows are small (eg. on phones)
[#27](https://github.com/TecharoHQ/anubis/pull/27)
- Lower the default difficulty to 4 from 5
- Lower the default difficulty to 4 from 5
- Don't duplicate work across multiple threads [#36](https://github.com/TecharoHQ/anubis/pull/36)
- Documentation has been moved to https://anubis.techaro.lol/ with sources in docs/
- Removed several visible AI artifacts (e.g., 6 fingers) [#37](https://github.com/TecharoHQ/anubis/pull/37)
@@ -167,4 +219,4 @@ Livia sas Junius
([fd6903a](https://github.com/TecharoHQ/anubis/commit/fd6903aeed315b8fddee32890d7458a9271e4798)).
- Footer links on the check page now point to Techaro's brand
([4ebccb1](https://github.com/TecharoHQ/anubis/commit/4ebccb197ec20d024328d7f92cad39bbbe4d6359))
- Anubis was imported from [Xe/x](https://github.com/Xe/x).
- Anubis was imported from [Xe/x](https://github.com/Xe/x)

View File

@@ -0,0 +1,19 @@
# Custom status codes for Anubis errors
Out of the box, Anubis replies with `HTTP 200` for challenge and denial pages. This is intentional: the most aggressive AI scrapers will hammer a page over and over until they get a 200 response, so returning one up front makes them stop. If this behavior is not what you want, Anubis lets you customize which HTTP status codes are returned for challenge and denial pages.
This is configured in the `status_codes` block of your [bot policy file](../policies.mdx):
```yaml
status_codes:
CHALLENGE: 200
DENY: 200
```
To match Cloudflare's behavior, use a configuration like this:
```yaml
status_codes:
CHALLENGE: 403
DENY: 403
```

View File

@@ -0,0 +1,147 @@
# Importing configuration rules
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
Anubis lets you import snippets of configuration into the main configuration file. This allows you to break your config up into smaller parts that are logically assembled into one big file.
For example:
<Tabs>
<TabItem value="json" label="JSON">
```json
{
"bots": [
{
"import": "(data)/bots/ai-robots-txt.yaml"
},
{
"import": "(data)/bots/cloudflare-workers.yaml"
}
]
}
```
</TabItem>
<TabItem value="yaml" label="YAML" default>
```yaml
bots:
# Pathological bots to deny
- # This correlates to data/bots/ai-robots-txt.yaml in the source tree
import: (data)/bots/ai-robots-txt.yaml
- import: (data)/bots/cloudflare-workers.yaml
```
</TabItem>
</Tabs>
Of note, a bot rule can either have inline bot configuration or import a bot config snippet. You cannot do both in a single bot rule.
<Tabs>
<TabItem value="json" label="JSON">
```json
{
"bots": [
{
"import": "(data)/bots/ai-robots-txt.yaml",
"name": "generic-browser",
"user_agent_regex": "Mozilla|Opera\n",
"action": "CHALLENGE"
}
]
}
```
</TabItem>
<TabItem value="yaml" label="YAML" default>
```yaml
bots:
- import: (data)/bots/ai-robots-txt.yaml
name: generic-browser
user_agent_regex: >
Mozilla|Opera
action: CHALLENGE
```
</TabItem>
</Tabs>
This will return an error like this:
```text
config is not valid:
config.BotOrImport: rule definition is invalid, you must set either bot rules or an import statement, not both
```
Paths can either be prefixed with `(data)` to import from [the data folder in the Anubis source tree](https://github.com/TecharoHQ/anubis/tree/main/data) or point anywhere on the filesystem. If you don't have access to the Anubis source tree, check `/usr/share/docs/anubis/data` or the tarball you extracted Anubis from.
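For example, a policy file could mix a `(data)` import with a snippet stored elsewhere on disk. This is only a sketch; the `/etc/anubis/snippets/allow-monitoring.yaml` path is hypothetical:
```yaml
bots:
  # Embedded snippet shipped in Anubis' data folder
  - import: (data)/bots/ai-robots-txt.yaml
  # Hypothetical snippet that you maintain on the local filesystem
  - import: /etc/anubis/snippets/allow-monitoring.yaml
```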
## Writing snippets
Snippets can be written in either JSON or YAML, with a preference for YAML. When writing a snippet, write the bot rules you want directly at the top level of the file in a list.
Here is an example snippet that allows [IPv6 Unique Local Addresses](https://en.wikipedia.org/wiki/Unique_local_address) through Anubis:
<Tabs>
<TabItem value="json" label="JSON">
```json
[
{
"name": "ipv6-ula",
"action": "ALLOW",
"remote_addresses": ["fc00::/7"]
}
]
```
</TabItem>
<TabItem value="yaml" label="YAML" default>
```yaml
- name: ipv6-ula
action: ALLOW
remote_addresses:
- fc00::/7
```
</TabItem>
</Tabs>
## Extracting Anubis' embedded filesystem
You can always extract the list of rules embedded into the Anubis binary with this command:
```text
anubis --extract-resources=static
```
This will dump the contents of Anubis' embedded data to a new folder named `static`:
```text
static
├── apps
│ └── gitea-rss-feeds.yaml
├── botPolicies.json
├── botPolicies.yaml
├── bots
│ ├── ai-robots-txt.yaml
│ ├── cloudflare-workers.yaml
│ ├── headless-browsers.yaml
│ └── us-ai-scraper.yaml
├── common
│ ├── allow-private-addresses.yaml
│ └── keep-internet-working.yaml
└── crawlers
├── bingbot.yaml
├── duckduckbot.yaml
├── googlebot.yaml
├── internet-archive.yaml
├── kagibot.yaml
├── marginalia.yaml
├── mojeekbot.yaml
└── qwantbot.yaml
```

View File

@@ -9,10 +9,11 @@ This page provides detailed information on how to configure [OpenGraph tag](http
## Configuration Options
| Name | Description | Type | Default | Example |
| ------------------------ | --------------------------------------------------------- | -------- | ------- | ----------------------------- |
| `OG_PASSTHROUGH` | Enables or disables the Open Graph tag passthrough system | Boolean | `true` | `OG_PASSTHROUGH=true` |
| `OG_EXPIRY_TIME` | Configurable cache expiration time for Open Graph tags | Duration | `24h` | `OG_EXPIRY_TIME=1h` |
| `OG_CACHE_CONSIDER_HOST` | Enables or disables the use of the host in the cache key | Boolean | `false` | `OG_CACHE_CONSIDER_HOST=true` |
## Usage
@@ -21,6 +22,7 @@ To configure Open Graph tags, you can set the following environment variables, e
```sh
export OG_PASSTHROUGH=true
export OG_EXPIRY_TIME=1h
export OG_CACHE_CONSIDER_HOST=false
```
## Implementation Details
@@ -33,6 +35,8 @@ When `OG_PASSTHROUGH` is enabled, Anubis will:
The cache expiration time is controlled by `OG_EXPIRY_TIME`.
When `OG_CACHE_CONSIDER_HOST` is enabled, Anubis will include the host in the cache key for Open Graph tags. This ensures that tags are cached separately for different hosts.
## Example
Here is an example of how to configure Open Graph tags in your Anubis setup:
@@ -40,8 +44,19 @@ Here is an example of how to configure Open Graph tags in your Anubis setup:
```sh
export OG_PASSTHROUGH=true
export OG_EXPIRY_TIME=1h
export OG_CACHE_CONSIDER_HOST=false
```
With these settings, Anubis will cache Open Graph tags for 1 hour and pass them through to the challenge page, not considering the host in the cache key.
## When to Enable `OG_CACHE_CONSIDER_HOST`
In most cases, you would want to keep `OG_CACHE_CONSIDER_HOST` set to `false` to avoid unnecessary cache fragmentation. However, there are some scenarios where enabling this option can be beneficial:
1. **Multi-Tenant Applications**: If you are running a multi-tenant application where different tenants are hosted on different subdomains, enabling `OG_CACHE_CONSIDER_HOST` ensures that the Open Graph tags are cached separately for each tenant. This prevents one tenant's Open Graph tags from being served to another tenant's users.
2. **Different Content for Different Hosts**: If your application serves different content based on the host, enabling `OG_CACHE_CONSIDER_HOST` ensures that the correct Open Graph tags are cached and served for each host. This is useful for applications that have different branding or content for different domains or subdomains.
3. **Security and Privacy Concerns**: In some cases, you may want to ensure that Open Graph tags are not shared between different hosts for security or privacy reasons. Enabling `OG_CACHE_CONSIDER_HOST` ensures that the tags are cached separately for each host, preventing any potential leakage of information between hosts.
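For instance, in the multi-tenant scenario above you might enable the option alongside passthrough. Here is a minimal Docker Compose sketch; the service layout and values are illustrative:
```yaml
services:
  anubis:
    image: ghcr.io/techarohq/anubis:latest
    environment:
      OG_PASSTHROUGH: "true"
      OG_EXPIRY_TIME: "1h"
      # Cache Open Graph tags per host so tenants never share previews
      OG_CACHE_CONSIDER_HOST: "true"
```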
For more information, refer to the [installation guide](../installation).

View File

@@ -0,0 +1,94 @@
---
title: Redirect Domain Configuration
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
Anubis has an HTTP redirect in the middle of its check validation logic. This redirect allows Anubis to set a cookie on validated requests so that users don't need to pass challenges on every page load.
This flow looks something like this:
```mermaid
sequenceDiagram
participant User
participant Challenge
participant Validation
participant Backend
User->>+Challenge: GET /
Challenge->>+User: Solve this challenge
User->>+Validation: Here's the solution, send me to /
Validation->>+User: Here's a cookie, go to /
User->>+Backend: GET /
```
However, in some cases a sufficiently dedicated attacker could trick a user into clicking on a validation link with a solution pre-filled out. For example:
```mermaid
sequenceDiagram
participant Hacker
participant User
participant Validation
participant Evil Site
Hacker->>+User: Click on yoursite.com with this solution
User->>+Validation: Here's a solution, send me to evilsite.com
Validation->>+User: Here's a cookie, go to evilsite.com
User->>+Evil Site: GET evilsite.com
```
If this happens, Anubis will throw an error like this:
```text
Redirect domain not allowed
```
## Configuring allowed redirect domains
By default, Anubis limits redirects to the same HTTP Host that Anubis is running on (e.g. requests to yoursite.com cannot redirect outside of yoursite.com). If you need to allow more than one domain, fill the `REDIRECT_DOMAINS` environment variable with a comma-separated list of domain names that Anubis should allow redirects to.
:::note
These domains are matched as _exact strings_; wildcard matches are not supported.
:::
<Tabs>
<TabItem value="env-file" label="Environment file" default>
```shell
# anubis.env
REDIRECT_DOMAINS="yoursite.com,secretplans.yoursite.com"
# ...
```
</TabItem>
<TabItem value="docker-compose" label="Docker Compose">
```yaml
services:
anubis-nginx:
image: ghcr.io/techarohq/anubis:latest
environment:
REDIRECT_DOMAINS: "yoursite.com,secretplans.yoursite.com"
# ...
```
</TabItem>
<TabItem value="k8s" label="Kubernetes">
Inside your Deployment, StatefulSet, or Pod:
```yaml
- name: anubis
image: ghcr.io/techarohq/anubis:latest
env:
- name: REDIRECT_DOMAINS
value: "yoursite.com,secretplans.yoursite.com"
# ...
```
</TabItem>
</Tabs>

View File

@@ -0,0 +1,139 @@
---
title: Subrequest Authentication
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
Anubis can act in one of two modes:
1. Reverse proxy (the default): Anubis sits in the middle of all traffic and then will reverse proxy it to its destination. This is the moral equivalent of a middleware in your favorite web framework.
2. Subrequest authentication mode: your reverse proxy asks Anubis whether each request should pass; requests that don't pass muster are forwarded to Anubis for challenge processing. This is the equivalent of Anubis being a sidecar service.
## Nginx
Anubis can perform [subrequest authentication](https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-subrequest-authentication/) with the `auth_request` module in Nginx. In order to set this up, keep the following things in mind:
The `TARGET` environment variable in Anubis must be set to a single space, e.g.:
<Tabs>
<TabItem value="env-file" label="Environment file" default>
```shell
# anubis.env
TARGET=" "
# ...
```
</TabItem>
<TabItem value="docker-compose" label="Docker Compose">
```yaml
services:
anubis-nginx:
image: ghcr.io/techarohq/anubis:latest
environment:
TARGET: " "
# ...
```
</TabItem>
<TabItem value="k8s" label="Kubernetes">
Inside your Deployment, StatefulSet, or Pod:
```yaml
- name: anubis
image: ghcr.io/techarohq/anubis:latest
env:
- name: TARGET
value: " "
# ...
```
</TabItem>
</Tabs>
In order to configure this, you need to add the following location blocks to each server pointing to the service you want to protect:
```nginx
location /.within.website/ {
# Assumption: Anubis is running in the same network namespace as
# nginx on localhost TCP port 8923
proxy_pass http://127.0.0.1:8923;
auth_request off;
}
location @redirectToAnubis {
return 307 /.within.website/?redir=$scheme://$host$request_uri;
auth_request off;
}
```
This sets up `/.within.website/` to point to Anubis. Any requests that Anubis rejects or challenges will be sent here. This also sets up a named location `@redirectToAnubis` that redirects those requests to Anubis for challenge processing.
Finally, add this to your root location block:
```nginx
location / {
# diff-add
auth_request /.within.website/x/cmd/anubis/api/check;
# diff-add
error_page 401 = @redirectToAnubis;
}
```
This checks every request that doesn't match another location against Anubis to ensure the client is genuine before it reaches your backend. If you have other locations that don't need Anubis validation, add the `auth_request off` directive to their blocks:
```nginx
location /secret {
# diff-add
auth_request off;
# ...
}
```
Here is a complete example of an Nginx server listening over TLS and pointing to Anubis:
<details>
<summary>Complete example</summary>
```nginx
# /etc/nginx/conf.d/nginx.local.cetacean.club.conf
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name nginx.local.cetacean.club;
ssl_certificate /etc/techaro/pki/nginx.local.cetacean.club/tls.crt;
ssl_certificate_key /etc/techaro/pki/nginx.local.cetacean.club/tls.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
location /.within.website/ {
proxy_pass http://localhost:8923;
auth_request off;
}
location @redirectToAnubis {
return 307 /.within.website/?redir=$scheme://$host$request_uri;
auth_request off;
}
location / {
auth_request /.within.website/x/cmd/anubis/api/check;
error_page 401 = @redirectToAnubis;
root /usr/share/nginx/html;
index index.html index.htm;
}
}
```
</details>

View File

@@ -0,0 +1,92 @@
---
title: Default allow behavior
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
# Default allow behavior
Anubis is designed to be as unintrusive as possible to your existing infrastructure.
By default, it allows all traffic unless a request matches a rule that explicitly denies or challenges it.
Only requests matching a DENY or CHALLENGE rule are blocked or challenged. All other requests are allowed. This is called "the implicit rule".
## Example: Minimal policy
If your policy only blocks a specific bot, all other requests will be allowed:
<Tabs>
<TabItem value="json" label="JSON" default>
```json
{
"bots": [
{
"name": "block-amazonbot",
"user_agent_regex": "Amazonbot",
"action": "DENY"
}
]
}
```
</TabItem>
<TabItem value="yaml" label="YAML">
```yaml
- name: block-amazonbot
user_agent_regex: Amazonbot
action: DENY
```
</TabItem>
</Tabs>
## How to deny by default
If you want to deny all traffic except what you explicitly allow, add a catch-all deny rule at the end of your policy list. Make sure to add ALLOW rules for any traffic you want to permit before this rule.
<Tabs>
<TabItem value="json" label="JSON" default>
```json
{
"bots": [
{
"name": "allow-goodbot",
"user_agent_regex": "GoodBot",
"action": "ALLOW"
},
{
"name": "catch-all-deny",
"path_regex": ".*",
"action": "DENY"
}
]
}
```
</TabItem>
<TabItem value="yaml" label="YAML">
```yaml
- name: allow-goodbot
user_agent_regex: GoodBot
action: ALLOW
- name: catch-all-deny
path_regex: .*
action: DENY
```
</TabItem>
</Tabs>
## Final remarks
- Rules are evaluated in order; the first match wins.
- The implicit allow rule is always last and cannot be removed.
- Use your logs to monitor what traffic is being allowed by default.
See [Policy Definitions](./policies) for more details on writing rules.

View File

@@ -0,0 +1,8 @@
{
"label": "Environments",
"position": 20,
"link": {
"type": "generated-index",
"description": "Detailed information about individual environments (such as HTTP servers, platforms, etc.) Anubis is known to work with."
}
}

View File

@@ -0,0 +1,151 @@
# Apache
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
Anubis is intended to be a filter proxy. The way to integrate this is to break your configuration up into two parts: TLS termination and then HTTP routing. Consider this diagram:
```mermaid
---
title: Apache as tls terminator and HTTP router
---
flowchart LR
T(User Traffic)
subgraph Apache 2
TCP(TCP 80/443)
US(TCP 3001)
end
An(Anubis)
B(Backend)
T --> |TLS termination| TCP
TCP --> |Traffic filtering| An
An --> |Happy traffic| US
US --> |whatever you're doing| B
```
Effectively you make one trip through Apache for TLS termination, take a detour through Anubis for traffic scrubbing, and then land on the backend listener directly. This final listener is what does the HTTP routing.
:::note
These examples assume that you are using a setup where your Apache configuration is made up of a bunch of files in `/etc/httpd/conf.d/*.conf`. This is not true for all deployments of Apache. If you are not in such an environment, append these snippets to your `/etc/httpd/conf/httpd.conf` file.
:::
## Dependencies
Install the following dependencies for proxying HTTP:
<Tabs>
<TabItem value="rpm" label="Red Hat / RPM" default>
```text
dnf -y install mod_proxy_html
```
</TabItem>
<TabItem value="deb" label="Debian / Ubuntu / apt">
```text
apt-get install -y libapache2-mod-proxy-html libxml2-dev
```
</TabItem>
</Tabs>
## Configuration
Assuming you are protecting `anubistest.techaro.lol`, you need the following server configuration blocks:
1. A block on port 80 that forwards HTTP to HTTPS
2. A block on port 443 that terminates TLS and forwards to Anubis
3. A block on port 3001 that actually serves your websites
```text
# Plain HTTP redirect to HTTPS
<VirtualHost *:80>
ServerAdmin your@email.here
ServerName anubistest.techaro.lol
DocumentRoot /var/www/anubistest.techaro.lol
ErrorLog /var/log/httpd/anubistest.techaro.lol_error.log
CustomLog /var/log/httpd/anubistest.techaro.lol_access.log combined
RewriteEngine on
RewriteCond %{SERVER_NAME} =anubistest.techaro.lol
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>
# HTTPS listener that forwards to Anubis
<VirtualHost *:443>
ServerAdmin your@email.here
ServerName anubistest.techaro.lol
DocumentRoot /var/www/anubistest.techaro.lol
ErrorLog /var/log/httpd/anubistest.techaro.lol_error.log
CustomLog /var/log/httpd/anubistest.techaro.lol_access.log combined
SSLCertificateFile /etc/letsencrypt/live/anubistest.techaro.lol/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/anubistest.techaro.lol/privkey.pem
Include /etc/letsencrypt/options-ssl-apache.conf
# These headers need to be set or else Anubis will
# throw an "admin misconfiguration" error.
RequestHeader set "X-Real-Ip" expr=%{REMOTE_ADDR}
RequestHeader set X-Forwarded-Proto "https"
ProxyPreserveHost On
ProxyRequests Off
ProxyVia Off
# Replace 9000 with the port Anubis listens on
ProxyPass / http://[::1]:9000/
ProxyPassReverse / http://[::1]:9000/
</VirtualHost>
# Actual website config
<VirtualHost *:3001>
ServerAdmin your@email.here
ServerName anubistest.techaro.lol
DocumentRoot /var/www/anubistest.techaro.lol
ErrorLog /var/log/httpd/anubistest.techaro.lol_error.log
CustomLog /var/log/httpd/anubistest.techaro.lol_access.log combined
</VirtualHost>
```
Make sure to add a separate configuration file for the listener on port 3001:
```text
# /etc/httpd/conf.d/listener-3001.conf
Listen 3001
```
This can be repeated for multiple sites. Anubis does not care about the HTTP `Host` header and will happily cope with multiple websites via the same instance.
Then reload your Apache config and load your website. You should see Anubis protecting your apps!
```text
sudo systemctl reload httpd.service
```
## Troubleshooting
Here are some answers to questions that came up during testing:
### I'm running on a Red Hat distribution and Apache is saying "service unavailable" for every page load
If you see a "Service unavailable" error on every page load and run a Red Hat derived distribution, you are missing a `selinux` setting. The exact command will be in a journalctl log message like this:
```text
***** Plugin catchall_boolean (89.3 confidence) suggests ******************
If you want to allow HTTPD scripts and modules to connect to the network using TCP.
Then you must tell SELinux about this by enabling the 'httpd_can_network_connect' boolean.
Do
setsebool -P httpd_can_network_connect 1
```
This will fix the error immediately.

View File

@@ -0,0 +1,26 @@
# Docker compose
Docker Compose is typically used in concert with a load balancer such as [Apache](./apache.mdx) or [Nginx](./nginx.mdx). Below is a minimal example showing how to set up an instance of Anubis listening on host port 8080 that points to a static website containing data in `./www`:
```yaml
services:
anubis-nginx:
image: ghcr.io/techarohq/anubis:latest
environment:
BIND: ":8080"
DIFFICULTY: "4"
METRICS_BIND: ":9090"
SERVE_ROBOTS_TXT: "true"
TARGET: "http://nginx"
POLICY_FNAME: "/data/cfg/botPolicy.yaml"
OG_PASSTHROUGH: "true"
OG_EXPIRY_TIME: "24h"
ports:
- 8080:8080
volumes:
- "./botPolicy.yaml:/data/cfg/botPolicy.yaml:ro"
nginx:
image: nginx
volumes:
- "./www:/usr/share/nginx/html"
```

View File

@@ -0,0 +1,128 @@
# Kubernetes
When setting up Anubis in Kubernetes, you want to make sure that you thread requests through Anubis kinda like this:
```mermaid
---
title: Anubis embedded into workload pods
---
flowchart LR
T(User Traffic)
IngressController(IngressController)
subgraph Service
AnPort(Anubis Port)
BPort(Backend Port)
end
subgraph Pod
An(Anubis)
B(Backend)
end
T --> IngressController
IngressController --> AnPort
AnPort --> An
An --> B
```
Anubis is lightweight enough that you should be able to have many instances of it running without many problems. If this is a concern for you, please check out [ingress-anubis](https://github.com/jaredallard/ingress-anubis?ref=anubis.techaro.lol).
This example makes the following assumptions:
- Your target service is listening on TCP port `5000`.
- Anubis will be listening on port `8080`.
Adjust these values as facts and circumstances demand.
Create a secret with the signing key Anubis should use for its responses:
```
kubectl create secret generic anubis-key \
--namespace default \
--from-literal=ED25519_PRIVATE_KEY_HEX=$(openssl rand -hex 32)
```
Attach Anubis to your Deployment:
```yaml
containers:
# ...
- name: anubis
image: ghcr.io/techarohq/anubis:latest
imagePullPolicy: Always
env:
- name: "BIND"
value: ":8080"
- name: "DIFFICULTY"
value: "4"
- name: ED25519_PRIVATE_KEY_HEX
valueFrom:
secretKeyRef:
name: anubis-key
key: ED25519_PRIVATE_KEY_HEX
- name: "METRICS_BIND"
value: ":9090"
- name: "SERVE_ROBOTS_TXT"
value: "true"
- name: "TARGET"
value: "http://localhost:5000"
- name: "OG_PASSTHROUGH"
value: "true"
- name: "OG_EXPIRY_TIME"
value: "24h"
resources:
limits:
cpu: 750m
memory: 256Mi
requests:
cpu: 250m
memory: 256Mi
securityContext:
runAsUser: 1000
runAsGroup: 1000
runAsNonRoot: true
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
seccompProfile:
type: RuntimeDefault
```
Then add a Service entry for Anubis:
```yaml
# ...
spec:
ports:
# diff-add
- protocol: TCP
# diff-add
port: 8080
# diff-add
targetPort: 8080
# diff-add
name: anubis
```
Then point your Ingress to the Anubis port:
```yaml
rules:
- host: git.xeserv.us
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: git
port:
# diff-remove
name: http
# diff-add
name: anubis
```

View File

@@ -0,0 +1,166 @@
# Nginx
Anubis is intended to be a filter proxy. The way to integrate this with nginx is to break your configuration up into two parts: TLS termination and then HTTP routing. Consider this diagram:
```mermaid
---
title: Nginx as tls terminator and HTTP router
---
flowchart LR
T(User Traffic)
subgraph Nginx
TCP(TCP 80/443)
US(Unix Socket or
another TCP port)
end
An(Anubis)
B(Backend)
T --> |TLS termination| TCP
TCP --> |Traffic filtering| An
An --> |Happy traffic| US
US --> |whatever you're doing| B
```
Instead of your traffic going right from TLS termination into the backend, it takes a detour through Anubis. Anubis filters out the "bad" traffic and then passes the "good" traffic to another socket that Nginx has open. This final socket is what you will use to do HTTP routing.
Effectively, you have two roles for nginx: TLS termination (converting HTTPS to HTTP) and HTTP routing (distributing requests to the individual vhosts). This can stack with something like Apache in case you have a legacy deployment. Make sure you have the right [TLS certificates configured](https://code.kuederle.com/letsencrypt/) at the TLS termination level.
:::note
These examples assume that you are using a setup where your nginx configuration is made up of a bunch of files in `/etc/nginx/conf.d/*.conf`. This is not true for all deployments of nginx. If you are not in such an environment, append these snippets to your `/etc/nginx/nginx.conf` file.
:::
Assuming that we are protecting `anubistest.techaro.lol`, here's what the server configuration file would look like:
```nginx
# /etc/nginx/conf.d/server-anubistest-techaro-lol.conf
# HTTP - Redirect all HTTP traffic to HTTPS
server {
listen 80;
listen [::]:80;
server_name anubistest.techaro.lol;
location / {
return 301 https://$host$request_uri;
}
}
# TLS termination server, this will listen over TLS (https) and then
# proxy all traffic to the target via Anubis.
server {
# Listen on TCP port 443 with TLS (https) and HTTP/2
listen 443 ssl http2;
listen [::]:443 ssl http2;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://anubis;
}
server_name anubistest.techaro.lol;
ssl_certificate /path/to/your/certs/anubistest.techaro.lol.crt;
ssl_certificate_key /path/to/your/certs/anubistest.techaro.lol.key;
}
# Backend server, this is where your webapp should actually live.
server {
listen unix:/run/nginx/nginx.sock;
server_name anubistest.techaro.lol;
root "/srv/http/anubistest.techaro.lol";
index index.html;
# Your normal configuration can go here
# location .php { fastcgi...} etc.
}
```
:::tip
You can copy the `location /` block into a separate file named something like `conf-anubis.inc` and then include it inline to other `server` blocks:
```nginx
# /etc/nginx/conf.d/conf-anubis.inc
# Forward to anubis
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://anubis;
}
```
Then in a server block:
<details>
<summary>Full nginx config</summary>
```nginx
# /etc/nginx/conf.d/server-mimi-techaro-lol.conf
server {
# Listen on 443 with SSL
listen 443 ssl http2;
listen [::]:443 ssl http2;
# Slipstream via Anubis
include "conf-anubis.inc";
server_name mimi.techaro.lol;
ssl_certificate /path/to/your/certs/mimi.techaro.lol.crt;
ssl_certificate_key /path/to/your/certs/mimi.techaro.lol.key;
}
server {
listen unix:/run/nginx/nginx.sock;
server_name mimi.techaro.lol;
root "/srv/http/mimi.techaro.lol";
index index.html;
# Your normal configuration can go here
# location .php { fastcgi...} etc.
}
```
</details>
:::
Create an upstream for Anubis.
```nginx
# /etc/nginx/conf.d/upstream-anubis.conf
upstream anubis {
# Make sure this matches the values you set for `BIND` and `BIND_NETWORK`.
# If this does not match, your services will not be protected by Anubis.
# Try anubis first over a UNIX socket
server unix:/run/anubis/nginx.sock;
#server 127.0.0.1:8923;
# Optional: fall back to serving the websites directly. This allows your
# websites to be resilient against Anubis failing, at the risk of exposing
# them to the raw internet without protection. This is a tradeoff and can
# be worth it in some edge cases.
#server unix:/run/nginx.sock backup;
}
```
This can be repeated for multiple sites. Anubis does not care about the HTTP `Host` header and will happily cope with multiple websites via the same instance.
Then reload your nginx config and load your website. You should see Anubis protecting your apps!
```text
sudo systemctl reload nginx.service
```

View File

@@ -0,0 +1,215 @@
---
id: traefik
title: Traefik
---
:::note
This only talks about integration through Compose,
but it also applies to Docker CLI options.
:::
Currently, Anubis doesn't have any Traefik middleware,
so you need to manually route it between Traefik and your target service.
This routing is configured via labels in Traefik.
In this example, we will use four containers:
- `traefik` - the Traefik instance
- `anubis` - the Anubis instance
- `target` - our service to protect (`traefik/whoami` in this case)
- `target2` - a second service that isn't supposed to be protected (`traefik/whoami` in this case)
There are 3 steps we need to follow:
1. Create a new exclusive Traefik endpoint for Anubis
2. Pass all unspecified requests to Anubis
3. Let Anubis pass all verified requests back to Traefik on its exclusive endpoint
## Diagram of Flow
This is a small diagram depicting the flow.
Keep in mind that `8080` or `80` can be anything depending on your containers.
```mermaid
flowchart LR
user[User]
traefik[Traefik]
anubis[Anubis]
target[Target]
user-->|:443 - Requesting Service|traefik
traefik-->|:8080 - Passing to Anubis|anubis
anubis-->|:3923 - Passing back to Traefik|traefik
traefik-->|:80 - Passing to the target|target
```
## Create an Exclusive Anubis Endpoint in Traefik
There are 2 ways of registering a new endpoint in Traefik.
Which one to use depends on how you configured your Traefik so far.
**CLI Options:**
```yml
--entrypoints.anubis.address=:3923
```
**traefik.yml:**
```yml
entryPoints:
anubis:
address: ":3923"
```
It is important that the specified port isn't actually reachable from the outside,
but only exposed in the Docker network.
Exposing the Anubis port on Traefik directly will allow direct unprotected access to all containers behind it.
## Passing all unspecified Web Requests to Anubis
There are cases where you want Traefik to still route some requests without protection, just like before.
To achieve this, we can register Anubis as the default handler for non-protected requests.
We also don't want users to get SSL errors during the checking phase,
so we need to let Traefik provide SSL certificates for our endpoint.
This example expects a TLS cert resolver called `le`
and an entrypoint called `websecure` for HTTPS.
This is an example of the required labels to configure Traefik on the Anubis container:
```yml
labels:
- traefik.enable=true # Enabling Traefik
- traefik.docker.network=traefik # Telling Traefik which network to use
- traefik.http.routers.anubis.priority=1 # Setting Anubis to the lowest priority, so it only takes the slack
- traefik.http.routers.anubis.rule=PathRegexp(`.*`) # Wildcard match every path
- traefik.http.routers.anubis.entrypoints=websecure # Listen on HTTPS
- traefik.http.services.anubis.loadbalancer.server.port=8080 # Telling Traefik to which port it should route requests
- traefik.http.routers.anubis.service=anubis # Telling Traefik to use the above specified port
- traefik.http.routers.anubis.tls.certresolver=le # Telling Traefik to resolve a Cert for Anubis
```
## Passing all Verified Requests Back Correctly to Traefik
To pass verified requests back to Traefik,
we only need to configure Anubis using its environment variables:
```yml
environment:
- BIND=:8080
- TARGET=http://traefik:3923
```
## Full Example Config
Now that we know how to pass requests back and forth, here is the full example.
It contains two services: one that is protected by Anubis and one that is not.
**compose.yml**
```yml
services:
traefik:
image: traefik:v3.3
ports:
- 80:80
- 443:443
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./letsencrypt:/letsencrypt
- ./traefik.yml:/traefik.yml:ro
networks:
- traefik
labels:
# Enable Traefik
- traefik.enable=true
- traefik.docker.network=traefik
# Redirect any HTTP to HTTPS
- traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https
- traefik.http.routers.web.rule=PathPrefix(`/`)
- traefik.http.routers.web.entrypoints=web
- traefik.http.routers.web.middlewares=redirect-to-https
- traefik.http.routers.web.tls=false
anubis:
image: ghcr.io/techarohq/anubis:main
environment:
# Telling Anubis, where to listen for Traefik
- BIND=:8080
# Telling Anubis to point to Traefik via the Docker network
- TARGET=http://traefik:3923
networks:
- traefik
labels:
- traefik.enable=true # Enabling Traefik
- traefik.docker.network=traefik # Telling Traefik which network to use
- traefik.http.routers.anubis.priority=1 # Setting Anubis to the lowest priority, so it only takes the slack
- traefik.http.routers.anubis.rule=PathRegexp(`.*`) # wildcard match anything
- traefik.http.routers.anubis.entrypoints=websecure # Listen on HTTPS
- traefik.http.services.anubis.loadbalancer.server.port=8080 # Telling Traefik to which port it should route requests
- traefik.http.routers.anubis.service=anubis # Telling Traefik to use the above specified port
- traefik.http.routers.anubis.tls.certresolver=le # Telling Traefik to resolve a Cert for Anubis
# Protected by Anubis
target:
image: traefik/whoami:latest
networks:
- traefik
labels:
- traefik.enable=true # Enabling Traefik
- traefik.docker.network=traefik # Telling Traefik which network to use
- traefik.http.routers.target.rule=Host(`example.com`) # Only Matching Requests for example.com
- traefik.http.routers.target.entrypoints=anubis # Listening on the exclusive Anubis entrypoint
- traefik.http.services.target.loadbalancer.server.port=80 # Telling Traefik where to receive requests
- traefik.http.routers.target.service=target # Telling Traefik to use the above specified port
# Not Protected by Anubis
target2:
image: traefik/whoami:latest
networks:
- traefik
labels:
- traefik.enable=true # Enabling Traefik
- traefik.docker.network=traefik # Telling Traefik which network to use
- traefik.http.routers.target2.rule=Host(`another.com`) # Only matching requests for another.com
- traefik.http.routers.target2.entrypoints=websecure # Listening on the public HTTPS entrypoint
- traefik.http.services.target2.loadbalancer.server.port=80 # Telling Traefik where to receive requests
- traefik.http.routers.target2.service=target2 # Telling Traefik to use the above specified port
- traefik.http.routers.target2.tls.certresolver=le # Telling Traefik to resolve a Cert for this Target
networks:
traefik:
name: traefik
```
**traefik.yml**
```yml
api:
insecure: false # shouldn't be enabled in prod
entryPoints:
# Web
web:
address: ":80"
websecure:
address: ":443"
# Anubis
anubis:
address: ":3923"
certificatesResolvers:
le:
acme:
tlsChallenge: {}
email: "admin@example.com"
storage: "/letsencrypt/acme.json"
providers:
docker: {}
```

View File

@@ -4,6 +4,9 @@ title: Setting up Anubis
import RandomKey from "@site/src/components/RandomKey";
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
Anubis is meant to sit between your reverse proxy (such as Nginx or Caddy) and your target service. One instance of Anubis must be used per service you are protecting.
<center>
@@ -38,32 +41,76 @@ The Docker image runs Anubis as user ID 1000 and group ID 1000. If you are mount
Anubis has very minimal system requirements. I suspect that 128Mi of RAM may be sufficient for a large number of concurrent clients. Anubis may be a poor fit for apps that use WebSockets and maintain open connections, but I don't have enough real-world experience to know one way or another.
## Native packages
For more detailed information on installing Anubis with native packages, please read [the native install directions](./native-install.mdx).
## Environment variables
Anubis uses these environment variables for configuration:
| Environment Variable | Default value | Explanation |
| :----------------------------- | :---------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `BASE_PREFIX` | unset | If set, adds a global prefix to all Anubis endpoints. For example, setting this to `/myapp` would make Anubis accessible at `/myapp/` instead of `/`. This is useful when running Anubis behind a reverse proxy that routes based on path prefixes. |
| `BIND` | `:8923` | The network address that Anubis listens on. For `unix`, set this to a path: `/run/anubis/instance.sock` |
| `BIND_NETWORK` | `tcp` | The address family that Anubis listens on. Accepts `tcp`, `unix` and anything Go's [`net.Listen`](https://pkg.go.dev/net#Listen) supports. |
| `COOKIE_DOMAIN` | unset | The domain the Anubis challenge pass cookie should be set to. This should be set to the domain you bought from your registrar (EG: `techaro.lol` if your webapp is running on `anubis.techaro.lol`). See [here](https://stackoverflow.com/a/1063760) for more information. |
| `COOKIE_EXPIRATION_TIME` | `168h` | The amount of time the authorization cookie is valid for. |
| `COOKIE_PARTITIONED` | `false` | If set to `true`, enables the [partitioned (CHIPS) flag](https://developers.google.com/privacy-sandbox/cookies/chips), meaning that Anubis inside an iframe has a different set of cookies than the domain hosting the iframe. |
| `DIFFICULTY` | `4` | The difficulty of the challenge, or the number of leading zeroes that must be in successful responses. |
| `ED25519_PRIVATE_KEY_HEX` | unset | The hex-encoded ed25519 private key used to sign Anubis responses. If this is not set, Anubis will generate one for you. This should be exactly 64 characters long. See below for details. |
| `ED25519_PRIVATE_KEY_HEX_FILE` | unset | Path to a file containing the hex-encoded ed25519 private key. Only one of this or its sister option may be set. |
| `METRICS_BIND` | `:9090` | The network address that Anubis serves Prometheus metrics on. See `BIND` for more information. |
| `METRICS_BIND_NETWORK` | `tcp` | The address family that the Anubis metrics server listens on. See `BIND_NETWORK` for more information. |
| `OG_EXPIRY_TIME` | `24h` | The expiration time for the Open Graph tag cache. |
| `OG_PASSTHROUGH` | `false` | If set to `true`, Anubis will enable Open Graph tag passthrough. |
| `OG_CACHE_CONSIDER_HOST` | `false` | If set to `true`, Anubis will consider the host in the Open Graph tag cache key. |
| `POLICY_FNAME` | unset | The file containing [bot policy configuration](./policies.mdx). See the bot policy documentation for more details. If unset, the default bot policy configuration is used. |
| `REDIRECT_DOMAINS` | unset | If set, restrict the domains that Anubis can redirect to when passing a challenge.<br/><br/>If this is unset, Anubis may redirect to any domain which could cause security issues in the unlikely case that an attacker passes a challenge for your browser and then tricks you into clicking a link to your domain. |
| `SERVE_ROBOTS_TXT` | `false` | If set `true`, Anubis will serve a default `robots.txt` file that disallows all known AI scrapers by name and then additionally disallows every scraper. This is useful if facts and circumstances make it difficult to change the underlying service to serve such a `robots.txt` file. |
| `SOCKET_MODE` | `0770` | _Only used when at least one of the `*_BIND_NETWORK` variables are set to `unix`._ The socket mode (permissions) for Unix domain sockets. |
| `TARGET` | `http://localhost:3923` | The URL of the service that Anubis should forward valid requests to. Supports Unix domain sockets, set this to a URI like so: `unix:///path/to/socket.sock`. |
| `USE_REMOTE_ADDRESS` | unset | If set to `true`, Anubis will take the client's IP from the network socket. For production deployments, it is expected that a reverse proxy is used in front of Anubis, which pass the IP using headers, instead. |
| `WEBMASTER_EMAIL` | unset | If set, shows a contact email address when rendering error pages. This email address will be how users can get in contact with administrators. |
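As a quick illustration, several of these variables can be set together in a Docker Compose `environment` block. This is only a sketch; the hostnames, ports, and domain below are placeholders:
```yaml
services:
  anubis:
    image: ghcr.io/techarohq/anubis:latest
    environment:
      BIND: ":8080"                    # where Anubis listens
      TARGET: "http://nginx"           # the service being protected
      COOKIE_EXPIRATION_TIME: "168h"   # how long the authorization cookie stays valid
      REDIRECT_DOMAINS: "yoursite.com" # restrict post-challenge redirects
      OG_PASSTHROUGH: "true"           # pass Open Graph tags through
      OG_CACHE_CONSIDER_HOST: "false"  # share the OG tag cache across hosts
```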
For more detailed information on configuring Open Graph tags, please refer to the [Open Graph Configuration](./configuration/open-graph.mdx) page.
### Using Base Prefix
The `BASE_PREFIX` environment variable allows you to run Anubis behind a path prefix. This is useful when:
- You want to host multiple services on the same domain
- You're using a reverse proxy that routes based on path prefixes
- You need to integrate Anubis with an existing application structure
For example, if you set `BASE_PREFIX=/myapp`, Anubis will:
- Serve its challenge page at `/myapp/` instead of `/`
- Serve its API endpoints at `/myapp/.within.website/x/cmd/anubis/api/` instead of `/.within.website/x/cmd/anubis/api/`
- Serve its static assets at `/myapp/.within.website/x/cmd/anubis/` instead of `/.within.website/x/cmd/anubis/`
When using this feature with a reverse proxy:
1. Configure your reverse proxy to route requests for the specified path prefix to Anubis
2. Set the `BASE_PREFIX` environment variable to match the path prefix in your reverse proxy configuration
3. Ensure that your reverse proxy preserves the path when forwarding requests to Anubis
Example with Nginx:
```nginx
location /myapp/ {
proxy_pass http://anubis:8923/myapp;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
```
With corresponding Anubis configuration:
```
BASE_PREFIX=/myapp
```
### Key generation
To generate an ed25519 private key, you can use this command:
@@ -76,113 +123,18 @@ Alternatively here is a key generated by your browser:
<RandomKey />
## Next steps
To get Anubis filtering your traffic, you need to make sure it's added to your HTTP load balancer or platform configuration. See the [environments category](/docs/category/environments) for detailed information on individual environments.
- [Apache](./environments/apache.mdx)
- [Docker compose](./environments/docker-compose.mdx)
- [Kubernetes](./environments/kubernetes.mdx)
- [Nginx](./environments/nginx.mdx)
- [Traefik](./environments/traefik.mdx)
:::note
Anubis loads its assets from `/.within.website/x/xess/` and `/.within.website/x/cmd/anubis`. If you do not reverse proxy these in your server config, Anubis won't work.
:::

View File

@@ -5,6 +5,8 @@ title: Installing Anubis with a native package
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
Download the package for your system from [the most recent release on GitHub](https://github.com/TecharoHQ/anubis/releases).
Install the Anubis package using your package manager of choice:
<Tabs>
@@ -84,20 +86,20 @@ Once it's installed, make a copy of the default configuration file `/etc/anubis/
sudo cp /etc/anubis/default.env /etc/anubis/gitea.env
```
Copy the default bot policies file to `/etc/anubis/gitea.botPolicies.yaml`:
<Tabs>
<TabItem value="debrpm" label="Debian or Red Hat" default>
```text
sudo cp /usr/share/doc/anubis/botPolicies.yaml /etc/anubis/gitea.botPolicies.yaml
```
</TabItem>
<TabItem value="tarball" label="Tarball">
```text
sudo cp ./doc/botPolicies.yaml /etc/anubis/gitea.botPolicies.yaml
```
</TabItem>
@@ -112,7 +114,7 @@ BIND_NETWORK=tcp
DIFFICULTY=4
METRICS_BIND=[::1]:8240
METRICS_BIND_NETWORK=tcp
POLICY_FNAME=/etc/anubis/gitea.botPolicies.yaml
TARGET=http://localhost:3000
```
@@ -129,3 +131,8 @@ curl http://localhost:8240/metrics
```
Then set up your reverse proxy (Nginx, Caddy, etc.) to point to the Anubis port. Anubis will then reverse proxy all requests that meet the policies in `/etc/anubis/gitea.botPolicies.yaml` to the target service.
For more details on particular reverse proxies, see here:
- [Apache](./environments/apache.mdx)
- [Nginx](./environments/nginx.mdx)

View File

@@ -2,15 +2,25 @@
title: Policy Definitions
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
Out of the box, Anubis is pretty heavy-handed. It will aggressively challenge everything that might be a browser (usually indicated by having `Mozilla` in its user agent). However, some bots are smart enough to get past the challenge. Some things that look like bots may actually be fine (e.g. RSS readers). Some resources need to be visible no matter what. Some resources and remotes are fine to begin with.
Bot policies let you customize the rules that Anubis uses to allow, deny, or challenge incoming requests. Currently you can set policies by the following matches:
- Request path
- User agent string
- HTTP request header values
- [Importing other configuration snippets](./configuration/import.mdx)
As of version v1.17.0 or later, configuration can be written in either JSON or YAML.
Here's an example rule that denies [Amazonbot](https://developer.amazon.com/en/amazonbot):
<Tabs>
<TabItem value="json" label="JSON" default>
```json
{
"name": "amazonbot",
@@ -19,15 +29,37 @@ Here's an example rule that denies [Amazonbot](https://developer.amazon.com/en/a
}
```
</TabItem>
<TabItem value="yaml" label="YAML">
```yaml
- name: amazonbot
user_agent_regex: Amazonbot
action: DENY
```
</TabItem>
</Tabs>
When this rule is evaluated, Anubis will check the `User-Agent` string of the request. If it contains `Amazonbot`, Anubis will send an error page to the user saying that access is denied, but in such a way that makes scrapers think they have correctly loaded the webpage.
Right now the only kinds of policies you can write are bot policies. Other forms of policies will be added in the future.
Here is a minimal policy file that will protect against most scraper bots:
<Tabs>
<TabItem value="json" label="JSON" default>
```json
{
"bots": [
{
"name": "cloudflare-workers",
"headers_regex": {
"CF-Worker": ".*"
},
"action": "DENY"
},
{
"name": "well-known",
"path_regex": "^/.well-known/.*$",
@@ -52,9 +84,35 @@ Here is a minimal policy file that will protect against most scraper bots:
}
```
</TabItem>
<TabItem value="yaml" label="YAML">
```yaml
bots:
- name: cloudflare-workers
headers_regex:
CF-Worker: .*
action: DENY
- name: well-known
path_regex: ^/.well-known/.*$
action: ALLOW
- name: favicon
path_regex: ^/favicon.ico$
action: ALLOW
- name: robots-txt
path_regex: ^/robots.txt$
action: ALLOW
- name: generic-browser
user_agent_regex: Mozilla
action: CHALLENGE
```
</TabItem>
</Tabs>
This allows requests to [`/.well-known`](https://en.wikipedia.org/wiki/Well-known_URI), `/favicon.ico`, `/robots.txt`, and challenges any request that has the word `Mozilla` in its User-Agent string. The [default policy file](https://github.com/TecharoHQ/anubis/blob/main/data/botPolicies.json) is a bit more cohesive, but this should be more than enough for most users.
If no rules match the request, it is allowed through. For more details on this default behavior and its implications, see [Default allow behavior](./default-allow-behavior.mdx).
## Writing your own rules
@@ -72,6 +130,11 @@ Name your rules in lower case using kebab-case. Rule names will be exposed in Pr
Rules can also have their own challenge settings. These are customized using the `"challenge"` key. For example, here is a rule that makes challenges artificially hard for connections with the substring "bot" in their user agent:
<Tabs>
<TabItem value="json" label="JSON" default>
This rule has been known to have a high false positive rate in testing. Please use this with care.
```json
{
"name": "generic-bot-catchall",
@@ -85,6 +148,25 @@ Rules can also have their own challenge settings. These are customized using the
}
```
</TabItem>
<TabItem value="yaml" label="YAML">
This rule has been known to have a high false positive rate in testing. Please use this with care.
```yaml
# Punish any bot with "bot" in the user-agent string
- name: generic-bot-catchall
user_agent_regex: (?i:bot|crawler)
action: CHALLENGE
challenge:
difficulty: 16 # impossible
report_as: 4 # lie to the operator
algorithm: slow # intentionally waste CPU cycles and time
```
</TabItem>
</Tabs>
Challenges can be configured with these settings:
| Key | Example | Description |
@@ -99,6 +181,9 @@ The `remote_addresses` field of a Bot rule allows you to set the IP range that t
For example, you can allow a search engine to connect if and only if its IP address matches the ones they published:
<Tabs>
<TabItem value="json" label="JSON" default>
```json
{
"name": "qwantbot",
@@ -108,8 +193,25 @@ For example, you can allow a search engine to connect if and only if its IP addr
}
```
</TabItem>
<TabItem value="yaml" label="YAML">
```yaml
- name: qwantbot
user_agent_regex: \+https\://help\.qwant\.com/bot/
action: ALLOW
# https://help.qwant.com/wp-content/uploads/sites/2/2025/01/qwantbot.json
remote_addresses: ["91.242.162.0/24"]
```
</TabItem>
</Tabs>
This also works at an IP range level without any other checks:
<Tabs>
<TabItem value="json" label="JSON" default>
```json
{
"name": "internal-network",
@@ -118,6 +220,19 @@ This also works at an IP range level without any other checks:
}
```
</TabItem>
<TabItem value="yaml" label="YAML">
```yaml
- name: internal-network
action: ALLOW
remote_addresses:
- 100.64.0.0/10
```
</TabItem>
</Tabs>
## Risk calculation for downstream services
In case your service needs it for risk calculation reasons, Anubis exposes information about the rules that any requests match using a few headers:
@@ -126,6 +241,6 @@ In case your service needs it for risk calculation reasons, Anubis exposes infor
| :---------------- | :--------------------------------------------------- | :--------------- |
| `X-Anubis-Rule` | The name of the rule that was matched | `bot/lightpanda` |
| `X-Anubis-Action` | The action that Anubis took in response to that rule | `CHALLENGE` |
| `X-Anubis-Status` | The status and how strict Anubis was in its checks | `PASS` |
Policy rules are matched using [Go's standard library regular expressions package](https://pkg.go.dev/regexp). You can mess around with the syntax at [regex101.com](https://regex101.com), make sure to select the Golang option.

View File

@@ -12,6 +12,12 @@ These instructions may work, but for right now they are informative for downstre
If you are doing a build entirely from source, here's what you need to do:
:::info
If you maintain a package for Anubis v1.15.x or older, you will need to update your package build. You may want to use one of the half-baked tarballs if your distro/environment of choice makes it difficult to use npm.
:::
### Tools needed
In order to build a production-ready binary of Anubis, you need the following packages in your environment:

View File

@@ -7,4 +7,4 @@ Anubis is provided to the public for free in order to help advance the common go
If you want to run an unbranded or white-label version of Anubis, please [contact Xe](https://xeiaso.net/contact) to arrange a contract. This is not meant to be "contact us" pricing; I am still evaluating the market for this solution and figuring out what makes sense.
You can donate to the project [on Patreon](https://patreon.com/cadey) or via [GitHub Sponsors](https://github.com/sponsors/Xe).

View File

@@ -15,14 +15,55 @@ title: Anubis
![language count](https://img.shields.io/github/languages/count/TecharoHQ/anubis)
![repo size](https://img.shields.io/github/repo-size/TecharoHQ/anubis)
## Sponsors
Anubis is brought to you by sponsors and donors like:
[![Distrust](/img/sponsors/distrust-logo.webp)](https://distrust.co)
## Overview
Anubis [weighs the soul of your connection](https://en.wikipedia.org/wiki/Weighing_of_souls) using a proof-of-work challenge in order to protect upstream resources from scraper bots.
This program is designed to help protect the small internet from the endless storm of requests that flood in from AI companies. Anubis is as lightweight as possible to ensure that everyone can afford to protect the communities closest to them.
Anubis is a bit of a nuclear response. This will result in your website being blocked from smaller scrapers and may inhibit "good bots" like the Internet Archive. You can configure [bot policy definitions](https://anubis.techaro.lol/docs/admin/policies) to explicitly allowlist them and we are working on a curated set of "known good" bots to allow for a compromise between discoverability and uptime.
In most cases, you should not need this and can probably get by using Cloudflare to protect a given origin. However, for circumstances where you can't or won't use Cloudflare, Anubis is there for you.
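For intuition about the underlying mechanism, here is a minimal, self-contained sketch of a sha256 proof-of-work loop. This is purely illustrative and does not reflect Anubis' actual challenge format, difficulty, or parameters: the client brute-forces a nonce until the hash has enough leading zero hex digits, and the server can verify the result with a single hash.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strconv"
	"strings"
)

// verify checks that sha256(challenge + nonce) starts with `difficulty`
// zero hex digits. This mirrors the general shape of a proof-of-work
// scheme; Anubis' real challenge format and difficulty differ.
func verify(challenge, nonce string, difficulty int) bool {
	sum := sha256.Sum256([]byte(challenge + nonce))
	return strings.HasPrefix(hex.EncodeToString(sum[:]), strings.Repeat("0", difficulty))
}

func main() {
	challenge := "example-challenge"
	difficulty := 4

	// The "work": try nonces until one hashes below the target.
	for i := 0; ; i++ {
		nonce := strconv.Itoa(i)
		if verify(challenge, nonce, difficulty) {
			fmt.Println("found nonce:", nonce)
			break
		}
	}
}
```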
## Support
If you run into any issues running Anubis, please [open an issue](https://github.com/TecharoHQ/anubis/issues/new?template=Blank+issue) and include all the information I would need to diagnose your issue.
For live chat, please join the [Patreon](https://patreon.com/cadey) and ask in the Patron discord in the channel `#anubis`.
For live chat, please join the [Patreon](https://patreon.com/cadey) or join [GitHub Sponsors](https://github.com/sponsors/Xe) and ask in the Patron discord in the channel `#anubis`.
## Star History
<a href="https://www.star-history.com/#TecharoHQ/anubis&Date">
<picture>
<source
media="(prefers-color-scheme: dark)"
srcSet="https://api.star-history.com/svg?repos=TecharoHQ/anubis&type=Date&theme=dark"
/>
<source
media="(prefers-color-scheme: light)"
srcSet="https://api.star-history.com/svg?repos=TecharoHQ/anubis&type=Date"
/>
<img
alt="Star History Chart"
src="https://api.star-history.com/svg?repos=TecharoHQ/anubis&type=Date"
/>
</picture>
</a>
## Packaging Status
[![Packaging status](https://repology.org/badge/vertical-allrepos/anubis-anti-crawler.svg?columns=3)](https://repology.org/project/anubis-anti-crawler/versions)
## Contributors
<a href="https://github.com/TecharoHQ/anubis/graphs/contributors">
<img src="https://contrib.rocks/image?repo=TecharoHQ/anubis" />
</a>
Made with [contrib.rocks](https://contrib.rocks).

View File

@@ -3,17 +3,47 @@ title: List of known browser extensions that can break Anubis
---
This page contains a list of all of the browser extensions that are known to break Anubis' functionality and their associated GitHub issues, along with instructions on how to work around the issue.
## [JShelter](https://jshelter.org/)
| Extension | JShelter |
| :----------- | :-------------------------------------------- |
| Website | [jshelter.org](https://jshelter.org/) |
| GitHub issue | https://github.com/TecharoHQ/anubis/issues/25 |
| Extension | JShelter |
| :----------- | :------------------------------------------------------------------------------------------------------------------------------------------------- |
| Website | [jshelter.org](https://jshelter.org/) |
| GitHub issue | https://github.com/TecharoHQ/anubis/issues/25 |
| Be aware of | [What are Web Workers, and what are the threats that I face?](https://jshelter.org/faq/#what-are-web-workers-and-what-are-the-threats-that-i-face) |
Workaround steps:
### Workaround steps (recommended):
1. Click on the JShelter badge icon (typically in the toolbar next to your navigation bar; if you cannot locate the icon, see [this question](https://jshelter.org/faq/#can-i-see-a-jshelter-badge-icon-next-to-my-navigation-bar-i-want-to-interact-with-the-extension-easily-and-avoid-going-through-settings)).
2. Expand JavaScript Shield settings by clicking on the `Modify` button.
3. Click on the `Detail tweaks of JS shield for this site` button.
4. Click and drag the `WebWorker` slider to the left until `Remove` is replaced by `Unprotected`.
5. Refresh the page, for example, by clicking on the `Refresh page` button at the top of the JShelter pop-up window.
6. You might want to restore the Worker settings once you go through the challenge.
### Workaround steps (alternative if you do not want to dig into JShelter's pop-up):
1. Click on the JShelter badge icon (typically in the toolbar next to your navigation bar; if you cannot locate the icon, see [this question](https://jshelter.org/faq/#can-i-see-a-jshelter-badge-icon-next-to-my-navigation-bar-i-want-to-interact-with-the-extension-easily-and-avoid-going-through-settings)).
2. Expand JavaScript Shield settings by clicking on the `Modify` button.
3. Choose "Turn JavaScript Shield off".
4. Refresh the page, for example, by clicking on the `Refresh page` button at the top of the JShelter pop-up window.
:::note
Taking these actions will remove all protections of JavaScript Shield for all pages on the visited website. You might want to review and amend your JavaScript Shield settings once you go through the challenge, based on your operational security model.
:::
### Workaround steps (alternative if you do not like JShelter's pop-up):
1. Open JShelter extension settings
2. Click on JS Shield details
3. Enter the domain for a website protected by Anubis
4. Choose "Turn JavaScript Shield off"
5. Hit "Add to list"
:::note
Taking these actions will remove all protections of JavaScript Shield for all pages on the visited website. You might want to review and amend your JavaScript Shield settings once you go through the challenge, based on your operational security model.
:::

View File

@@ -0,0 +1,47 @@
---
title: List of known websites using Anubis
---
This page contains a non-exhaustive list of websites using Anubis.
- <details>
<summary>The Linux Foundation</summary>
- https://git.kernel.org/
- https://lore.kernel.org/
</details>
- https://gitlab.gnome.org/
- https://scioly.org/
- https://bugs.winehq.org/
- https://svnweb.freebsd.org/
- https://trac.ffmpeg.org/
- https://git.sr.ht/
- https://xeiaso.net/
- https://source.puri.sm/
- https://git.enlightenment.org/
- https://superlove.sayitditto.net/
- https://linktaco.com/
- https://jaredallard.dev/
- https://dev.sanctum.geek.nz/
- https://canine.tools/
- https://git.lupancham.net/
- https://dev.haiku-os.org
- http://code.hackerspace.pl/
- https://wiki.archlinux.org/
- https://git.devuan.org/
- https://hydra.nixos.org/
- https://codeberg.org/
- https://www.cfaarchive.org/
- https://forum.freecad.org/
- <details>
<summary>Sourceware</summary>
- https://sourceware.org/cgit
- https://sourceware.org/glibc/wiki
- https://builder.sourceware.org/testruns/
- https://patchwork.sourceware.org/
- https://gcc.gnu.org/bugzilla/
- https://gcc.gnu.org/cgit
</details>
- <details>
<summary>The United Nations</summary>
- https://policytoolbox.iiep.unesco.org/
</details>

View File

@@ -70,6 +70,9 @@ const config: Config = {
],
themeConfig: {
colorMode: {
respectPrefersColorScheme: true,
},
// Replace with your project's social card
image: 'img/docusaurus-social-card.jpg',
navbar: {
@@ -125,10 +128,6 @@ const config: Config = {
{
title: 'More',
items: [
{
label: 'Blog',
to: '/blog',
},
{
label: 'GitHub',
href: 'https://github.com/TecharoHQ/anubis',

12
docs/package-lock.json generated
View File

@@ -8512,9 +8512,9 @@
}
},
"node_modules/estree-util-value-to-estree": {
"version": "3.3.2",
"resolved": "https://registry.npmjs.org/estree-util-value-to-estree/-/estree-util-value-to-estree-3.3.2.tgz",
"integrity": "sha512-hYH1aSvQI63Cvq3T3loaem6LW4u72F187zW4FHpTrReJSm6W66vYTFNO1vH/chmcOulp1HlAj1pxn8Ag0oXI5Q==",
"version": "3.3.3",
"resolved": "https://registry.npmjs.org/estree-util-value-to-estree/-/estree-util-value-to-estree-3.3.3.tgz",
"integrity": "sha512-Db+m1WSD4+mUO7UgMeKkAwdbfNWwIxLt48XF2oFU9emPfXkIu+k5/nlOj313v7wqtAPo0f9REhUvznFrPkG8CQ==",
"license": "MIT",
"dependencies": {
"@types/estree": "^1.0.0"
@@ -10093,9 +10093,9 @@
}
},
"node_modules/http-proxy-middleware": {
"version": "2.0.7",
"resolved": "https://registry.npmjs.org/http-proxy-middleware/-/http-proxy-middleware-2.0.7.tgz",
"integrity": "sha512-fgVY8AV7qU7z/MmXJ/rxwbrtQH4jBQ9m7kp3llF0liB7glmFeVZFBepQb32T3y8n8k2+AEYuMPCpinYW+/CuRA==",
"version": "2.0.9",
"resolved": "https://registry.npmjs.org/http-proxy-middleware/-/http-proxy-middleware-2.0.9.tgz",
"integrity": "sha512-c1IyJYLYppU574+YI7R4QyX2ystMtVXZwIdzazUIPIJsHuWNd+mho2j+bKoHftndicGj9yh+xjd+l0yj7VeT1Q==",
"license": "MIT",
"dependencies": {
"@types/http-proxy": "^1.17.8",

View File

@@ -6,13 +6,13 @@
/* You can override the default Infima variables here. */
:root {
--ifm-color-primary: #2e8555;
--ifm-color-primary-dark: #29784c;
--ifm-color-primary-darker: #277148;
--ifm-color-primary-darkest: #205d3b;
--ifm-color-primary-light: #33925d;
--ifm-color-primary-lighter: #359962;
--ifm-color-primary-lightest: #3cad6e;
--ifm-color-primary: #ff5630;
--ifm-color-primary-dark: #ad422a;
--ifm-color-primary-darker: #8f3521;
--ifm-color-primary-darkest: #592115;
--ifm-color-primary-light: #ff7152;
--ifm-color-primary-lighter: #ff9178;
--ifm-color-primary-lightest: #ffb09e;
--ifm-code-font-size: 95%;
--docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.1);
--code-block-diff-add-line-color: #ccffd8;
@@ -21,16 +21,17 @@
/* For readability concerns, you should choose a lighter palette in dark mode. */
[data-theme="dark"] {
--ifm-color-primary: #25c2a0;
--ifm-color-primary-dark: #21af90;
--ifm-color-primary-darker: #1fa588;
--ifm-color-primary-darkest: #1a8870;
--ifm-color-primary-light: #29d5b0;
--ifm-color-primary-lighter: #32d8b4;
--ifm-color-primary-lightest: #4fddbf;
--docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.3);
--code-block-diff-add-line-color: #216932;
--code-block-diff-remove-line-color: #8b423b;
--ifm-color-primary: #e64a19;
--ifm-color-primary-dark: #b73a12;
--ifm-color-primary-darker: #8c2c0e;
--ifm-color-primary-darkest: #5a1e0a;
--ifm-color-primary-light: #eb6d45;
--ifm-color-primary-lighter: #f09178;
--ifm-color-primary-lightest: #f5b5a6;
--ifm-code-font-size: 95%;
--docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.25);
--code-block-diff-add-line-color: #2d5a2c;
--code-block-diff-remove-line-color: #5a2d2c;
}
.code-block-diff-add-line {

Binary file not shown.


17
go.mod
View File

@@ -1,15 +1,14 @@
module github.com/TecharoHQ/anubis
go 1.24.2
go 1.24
require (
github.com/a-h/templ v0.3.857
github.com/facebookgo/flagenv v0.0.0-20160425205200-fcd59fca7456
github.com/golang-jwt/jwt/v5 v5.2.2
github.com/playwright-community/playwright-go v0.5001.0
github.com/prometheus/client_golang v1.21.1
github.com/playwright-community/playwright-go v0.5101.0
github.com/prometheus/client_golang v1.22.0
github.com/sebest/xff v0.0.0-20210106013422-671bd2870b3a
github.com/tetratelabs/wazero v1.9.0
github.com/yl2chen/cidranger v1.0.2
golang.org/x/net v0.39.0
)
@@ -17,7 +16,6 @@ require (
require (
github.com/BurntSushi/toml v1.4.1-0.20240526193622-a339e1f7089c // indirect
github.com/a-h/parse v0.0.0-20250122154542-74294addb73e // indirect
github.com/aclements/go-moremath v0.0.0-20210112150236-f10218a38794 // indirect
github.com/andybalholm/brotli v1.1.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
@@ -32,7 +30,6 @@ require (
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/go-jose/go-jose/v3 v3.0.4 // indirect
github.com/go-stack/stack v1.8.1 // indirect
github.com/klauspost/compress v1.17.11 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
@@ -43,17 +40,19 @@ require (
github.com/prometheus/procfs v0.15.1 // indirect
golang.org/x/exp/typeparams v0.0.0-20231108232855-2478ac86f678 // indirect
golang.org/x/mod v0.24.0 // indirect
golang.org/x/perf v0.0.0-20250408013232-71ba5bc8ccce // indirect
golang.org/x/sync v0.13.0 // indirect
golang.org/x/sys v0.32.0 // indirect
golang.org/x/tools v0.32.0 // indirect
google.golang.org/protobuf v1.36.4 // indirect
google.golang.org/protobuf v1.36.5 // indirect
honnef.co/go/tools v0.6.1 // indirect
k8s.io/apimachinery v0.32.3 // indirect
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 // indirect
sigs.k8s.io/yaml v1.4.0 // indirect
)
tool (
github.com/a-h/templ/cmd/templ
golang.org/x/perf/cmd/benchstat
golang.org/x/tools/cmd/goimports
golang.org/x/tools/cmd/stringer
honnef.co/go/tools/cmd/staticcheck
)

36
go.sum
View File

@@ -4,8 +4,6 @@ github.com/a-h/parse v0.0.0-20250122154542-74294addb73e h1:HjVbSQHy+dnlS6C3XajZ6
github.com/a-h/parse v0.0.0-20250122154542-74294addb73e/go.mod h1:3mnrkvGpurZ4ZrTDbYU84xhwXW2TjTKShSwjRi2ihfQ=
github.com/a-h/templ v0.3.857 h1:6EqcJuGZW4OL+2iZ3MD+NnIcG7nGkaQeF2Zq5kf9ZGg=
github.com/a-h/templ v0.3.857/go.mod h1:qhrhAkRFubE7khxLZHsBFHfX+gWwVNKbzKeF9GlPV4M=
github.com/aclements/go-moremath v0.0.0-20210112150236-f10218a38794 h1:xlwdaKcTNVW4PtpQb8aKA4Pjy0CdJHEqvFbAnvR5m2g=
github.com/aclements/go-moremath v0.0.0-20210112150236-f10218a38794/go.mod h1:7e+I0LQFUI9AXWxOfsQROs9xPhoJtbsyWcjJqDd4KPY=
github.com/andybalholm/brotli v1.1.0 h1:eLKJA0d02Lf0mVpIDgYnqXcUn0GqVmEFny3VuID1U3M=
github.com/andybalholm/brotli v1.1.0/go.mod h1:sms7XGricyQI9K10gOSf56VKKWS4oLer58Q+mhRPtnY=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
@@ -40,10 +38,10 @@ github.com/go-stack/stack v1.8.1/go.mod h1:dcoOX6HbPZSZptuspn9bctJ+N/CnF5gGygcUP
github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=
github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/klauspost/compress v1.17.11 h1:In6xLpyWOi1+C7tXUUWv2ot1QvBjxevKAaI6IXrJmUc=
github.com/klauspost/compress v1.17.11/go.mod h1:pMDklpSncoRMuLFrf1W9Ss9KT+0rH90U12bZKk7uwG0=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
@@ -57,13 +55,13 @@ github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/natefinch/atomic v1.0.1 h1:ZPYKxkqQOx3KZ+RsbnP/YsgvxWQPGxjC0oBt2AhwV0A=
github.com/natefinch/atomic v1.0.1/go.mod h1:N/D/ELrljoqDyT3rZrsUmtsuzvHkeB/wWjHV22AZRbM=
github.com/playwright-community/playwright-go v0.5001.0 h1:EY3oB+rU9cUp6CLHguWE8VMZTwAg+83Yyb7dQqEmGLg=
github.com/playwright-community/playwright-go v0.5001.0/go.mod h1:kBNWs/w2aJ2ZUp1wEOOFLXgOqvppFngM5OS+qyhl+ZM=
github.com/playwright-community/playwright-go v0.5101.0 h1:gVCMZThDO76LJ/aCI27lpB8hEAWhZszeS0YB+oTxJp0=
github.com/playwright-community/playwright-go v0.5101.0/go.mod h1:kBNWs/w2aJ2ZUp1wEOOFLXgOqvppFngM5OS+qyhl+ZM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.21.1 h1:DOvXXTqVzvkIewV/CDPFdejpMCGeMcbGCQ8YOmu+Ibk=
github.com/prometheus/client_golang v1.21.1/go.mod h1:U9NM32ykUErtVBxdvD3zfi+EuFkkaBvMb09mIfe0Zgg=
github.com/prometheus/client_golang v1.22.0 h1:rb93p9lokFEsctTys46VnV1kLCDpVZ0a/Y92Vm0Zc6Q=
github.com/prometheus/client_golang v1.22.0/go.mod h1:R7ljNsLXhuQXYZYtw6GAE9AZg8Y7vEW5scdCXrWRXC0=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.62.0 h1:xasJaQlnWAeyHdUBeGjXmutelfJHWMRr+Fg4QszZ2Io=
@@ -77,8 +75,6 @@ github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81P
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/tetratelabs/wazero v1.9.0 h1:IcZ56OuxrtaEz8UYNRHBrUa9bYeX9oVY93KspZZBf/I=
github.com/tetratelabs/wazero v1.9.0/go.mod h1:TSbcXCfFP0L2FGkRPxHphadXPjo1T6W+CseNNY7EkjM=
github.com/yl2chen/cidranger v1.0.2 h1:lbOWZVCG1tCRX4u24kuM1Tb4nHqWkDxwLdoS+SevawU=
github.com/yl2chen/cidranger v1.0.2/go.mod h1:9U1yz7WPYDwf0vpNWFaeRh0bjwz5RVgRy/9UEQfHl0g=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
@@ -96,12 +92,8 @@ golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.38.0 h1:vRMAPTMaeGqVhG5QyLJHqNDwecKTomGeqbnfZyKlBI8=
golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
golang.org/x/net v0.39.0 h1:ZCu7HMWDxpXpaiKdhzIfaltL9Lp31x/3fCP11bc6/fY=
golang.org/x/net v0.39.0/go.mod h1:X7NRbYVEA+ewNkCNyJ513WmMdQ3BineSwVtN2zD/d+E=
golang.org/x/perf v0.0.0-20250408013232-71ba5bc8ccce h1:KAIyikguO7lID+oSo3Dnut9RawUS+RWK8Ejj9KPvwU4=
golang.org/x/perf v0.0.0-20250408013232-71ba5bc8ccce/go.mod h1:tAdCL3nMN92yGFHY2TrzbGPP0q+LaOFewlib1WPJdpA=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -119,8 +111,6 @@ golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.31.0 h1:ioabZlmFYtWhL+TRYpcnNlLwhyxaM9kWTDEmfnprqik=
golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/sys v0.32.0 h1:s77OFDvIQeibCmezSnk/q6iAfkdiQaJi4VzroCFrN20=
golang.org/x/sys v0.32.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
@@ -143,8 +133,8 @@ golang.org/x/tools v0.31.0/go.mod h1:naFTU+Cev749tSJRXJlna0T3WxKvb1kWEx15xA4SdmQ
golang.org/x/tools v0.32.0 h1:Q7N1vhpkQv7ybVzLFtTjvQya2ewbwNDZzUgfXGqtMWU=
golang.org/x/tools v0.32.0/go.mod h1:ZxrU41P/wAbZD8EDa6dDCa6XfpkhJ7HFMjHJXfBDu8s=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.36.4 h1:6A3ZDJHn/eNqc1i+IdefRzy/9PokBTPvcqMySR7NNIM=
google.golang.org/protobuf v1.36.4/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
google.golang.org/protobuf v1.36.5 h1:tPhr+woSbjfYvY6/GPufUoYizxw1cF/yFoxJ2fmpwlM=
google.golang.org/protobuf v1.36.5/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
@@ -152,3 +142,9 @@ gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.6.1 h1:R094WgE8K4JirYjBaOpz/AvTyUu/3wbmAoskKN/pxTI=
honnef.co/go/tools v0.6.1/go.mod h1:3puzxxljPCe8RGJX7BIy1plGbxEOZni5mR2aXe3/uk4=
k8s.io/apimachinery v0.32.3 h1:JmDuDarhDmA/Li7j3aPrwhpNBA94Nvk5zLeOge9HH1U=
k8s.io/apimachinery v0.32.3/go.mod h1:GpHVgxoKlTxClKcteaeuF1Ul/lDVb74KpZcxcmLDElE=
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 h1:/Rv+M11QRah1itp8VhT6HoVx1Ray9eB4DBr+K+/sCJ8=
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3/go.mod h1:18nIHnGi6636UCz6m8i4DhaJ65T6EruyzmoQqI2BVDo=
sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=

View File

@@ -1,15 +1,29 @@
package internal
import (
"errors"
"fmt"
"log/slog"
"net"
"net/http"
"net/netip"
"strings"
"github.com/TecharoHQ/anubis"
"github.com/sebest/xff"
)
// TODO: move into config
type XFFComputePreferences struct {
StripPrivate bool
StripLoopback bool
StripCGNAT bool
StripLLU bool
Flatten bool
}
var CGNat = netip.MustParsePrefix("100.64.0.0/10")
// UnchangingCache sets the Cache-Control header to cache a response for 1 year if
// and only if the application is compiled in "release" mode by Docker.
func UnchangingCache(next http.Handler) http.Handler {
@@ -65,6 +79,112 @@ func XForwardedForToXRealIP(next http.Handler) http.Handler {
})
}
// XForwardedForUpdate sets or updates the X-Forwarded-For header, adding
// the known remote address to an existing chain if present
func XForwardedForUpdate(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
defer next.ServeHTTP(w, r)
pref := XFFComputePreferences{
StripPrivate: true,
StripLoopback: true,
StripCGNAT: true,
Flatten: true,
StripLLU: true,
}
remoteAddr := r.RemoteAddr
origXFFHeader := r.Header.Get("X-Forwarded-For")
if remoteAddr == "@" {
// remote is a unix socket
// do not touch chain
return
}
xffHeaderString, err := computeXFFHeader(remoteAddr, origXFFHeader, pref)
if err != nil {
slog.Debug("computing X-Forwarded-For header failed", "err", err)
return
}
if len(xffHeaderString) == 0 {
r.Header.Del("X-Forwarded-For")
} else {
r.Header.Set("X-Forwarded-For", xffHeaderString)
}
})
}
var (
ErrCantSplitHostParse = errors.New("internal: unable to net.SplitHostParse")
ErrCantParseRemoteIP = errors.New("internal: unable to parse remote IP")
)
func computeXFFHeader(remoteAddr string, origXFFHeader string, pref XFFComputePreferences) (string, error) {
remoteIP, _, err := net.SplitHostPort(remoteAddr)
if err != nil {
return "", fmt.Errorf("%w: %w", ErrCantSplitHostParse, err)
}
parsedRemoteIP, err := netip.ParseAddr(remoteIP)
if err != nil {
return "", fmt.Errorf("%w: %w", ErrCantParseRemoteIP, err)
}
origForwardedList := make([]string, 0, 4)
if origXFFHeader != "" {
origForwardedList = strings.Split(origXFFHeader, ",")
}
origForwardedList = append(origForwardedList, parsedRemoteIP.String())
forwardedList := make([]string, 0, len(origForwardedList))
// this behavior is equivalent to
// ingress-nginx "compute-full-forwarded-for"
// https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#compute-full-forwarded-for
//
// this would be the correct place to strip and/or flatten this list
//
// strip - iterate backwards and eliminate configured trusted IPs
// flatten - only return the last element to avoid spoofing confusion
//
// many applications handle this in different ways, but
// generally they'd be expected to do these two things on
// their own end to find the first non-spoofed IP
for i := len(origForwardedList) - 1; i >= 0; i-- {
segmentIP, err := netip.ParseAddr(origForwardedList[i])
if err != nil {
// can't assess this element, so the remainder of the chain
// can't be trusted. not a fatal error, since anyone can
// spoof an XFF header
slog.Debug("failed to parse XFF segment", "err", err)
break
}
if pref.StripPrivate && segmentIP.IsPrivate() {
continue
}
if pref.StripLoopback && segmentIP.IsLoopback() {
continue
}
if pref.StripLLU && segmentIP.IsLinkLocalUnicast() {
continue
}
if pref.StripCGNAT && CGNat.Contains(segmentIP) {
continue
}
forwardedList = append([]string{segmentIP.String()}, forwardedList...)
}
var xffHeaderString string
if len(forwardedList) == 0 {
xffHeaderString = ""
return xffHeaderString, nil
}
if pref.Flatten {
xffHeaderString = forwardedList[len(forwardedList)-1]
} else {
xffHeaderString = strings.Join(forwardedList, ",")
}
return xffHeaderString, nil
}
// NoStoreCache sets the Cache-Control header to no-store for the response.
func NoStoreCache(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@@ -73,7 +193,7 @@ func NoStoreCache(next http.Handler) http.Handler {
})
}
// Do not allow browsing directory listings in paths that end with /
// NoBrowsing prevents directory browsing by returning a 404 for any request that ends with a "/".
func NoBrowsing(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if strings.HasSuffix(r.URL.Path, "/") {

View File

@@ -8,23 +8,26 @@ import (
)
// GetOGTags is the main function that retrieves Open Graph tags for a URL
func (c *OGTagCache) GetOGTags(url *url.URL) (map[string]string, error) {
func (c *OGTagCache) GetOGTags(url *url.URL, originalHost string) (map[string]string, error) {
if url == nil {
return nil, errors.New("nil URL provided, cannot fetch OG tags")
}
urlStr := c.getTarget(url)
target := c.getTarget(url)
cacheKey := c.generateCacheKey(target, originalHost)
// Check cache first
if cachedTags := c.checkCache(urlStr); cachedTags != nil {
if cachedTags := c.checkCache(cacheKey); cachedTags != nil {
return cachedTags, nil
}
// Fetch HTML content
doc, err := c.fetchHTMLDocument(urlStr)
// Fetch HTML content, passing the original host
doc, err := c.fetchHTMLDocumentWithCache(target, originalHost, cacheKey)
if errors.Is(err, syscall.ECONNREFUSED) {
slog.Debug("Connection refused, returning empty tags")
return nil, nil
} else if errors.Is(err, ErrNotFound) {
// not even worth a debug log...
} else if errors.Is(err, ErrOgHandled) {
// Error was handled in fetchHTMLDocument, return empty tags
return nil, nil
}
if err != nil {
@@ -35,17 +38,28 @@ func (c *OGTagCache) GetOGTags(url *url.URL) (map[string]string, error) {
ogTags := c.extractOGTags(doc)
// Store in cache
c.cache.Set(urlStr, ogTags, c.ogTimeToLive)
c.cache.Set(cacheKey, ogTags, c.ogTimeToLive)
return ogTags, nil
}
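// generateCacheKey builds the cache key for a resolved target URL. When
// ogCacheConsiderHost is enabled, the original Host header is appended so
// that tags cached for one virtual host are not served for another.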
func (c *OGTagCache) generateCacheKey(target string, originalHost string) string {
var cacheKey string
if c.ogCacheConsiderHost {
cacheKey = target + "|" + originalHost
} else {
cacheKey = target
}
return cacheKey
}
// checkCache checks if we have the tags cached and returns them if so
func (c *OGTagCache) checkCache(urlStr string) map[string]string {
if cachedTags, ok := c.cache.Get(urlStr); ok {
func (c *OGTagCache) checkCache(cacheKey string) map[string]string {
if cachedTags, ok := c.cache.Get(cacheKey); ok {
slog.Debug("cache hit", "tags", cachedTags)
return cachedTags
}
slog.Debug("cache miss", "url", urlStr)
slog.Debug("cache miss", "url", cacheKey)
return nil
}

View File

@@ -4,12 +4,13 @@ import (
"net/http"
"net/http/httptest"
"net/url"
"reflect"
"testing"
"time"
)
func TestCheckCache(t *testing.T) {
cache := NewOGTagCache("http://example.com", true, time.Minute)
cache := NewOGTagCache("http://example.com", true, time.Minute, false)
// Set up test data
urlStr := "http://example.com/page"
@@ -17,18 +18,19 @@ func TestCheckCache(t *testing.T) {
"og:title": "Test Title",
"og:description": "Test Description",
}
cacheKey := cache.generateCacheKey(urlStr, "example.com")
// Test cache miss
tags := cache.checkCache(urlStr)
tags := cache.checkCache(cacheKey)
if tags != nil {
t.Errorf("expected nil tags on cache miss, got %v", tags)
}
// Manually add to cache
cache.cache.Set(urlStr, expectedTags, time.Minute)
cache.cache.Set(cacheKey, expectedTags, time.Minute)
// Test cache hit
tags = cache.checkCache(urlStr)
tags = cache.checkCache(cacheKey)
if tags == nil {
t.Fatal("expected non-nil tags on cache hit, got nil")
}
@@ -67,7 +69,7 @@ func TestGetOGTags(t *testing.T) {
defer ts.Close()
// Create an instance of OGTagCache with a short TTL for testing
cache := NewOGTagCache(ts.URL, true, 1*time.Minute)
cache := NewOGTagCache(ts.URL, true, 1*time.Minute, false)
// Parse the test server URL
parsedURL, err := url.Parse(ts.URL)
@@ -76,7 +78,8 @@ func TestGetOGTags(t *testing.T) {
}
// Test fetching OG tags from the test server
ogTags, err := cache.GetOGTags(parsedURL)
// Pass the host from the parsed test server URL
ogTags, err := cache.GetOGTags(parsedURL, parsedURL.Host)
if err != nil {
t.Fatalf("failed to get OG tags: %v", err)
}
@@ -95,13 +98,15 @@ func TestGetOGTags(t *testing.T) {
}
// Test fetching OG tags from the cache
ogTags, err = cache.GetOGTags(parsedURL)
// Pass the host from the parsed test server URL
ogTags, err = cache.GetOGTags(parsedURL, parsedURL.Host)
if err != nil {
t.Fatalf("failed to get OG tags from cache: %v", err)
}
// Test fetching OG tags from the cache (3rd time)
newOgTags, err := cache.GetOGTags(parsedURL)
// Pass the host from the parsed test server URL
newOgTags, err := cache.GetOGTags(parsedURL, parsedURL.Host)
if err != nil {
t.Fatalf("failed to get OG tags from cache: %v", err)
}
@@ -120,3 +125,116 @@ func TestGetOGTags(t *testing.T) {
}
}
// TestGetOGTagsWithHostConsideration tests the behavior of the cache with and without host consideration and for multiple hosts in a theoretical setup.
func TestGetOGTagsWithHostConsideration(t *testing.T) {
var loadCount int // Counter to track how many times the test route is loaded
// Create a test server
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
loadCount++ // Increment counter on each request to the server
w.Header().Set("Content-Type", "text/html")
w.Write([]byte(`
<!DOCTYPE html>
<html>
<head>
<meta property="og:title" content="Test Title" />
<meta property="og:description" content="Test Description" />
</head>
<body><p>Content</p></body>
</html>
`))
}))
defer ts.Close()
parsedURL, err := url.Parse(ts.URL)
if err != nil {
t.Fatalf("failed to parse test server URL: %v", err)
}
expectedTags := map[string]string{
"og:title": "Test Title",
"og:description": "Test Description",
}
testCases := []struct {
name string
ogCacheConsiderHost bool
requests []struct {
host string
expectedLoadCount int // Expected load count *after* this request
}
}{
{
name: "Host Not Considered - Same Host",
ogCacheConsiderHost: false,
requests: []struct {
host string
expectedLoadCount int
}{
{"host1", 1}, // First request, miss
{"host1", 1}, // Second request, same host, hit (host ignored)
},
},
{
name: "Host Not Considered - Different Host",
ogCacheConsiderHost: false,
requests: []struct {
host string
expectedLoadCount int
}{
{"host1", 1}, // First request, miss
{"host2", 1}, // Second request, different host, hit (host ignored)
},
},
{
name: "Host Considered - Same Host",
ogCacheConsiderHost: true,
requests: []struct {
host string
expectedLoadCount int
}{
{"host1", 1}, // First request, miss
{"host1", 1}, // Second request, same host, hit
},
},
{
name: "Host Considered - Different Host",
ogCacheConsiderHost: true,
requests: []struct {
host string
expectedLoadCount int
}{
{"host1", 1}, // First request, miss
{"host2", 2}, // Second request, different host, miss
{"host2", 2}, // Third request, same as second, hit
{"host1", 2}, // Fourth request, same as first, hit
},
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
loadCount = 0 // Reset load count for each test case
cache := NewOGTagCache(ts.URL, true, 1*time.Minute, tc.ogCacheConsiderHost)
for i, req := range tc.requests {
ogTags, err := cache.GetOGTags(parsedURL, req.host)
if err != nil {
t.Errorf("Request %d (host: %s): unexpected error: %v", i+1, req.host, err)
continue // Skip further checks for this request if error occurred
}
// Verify tags are correct (should always be the same in this setup)
if !reflect.DeepEqual(ogTags, expectedTags) {
t.Errorf("Request %d (host: %s): expected tags %v, got %v", i+1, req.host, expectedTags, ogTags)
}
// Verify the load count to check cache hit/miss behavior
if loadCount != req.expectedLoadCount {
t.Errorf("Request %d (host: %s): expected load count %d, got %d (cache hit/miss mismatch)", i+1, req.host, req.expectedLoadCount, loadCount)
}
}
})
}
}

View File

@@ -1,9 +1,11 @@
package ogtags
import (
"context"
"errors"
"fmt"
"golang.org/x/net/html"
"io"
"log/slog"
"mime"
"net"
@@ -11,57 +13,79 @@ import (
)
var (
ErrNotFound = errors.New("page not found") /*todo: refactor into common errors lib? */
emptyMap = map[string]string{} // used to indicate an empty result in the cache. Can't use nil as it would be a cache miss.
ErrOgHandled = errors.New("og: handled error") // used to indicate that the error was handled and should not be logged
emptyMap = map[string]string{} // used to indicate an empty result in the cache. Can't use nil as it would be a cache miss.
)
func (c *OGTagCache) fetchHTMLDocument(urlStr string) (*html.Node, error) {
resp, err := c.client.Get(urlStr)
// fetchHTMLDocumentWithCache fetches the HTML document from the given URL string,
// preserving the original host header.
func (c *OGTagCache) fetchHTMLDocumentWithCache(urlStr string, originalHost string, cacheKey string) (*html.Node, error) {
req, err := http.NewRequestWithContext(context.Background(), "GET", urlStr, nil)
if err != nil {
return nil, fmt.Errorf("failed to create http request: %w", err)
}
// Set the Host header to the original host
if originalHost != "" {
req.Host = originalHost
}
// Add proxy headers
req.Header.Set("X-Forwarded-Proto", "https")
req.Header.Set("User-Agent", "Anubis-OGTag-Fetcher/1.0") // For tracking purposes
// Send the request
resp, err := c.client.Do(req)
if err != nil {
var netErr net.Error
if errors.As(err, &netErr) && netErr.Timeout() {
slog.Debug("og: request timed out", "url", urlStr)
c.cache.Set(urlStr, emptyMap, c.ogTimeToLive/2) // Cache empty result for half the TTL to not spam the server
c.cache.Set(cacheKey, emptyMap, c.ogTimeToLive/2) // Cache empty result for half the TTL to not spam the server
}
return nil, fmt.Errorf("http get failed: %w", err)
}
// this defer will call MaxBytesReader's Close, which closes the original body.
defer resp.Body.Close()
// Ensure the response body is closed
defer func(Body io.ReadCloser) {
err := Body.Close()
if err != nil {
slog.Debug("og: error closing response body", "url", urlStr, "error", err)
}
}(resp.Body)
if resp.StatusCode != http.StatusOK {
slog.Debug("og: received non-OK status code", "url", urlStr, "status", resp.StatusCode)
c.cache.Set(urlStr, emptyMap, c.ogTimeToLive) // Cache empty result for non-successful status codes
return nil, ErrNotFound
c.cache.Set(cacheKey, emptyMap, c.ogTimeToLive) // Cache empty result for non-successful status codes
return nil, fmt.Errorf("%w: page not found", ErrOgHandled)
}
// Check content type
ct := resp.Header.Get("Content-Type")
if ct == "" {
// assume non html body
return nil, fmt.Errorf("missing Content-Type header")
} else {
mediaType, _, err := mime.ParseMediaType(ct)
if err != nil {
// Malformed Content-Type header
return nil, fmt.Errorf("invalid Content-Type '%s': %w", ct, err)
slog.Debug("og: malformed Content-Type header", "url", urlStr, "contentType", ct)
return nil, fmt.Errorf("%w malformed Content-Type header: %w", ErrOgHandled, err)
}
if mediaType != "text/html" && mediaType != "application/xhtml+xml" {
return nil, fmt.Errorf("unsupported Content-Type: %s", mediaType)
slog.Debug("og: unsupported Content-Type", "url", urlStr, "contentType", mediaType)
return nil, fmt.Errorf("%w unsupported Content-Type: %s", ErrOgHandled, mediaType)
}
}
resp.Body = http.MaxBytesReader(nil, resp.Body, c.maxContentLength)
resp.Body = http.MaxBytesReader(nil, resp.Body, maxContentLength)
doc, err := html.Parse(resp.Body)
if err != nil {
// Check if the error is specifically because the limit was exceeded
var maxBytesErr *http.MaxBytesError
if errors.As(err, &maxBytesErr) {
slog.Debug("og: content exceeded max length", "url", urlStr, "limit", c.maxContentLength)
return nil, fmt.Errorf("content too large: exceeded %d bytes", c.maxContentLength)
slog.Debug("og: content exceeded max length", "url", urlStr, "limit", maxContentLength)
return nil, fmt.Errorf("content too large: exceeded %d bytes", maxContentLength)
}
// parsing error (e.g., malformed HTML)
return nil, fmt.Errorf("failed to parse HTML: %w", err)
}

View File

@@ -2,6 +2,7 @@ package ogtags
import (
"fmt"
"golang.org/x/net/html"
"io"
"net/http"
"net/http/httptest"
@@ -78,8 +79,8 @@ func TestFetchHTMLDocument(t *testing.T) {
}))
defer ts.Close()
cache := NewOGTagCache("", true, time.Minute)
doc, err := cache.fetchHTMLDocument(ts.URL)
cache := NewOGTagCache("", true, time.Minute, false)
doc, err := cache.fetchHTMLDocument(ts.URL, "anything")
if tt.expectError {
if err == nil {
@@ -105,9 +106,9 @@ func TestFetchHTMLDocumentInvalidURL(t *testing.T) {
t.Skip("test requires theoretical network egress")
}
cache := NewOGTagCache("", true, time.Minute)
cache := NewOGTagCache("", true, time.Minute, false)
doc, err := cache.fetchHTMLDocument("http://invalid.url.that.doesnt.exist.example")
doc, err := cache.fetchHTMLDocument("http://invalid.url.that.doesnt.exist.example", "anything")
if err == nil {
t.Error("expected error for invalid URL, got nil")
@@ -117,3 +118,9 @@ func TestFetchHTMLDocumentInvalidURL(t *testing.T) {
t.Error("expected nil document for invalid URL, got non-nil")
}
}
// fetchHTMLDocument allows you to call fetchHTMLDocumentWithCache without a duplicate generateCacheKey call
func (c *OGTagCache) fetchHTMLDocument(urlStr string, originalHost string) (*html.Node, error) {
cacheKey := c.generateCacheKey(urlStr, originalHost)
return c.fetchHTMLDocumentWithCache(urlStr, originalHost, cacheKey)
}

View File

@@ -104,7 +104,7 @@ func TestIntegrationGetOGTags(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Create cache instance
cache := NewOGTagCache(ts.URL, true, 1*time.Minute)
cache := NewOGTagCache(ts.URL, true, 1*time.Minute, false)
// Create URL for test
testURL, _ := url.Parse(ts.URL)
@@ -112,7 +112,8 @@ func TestIntegrationGetOGTags(t *testing.T) {
testURL.RawQuery = tc.query
// Get OG tags
ogTags, err := cache.GetOGTags(testURL)
// Pass the host from the test URL
ogTags, err := cache.GetOGTags(testURL, testURL.Host)
// Check error expectation
if tc.expectError {
@@ -139,7 +140,8 @@ func TestIntegrationGetOGTags(t *testing.T) {
}
// Test cache retrieval
cachedOGTags, err := cache.GetOGTags(testURL)
// Pass the host from the test URL
cachedOGTags, err := cache.GetOGTags(testURL, testURL.Host)
if err != nil {
t.Fatalf("failed to get OG tags from cache: %v", err)
}

View File

@@ -1,51 +1,111 @@
package ogtags
import (
"context"
"log/slog"
"net"
"net/http"
"net/url"
"strings"
"time"
"github.com/TecharoHQ/anubis/decaymap"
)
const (
maxContentLength = 16 << 20 // 16 MiB in bytes, if there is a reasonable reason that you need more than this...Why?
httpTimeout = 5 * time.Second /*todo: make this configurable?*/
)
type OGTagCache struct {
cache *decaymap.Impl[string, map[string]string]
target string
ogPassthrough bool
ogTimeToLive time.Duration
approvedTags []string
approvedPrefixes []string
client *http.Client
maxContentLength int64
cache *decaymap.Impl[string, map[string]string]
targetURL *url.URL
ogCacheConsiderHost bool
ogPassthrough bool
ogTimeToLive time.Duration
approvedTags []string
approvedPrefixes []string
client *http.Client
}
func NewOGTagCache(target string, ogPassthrough bool, ogTimeToLive time.Duration) *OGTagCache {
func NewOGTagCache(target string, ogPassthrough bool, ogTimeToLive time.Duration, ogTagsConsiderHost bool) *OGTagCache {
// Predefined approved tags and prefixes
// In the future, these could come from configuration
defaultApprovedTags := []string{"description", "keywords", "author"}
defaultApprovedPrefixes := []string{"og:", "twitter:", "fediverse:"}
client := &http.Client{
Timeout: 5 * time.Second, /*make this configurable?*/
var parsedTargetURL *url.URL
var err error
if target == "" {
// Default to localhost if target is empty
parsedTargetURL, _ = url.Parse("http://localhost")
} else {
parsedTargetURL, err = url.Parse(target)
if err != nil {
slog.Debug("og: failed to parse target URL, treating as non-unix", "target", target, "error", err)
// If parsing fails, treat it as a non-unix target for backward compatibility or default behavior
// For now, assume it's not a scheme issue but maybe an invalid char, etc.
// A simple string target might be intended if it's not a full URL.
parsedTargetURL = &url.URL{Scheme: "http", Host: target} // Assume http if scheme missing and host-like
if !strings.Contains(target, "://") && !strings.HasPrefix(target, "unix:") {
// If it looks like just a host/host:port (and not unix), prepend http:// (todo: is this bad...? Trace path to see if i can yell at user to do it right)
parsedTargetURL, _ = url.Parse("http://" + target) // fetch cares about scheme but anubis doesn't
}
}
}
const maxContentLength = 16 << 20 // 16 MiB in bytes
client := &http.Client{
Timeout: httpTimeout,
}
// Configure custom transport for Unix sockets
if parsedTargetURL.Scheme == "unix" {
socketPath := parsedTargetURL.Path // For unix scheme, path is the socket path
client.Transport = &http.Transport{
DialContext: func(_ context.Context, _, _ string) (net.Conn, error) {
return net.Dial("unix", socketPath)
},
}
}
return &OGTagCache{
cache: decaymap.New[string, map[string]string](),
target: target,
ogPassthrough: ogPassthrough,
ogTimeToLive: ogTimeToLive,
approvedTags: defaultApprovedTags,
approvedPrefixes: defaultApprovedPrefixes,
client: client,
maxContentLength: maxContentLength,
cache: decaymap.New[string, map[string]string](),
targetURL: parsedTargetURL, // Store the parsed URL
ogPassthrough: ogPassthrough,
ogTimeToLive: ogTimeToLive,
ogCacheConsiderHost: ogTagsConsiderHost, // todo: refactor to be a separate struct
approvedTags: defaultApprovedTags,
approvedPrefixes: defaultApprovedPrefixes,
client: client,
}
}
// getTarget constructs the target URL string for fetching OG tags.
// For Unix sockets, it creates a "fake" HTTP URL that the custom dialer understands.
func (c *OGTagCache) getTarget(u *url.URL) string {
return c.target + u.Path
if c.targetURL.Scheme == "unix" {
// The custom dialer ignores the host, but we need a valid http URL structure.
// Use "unix" as a placeholder host. Path and Query from original request are appended.
fakeURL := &url.URL{
Scheme: "http", // Scheme must be http/https for client.Get
Host: "unix", // Arbitrary host, ignored by custom dialer
Path: u.Path,
RawQuery: u.RawQuery,
}
return fakeURL.String()
}
// For regular http/https targets
target := *c.targetURL // Make a copy
target.Path = u.Path
target.RawQuery = u.RawQuery
return target.String()
}
func (c *OGTagCache) Cleanup() {
c.cache.Cleanup()
if c.cache != nil {
c.cache.Cleanup()
}
}

View File

@@ -1,7 +1,16 @@
package ogtags
import (
"context"
"errors"
"fmt"
"net"
"net/http"
"net/url"
"os"
"path/filepath"
"reflect"
"strings"
"testing"
"time"
)
@@ -29,14 +38,23 @@ func TestNewOGTagCache(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
cache := NewOGTagCache(tt.target, tt.ogPassthrough, tt.ogTimeToLive)
cache := NewOGTagCache(tt.target, tt.ogPassthrough, tt.ogTimeToLive, false)
if cache == nil {
t.Fatal("expected non-nil cache, got nil")
}
if cache.target != tt.target {
t.Errorf("expected target %s, got %s", tt.target, cache.target)
// Check the parsed targetURL, handling the default case for empty target
expectedURLStr := tt.target
if tt.target == "" {
// Default behavior when target is empty is now http://localhost
expectedURLStr = "http://localhost"
} else if !strings.Contains(tt.target, "://") && !strings.HasPrefix(tt.target, "unix:") {
// Handle case where target is just host or host:port (and not unix)
expectedURLStr = "http://" + tt.target
}
if cache.targetURL.String() != expectedURLStr {
t.Errorf("expected targetURL %s, got %s", expectedURLStr, cache.targetURL.String())
}
if cache.ogPassthrough != tt.ogPassthrough {
@@ -50,6 +68,45 @@ func TestNewOGTagCache(t *testing.T) {
}
}
// TestNewOGTagCache_UnixSocket specifically tests unix socket initialization
func TestNewOGTagCache_UnixSocket(t *testing.T) {
tempDir := t.TempDir()
socketPath := filepath.Join(tempDir, "test.sock")
target := "unix://" + socketPath
cache := NewOGTagCache(target, true, 5*time.Minute, false)
if cache == nil {
t.Fatal("expected non-nil cache, got nil")
}
if cache.targetURL.Scheme != "unix" {
t.Errorf("expected targetURL scheme 'unix', got '%s'", cache.targetURL.Scheme)
}
if cache.targetURL.Path != socketPath {
t.Errorf("expected targetURL path '%s', got '%s'", socketPath, cache.targetURL.Path)
}
// Check if the client transport is configured for Unix sockets
transport, ok := cache.client.Transport.(*http.Transport)
if !ok {
t.Fatalf("expected client transport to be *http.Transport, got %T", cache.client.Transport)
}
if transport.DialContext == nil {
t.Fatal("expected client transport DialContext to be non-nil for unix socket")
}
// Attempt a dummy dial to see if it uses the correct path (optional, more involved check)
dummyConn, err := transport.DialContext(context.Background(), "", "")
if err == nil {
dummyConn.Close()
t.Log("DialContext seems functional, but couldn't verify path without a listener")
} else if !strings.Contains(err.Error(), "connect: connection refused") && !strings.Contains(err.Error(), "connect: no such file or directory") {
// We expect connection refused or not found if nothing is listening
t.Errorf("DialContext failed with unexpected error: %v", err)
}
}
func TestGetTarget(t *testing.T) {
tests := []struct {
name string
@@ -66,24 +123,39 @@ func TestGetTarget(t *testing.T) {
expected: "http://example.com",
},
{
name: "With complex path",
target: "http://example.com",
path: "/pag(#*((#@)ΓΓΓΓe/Γ",
query: "id=123",
expected: "http://example.com/pag(#*((#@)ΓΓΓΓe/Γ",
name: "With complex path",
target: "http://example.com",
path: "/pag(#*((#@)ΓΓΓΓe/Γ",
query: "id=123",
// Expect URL encoding and query parameter
expected: "http://example.com/pag%28%23%2A%28%28%23@%29%CE%93%CE%93%CE%93%CE%93e/%CE%93?id=123",
},
{
name: "With query and path",
target: "http://example.com",
path: "/page",
query: "id=123",
expected: "http://example.com/page",
expected: "http://example.com/page?id=123",
},
{
name: "Unix socket target",
target: "unix:/tmp/anubis.sock",
path: "/some/path",
query: "key=value&flag=true",
expected: "http://unix/some/path?key=value&flag=true", // Scheme becomes http, host is 'unix'
},
{
name: "Unix socket target with ///",
target: "unix:///var/run/anubis.sock",
path: "/",
query: "",
expected: "http://unix/",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
cache := NewOGTagCache(tt.target, false, time.Minute)
cache := NewOGTagCache(tt.target, false, time.Minute, false)
u := &url.URL{
Path: tt.path,
@@ -98,3 +170,86 @@ func TestGetTarget(t *testing.T) {
})
}
}
// TestIntegrationGetOGTags_UnixSocket tests fetching OG tags via a Unix socket.
func TestIntegrationGetOGTags_UnixSocket(t *testing.T) {
tempDir := t.TempDir()
socketPath := filepath.Join(tempDir, "anubis-test.sock")
// Ensure the socket does not exist initially
_ = os.Remove(socketPath)
// Create a simple HTTP server listening on the Unix socket
listener, err := net.Listen("unix", socketPath)
if err != nil {
t.Fatalf("Failed to listen on unix socket %s: %v", socketPath, err)
}
defer func(listener net.Listener, socketPath string) {
if listener != nil {
if err := listener.Close(); err != nil && !errors.Is(err, net.ErrClosed) {
t.Logf("Error closing listener: %v", err)
}
}
if _, err := os.Stat(socketPath); err == nil {
if err := os.Remove(socketPath); err != nil {
t.Logf("Error removing socket file %s: %v", socketPath, err)
}
}
}(listener, socketPath)
server := &http.Server{
Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "text/html")
fmt.Fprintln(w, `<!DOCTYPE html><html><head><meta property="og:title" content="Unix Socket Test" /></head><body>Test</body></html>`)
}),
}
go func() {
if err := server.Serve(listener); err != nil && !errors.Is(err, http.ErrServerClosed) {
t.Logf("Unix socket server error: %v", err)
}
}()
defer func(server *http.Server, ctx context.Context) {
err := server.Shutdown(ctx)
if err != nil {
t.Logf("Error shutting down server: %v", err)
}
}(server, context.Background()) // Ensure server is shut down
// Wait a moment for the server to start
time.Sleep(100 * time.Millisecond)
// Create cache instance pointing to the Unix socket
targetURL := "unix://" + socketPath
cache := NewOGTagCache(targetURL, true, 1*time.Minute, false)
// Create a dummy URL for the request (path and query matter)
testReqURL, _ := url.Parse("/some/page?query=1")
// Get OG tags
// Pass an empty string for host, as it's irrelevant for unix sockets
ogTags, err := cache.GetOGTags(testReqURL, "")
if err != nil {
t.Fatalf("GetOGTags failed for unix socket: %v", err)
}
expectedTags := map[string]string{
"og:title": "Unix Socket Test",
}
if !reflect.DeepEqual(ogTags, expectedTags) {
t.Errorf("Expected OG tags %v, got %v", expectedTags, ogTags)
}
// Test cache retrieval (should hit cache)
// Pass an empty string for host
cachedTags, err := cache.GetOGTags(testReqURL, "")
if err != nil {
t.Fatalf("GetOGTags (cache hit) failed for unix socket: %v", err)
}
if !reflect.DeepEqual(cachedTags, expectedTags) {
t.Errorf("Expected cached OG tags %v, got %v", expectedTags, cachedTags)
}
}

View File

@@ -12,7 +12,7 @@ import (
// TestExtractOGTags updated with correct expectations based on filtering logic
func TestExtractOGTags(t *testing.T) {
// Use a cache instance that reflects the default approved lists
testCache := NewOGTagCache("", false, time.Minute)
testCache := NewOGTagCache("", false, time.Minute, false)
// Manually set approved tags/prefixes based on the user request for clarity
testCache.approvedTags = []string{"description"}
testCache.approvedPrefixes = []string{"og:"}
@@ -189,7 +189,7 @@ func TestIsOGMetaTag(t *testing.T) {
func TestExtractMetaTagInfo(t *testing.T) {
// Use a cache instance that reflects the default approved lists
testCache := NewOGTagCache("", false, time.Minute)
testCache := NewOGTagCache("", false, time.Minute, false)
testCache.approvedTags = []string{"description"}
testCache.approvedPrefixes = []string{"og:"}

View File

@@ -3,6 +3,7 @@ package internal
import (
"fmt"
"log/slog"
"net/http"
"os"
)
@@ -22,3 +23,14 @@ func InitSlog(level string) {
})
slog.SetDefault(slog.New(h))
}
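// GetRequestLogger returns a logger annotated with request metadata
// (user agent, language, priority, and forwarding headers) so that
// per-request decisions are easier to debug.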
func GetRequestLogger(r *http.Request) *slog.Logger {
return slog.With(
"user_agent", r.UserAgent(),
"accept_language", r.Header.Get("Accept-Language"),
"priority", r.Header.Get("Priority"),
"x-forwarded-for",
r.Header.Get("X-Forwarded-For"),
"x-real-ip", r.Header.Get("X-Real-Ip"),
)
}

View File

@@ -25,7 +25,6 @@ import (
"os"
"os/exec"
"strconv"
"strings"
"testing"
"time"
@@ -53,6 +52,24 @@ var (
realIP: placeholderIP,
userAgent: "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/120.0.6099.28 Safari/537.36",
},
{
name: "Amazonbot",
action: actionDeny,
realIP: placeholderIP,
userAgent: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/600.2.5 (KHTML, like Gecko) Version/8.0.2 Safari/600.2.5 (Amazonbot/0.1; +https://developer.amazon.com/support/amazonbot)",
},
{
name: "Amazonbot",
action: actionDeny,
realIP: placeholderIP,
userAgent: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/600.2.5 (KHTML, like Gecko) Version/8.0.2 Safari/600.2.5 (Amazonbot/0.1; +https://developer.amazon.com/support/amazonbot)",
},
{
name: "PerplexityAI",
action: actionDeny,
realIP: placeholderIP,
userAgent: "Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; PerplexityBot/1.0; +https://perplexity.ai/perplexitybot)",
},
{
name: "kagiBadIP",
action: actionChallenge,
@@ -81,7 +98,7 @@ const (
actionChallenge action = "CHALLENGE"
placeholderIP = "fd11:5ee:bad:c0de::"
playwrightVersion = "1.50.1"
playwrightVersion = "1.51.1"
)
type action string
@@ -102,6 +119,9 @@ func doesNPXExist(t *testing.T) {
}
func run(t *testing.T, command string) string {
if testing.Short() {
t.Skip("skipping integration smoke testing in short mode")
}
t.Helper()
shPath, err := exec.LookPath("sh")
@@ -245,6 +265,132 @@ func TestPlaywrightBrowser(t *testing.T) {
}
}
func TestPlaywrightWithBasePrefix(t *testing.T) {
if os.Getenv("DONT_USE_NETWORK") != "" {
t.Skip("test requires network egress")
return
}
t.Skip("NOTE(Xe)\\ these tests require HTTPS support in #364")
doesNPXExist(t)
startPlaywright(t)
pw := setupPlaywright(t)
basePrefix := "/myapp"
anubisURL := spawnAnubisWithOptions(t, basePrefix)
// Reset BasePrefix after test
t.Cleanup(func() {
anubis.BasePrefix = ""
})
browsers := []playwright.BrowserType{pw.Chromium}
for _, typ := range browsers {
t.Run(typ.Name()+"/basePrefix", func(t *testing.T) {
browser, err := typ.Connect(buildBrowserConnect(typ.Name()), playwright.BrowserTypeConnectOptions{
ExposeNetwork: playwright.String("<loopback>"),
})
if err != nil {
t.Fatalf("could not connect to remote browser: %v", err)
}
defer browser.Close()
ctx, err := browser.NewContext(playwright.BrowserNewContextOptions{
AcceptDownloads: playwright.Bool(false),
ExtraHttpHeaders: map[string]string{
"X-Real-Ip": "127.0.0.1",
},
UserAgent: playwright.String("Mozilla/5.0 (X11; Linux x86_64; rv:136.0) Gecko/20100101 Firefox/136.0"),
})
if err != nil {
t.Fatalf("could not create context: %v", err)
}
defer ctx.Close()
page, err := ctx.NewPage()
if err != nil {
t.Fatalf("could not create page: %v", err)
}
defer page.Close()
// Test accessing the base URL with prefix
_, err = page.Goto(anubisURL+basePrefix, playwright.PageGotoOptions{
Timeout: pwTimeout(testCases[0], time.Now().Add(5*time.Second)),
})
if err != nil {
pwFail(t, page, "could not navigate to test server with base prefix: %v", err)
}
// Check if challenge page is displayed
image := page.Locator("#image[src*=pensive], #image[src*=happy]")
err = image.WaitFor(playwright.LocatorWaitForOptions{
Timeout: pwTimeout(testCases[0], time.Now().Add(5*time.Second)),
})
if err != nil {
pwFail(t, page, "could not wait for challenge image: %v", err)
}
isVisible, err := image.IsVisible()
if err != nil {
pwFail(t, page, "could not check if challenge image is visible: %v", err)
}
if !isVisible {
pwFail(t, page, "challenge image not visible")
}
// Complete the challenge
// Wait for the challenge to be solved
anubisTest := page.Locator("#anubis-test")
err = anubisTest.WaitFor(playwright.LocatorWaitForOptions{
Timeout: pwTimeout(testCases[0], time.Now().Add(30*time.Second)),
})
if err != nil {
pwFail(t, page, "could not wait for challenge to be solved: %v", err)
}
// Verify the challenge was solved
content, err := anubisTest.TextContent(playwright.LocatorTextContentOptions{})
if err != nil {
pwFail(t, page, "could not get text content: %v", err)
}
var tm int64
if _, err := fmt.Sscanf(content, "%d", &tm); err != nil {
pwFail(t, page, "unexpected output: %s", content)
}
// Check if the timestamp is reasonable
now := time.Now().Unix()
if tm < now-60 || tm > now+60 {
pwFail(t, page, "unexpected timestamp in output: %d not in range %d±60", tm, now)
}
// Check if cookie has the correct path
cookies, err := ctx.Cookies()
if err != nil {
pwFail(t, page, "could not get cookies: %v", err)
}
var found bool
for _, cookie := range cookies {
if cookie.Name == anubis.CookieName {
found = true
if cookie.Path != basePrefix+"/" {
t.Errorf("cookie path is wrong, wanted %s, got: %s", basePrefix+"/", cookie.Path)
}
break
}
}
if !found {
t.Errorf("Cookie %q not found", anubis.CookieName)
}
})
}
}
func buildBrowserConnect(name string) string {
u, _ := url.Parse(*playwrightServer)
@@ -358,14 +504,14 @@ func pwFail(t *testing.T, page playwright.Page, format string, args ...any) erro
}
func pwTimeout(tc testCase, deadline time.Time) *float64 {
max := *playwrightMaxTime
maxTime := *playwrightMaxTime
if tc.isHard {
max = *playwrightMaxHardTime
maxTime = *playwrightMaxHardTime
}
d := time.Until(deadline)
if d <= 0 || d > max {
return playwright.Float(float64(max.Milliseconds()))
if d <= 0 || d > maxTime {
return playwright.Float(float64(maxTime.Milliseconds()))
}
return playwright.Float(float64(d.Milliseconds()))
}
@@ -379,7 +525,7 @@ func saveScreenshot(t *testing.T, page playwright.Page) {
return
}
f, err := os.CreateTemp("./var", "anubis-test-fail-"+strings.ReplaceAll(t.Name(), "/", "--")+"-*.png")
f, err := os.CreateTemp("", "anubis-test-fail-*.png")
if err != nil {
t.Logf("could not create temporary file: %v", err)
return
@@ -411,6 +557,10 @@ func setupPlaywright(t *testing.T) *playwright.Playwright {
}
func spawnAnubis(t *testing.T) string {
return spawnAnubisWithOptions(t, "")
}
func spawnAnubisWithOptions(t *testing.T, basePrefix string) string {
t.Helper()
h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@@ -437,6 +587,7 @@ func spawnAnubis(t *testing.T) string {
Policy: policy,
ServeRobotsTXT: true,
Target: "http://" + host + ":" + port,
BasePrefix: basePrefix,
})
if err != nil {
t.Fatalf("can't construct libanubis.Server: %v", err)

internal/xff_test.go Normal file (166 lines added)

@@ -0,0 +1,166 @@
package internal
import (
"errors"
"net/http"
"net/http/httptest"
"testing"
)
func TestXForwardedForUpdateIgnoreUnix(t *testing.T) {
var remoteAddr = ""
var xff = ""
h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
remoteAddr = r.RemoteAddr
xff = r.Header.Get("X-Forwarded-For")
w.WriteHeader(http.StatusOK)
})
r := httptest.NewRequest(http.MethodGet, "/", nil)
r.RemoteAddr = "@"
w := httptest.NewRecorder()
XForwardedForUpdate(h).ServeHTTP(w, r)
if r.RemoteAddr != remoteAddr {
t.Errorf("wanted remoteAddr to be %s, got: %s", r.RemoteAddr, remoteAddr)
}
if xff != "" {
t.Error("handler added X-Forwarded-For when it should not have")
}
}
func TestXForwardedForUpdateAddToChain(t *testing.T) {
var xff = ""
const expected = "1.1.1.1"
h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
xff = r.Header.Get("X-Forwarded-For")
w.WriteHeader(http.StatusOK)
})
srv := httptest.NewServer(XForwardedForUpdate(h))
r, err := http.NewRequest(http.MethodGet, srv.URL, nil)
if err != nil {
t.Fatal(err)
}
r.Header.Set("X-Forwarded-For", "1.1.1.1,10.20.30.40")
if _, err := srv.Client().Do(r); err != nil {
t.Fatal(err)
}
if xff != expected {
t.Logf("expected: %s", expected)
t.Logf("got: %s", xff)
t.Error("X-Forwarded-For header was not what was expected")
}
}
func TestComputeXFFHeader(t *testing.T) {
for _, tt := range []struct {
name string
remoteAddr string
origXFFHeader string
pref XFFComputePreferences
result string
err error
}{
{
name: "StripPrivate",
remoteAddr: "127.0.0.1:80",
origXFFHeader: "1.1.1.1,10.0.0.1",
pref: XFFComputePreferences{
StripPrivate: true,
},
result: "1.1.1.1,127.0.0.1",
},
{
name: "StripLoopback",
remoteAddr: "127.0.0.1:80",
origXFFHeader: "1.1.1.1,10.0.0.1,127.0.0.1",
pref: XFFComputePreferences{
StripLoopback: true,
},
result: "1.1.1.1,10.0.0.1",
},
{
name: "StripCGNAT",
remoteAddr: "100.64.0.1:80",
origXFFHeader: "1.1.1.1,10.0.0.1,100.64.0.1",
pref: XFFComputePreferences{
StripCGNAT: true,
},
result: "1.1.1.1,10.0.0.1",
},
{
name: "StripLinkLocalUnicastIPv4",
remoteAddr: "169.254.0.1:80",
origXFFHeader: "1.1.1.1,10.0.0.1,169.254.0.1",
pref: XFFComputePreferences{
StripLLU: true,
},
result: "1.1.1.1,10.0.0.1",
},
{
name: "StripLinkLocalUnicastIPv6",
remoteAddr: "169.254.0.1:80",
origXFFHeader: "1.1.1.1,10.0.0.1,fe80::",
pref: XFFComputePreferences{
StripLLU: true,
},
result: "1.1.1.1,10.0.0.1",
},
{
name: "Flatten",
remoteAddr: "127.0.0.1:80",
origXFFHeader: "1.1.1.1,10.0.0.1,fe80::,100.64.0.1,169.254.0.1",
pref: XFFComputePreferences{
StripPrivate: true,
StripLoopback: true,
StripCGNAT: true,
StripLLU: true,
Flatten: true,
},
result: "1.1.1.1",
},
{
name: "invalid-ip-port",
remoteAddr: "fe80::",
err: ErrCantSplitHostParse,
},
{
name: "invalid-remote-ip",
remoteAddr: "anubis:80",
err: ErrCantParseRemoteIP,
},
{
name: "no-xff-dont-panic",
remoteAddr: "127.0.0.1:80",
pref: XFFComputePreferences{
StripPrivate: true,
StripLoopback: true,
StripCGNAT: true,
StripLLU: true,
Flatten: true,
},
},
} {
t.Run(tt.name, func(t *testing.T) {
result, err := computeXFFHeader(tt.remoteAddr, tt.origXFFHeader, tt.pref)
if err != nil && !errors.Is(err, tt.err) {
t.Errorf("computeXFFHeader got the wrong error, wanted %v but got: %v", tt.err, err)
}
if result != tt.result {
t.Errorf("computeXFFHeader returned the wrong result, wanted %q but got: %q", tt.result, result)
}
})
}
}
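// What follows is a minimal sketch of a computeXFFHeader implementation that
// would satisfy the table-driven tests above. It is an assumption for
// illustration only: the real function in internal/xff.go is not shown in
// this diff, and only XFFComputePreferences, ErrCantSplitHostParse and
// ErrCantParseRemoteIP are names taken from the tests. Assumes "fmt", "net"
// and "strings" are imported alongside the package's existing imports.
func computeXFFHeaderSketch(remoteAddr, origXFF string, pref XFFComputePreferences) (string, error) {
    host, _, err := net.SplitHostPort(remoteAddr)
    if err != nil {
        return "", fmt.Errorf("%w: %v", ErrCantSplitHostParse, err)
    }
    if net.ParseIP(host) == nil {
        return "", fmt.Errorf("%w: %q", ErrCantParseRemoteIP, host)
    }
    // Append the direct peer to the forwarded chain, then filter every hop.
    var chain []string
    if origXFF != "" {
        chain = strings.Split(origXFF, ",")
    }
    chain = append(chain, host)
    _, cgnat, _ := net.ParseCIDR("100.64.0.0/10") // RFC 6598 shared address space
    var kept []string
    for _, hop := range chain {
        ip := net.ParseIP(strings.TrimSpace(hop))
        if ip == nil {
            continue
        }
        switch {
        case pref.StripPrivate && ip.IsPrivate():
        case pref.StripLoopback && ip.IsLoopback():
        case pref.StripCGNAT && cgnat.Contains(ip):
        case pref.StripLLU && ip.IsLinkLocalUnicast():
        default:
            kept = append(kept, ip.String())
        }
    }
    if pref.Flatten && len(kept) > 1 {
        kept = kept[:1] // keep only the first (client-most) hop
    }
    return strings.Join(kept, ","), nil
}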


@@ -1,43 +1,32 @@
package lib
import (
"context"
"crypto/ed25519"
"crypto/rand"
"crypto/sha256"
"crypto/subtle"
"encoding/hex"
"encoding/json"
"fmt"
"io"
"io/fs"
"log"
"log/slog"
"math"
"net"
"net/http"
"os"
"path/filepath"
"net/url"
"slices"
"strconv"
"strings"
"time"
"github.com/a-h/templ"
"github.com/golang-jwt/jwt/v5"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/TecharoHQ/anubis"
"github.com/TecharoHQ/anubis/data"
"github.com/TecharoHQ/anubis/decaymap"
"github.com/TecharoHQ/anubis/internal"
"github.com/TecharoHQ/anubis/internal/dnsbl"
"github.com/TecharoHQ/anubis/internal/ogtags"
"github.com/TecharoHQ/anubis/lib/policy"
"github.com/TecharoHQ/anubis/lib/policy/config"
"github.com/TecharoHQ/anubis/wasm"
"github.com/TecharoHQ/anubis/web"
"github.com/TecharoHQ/anubis/xess"
)
var (
@@ -68,125 +57,6 @@ var (
})
)
type Options struct {
Next http.Handler
Policy *policy.ParsedConfig
ServeRobotsTXT bool
PrivateKey ed25519.PrivateKey
CookieDomain string
CookieName string
CookiePartitioned bool
OGPassthrough bool
OGTimeToLive time.Duration
Target string
WebmasterEmail string
}
func LoadPoliciesOrDefault(fname string, defaultDifficulty uint32) (*policy.ParsedConfig, error) {
var fin io.ReadCloser
var err error
if fname != "" {
fin, err = os.Open(fname)
if err != nil {
return nil, fmt.Errorf("can't parse policy file %s: %w", fname, err)
}
} else {
fname = "(data)/botPolicies.json"
fin, err = data.BotPolicies.Open("botPolicies.json")
if err != nil {
return nil, fmt.Errorf("[unexpected] can't parse builtin policy file %s: %w", fname, err)
}
}
defer fin.Close()
anubisPolicy, err := policy.ParseConfig(fin, fname, defaultDifficulty)
return anubisPolicy, err
}
func New(opts Options) (*Server, error) {
if opts.PrivateKey == nil {
slog.Debug("opts.PrivateKey not set, generating a new one")
_, priv, err := ed25519.GenerateKey(rand.Reader)
if err != nil {
return nil, fmt.Errorf("lib: can't generate private key: %v", err)
}
opts.PrivateKey = priv
}
result := &Server{
next: opts.Next,
priv: opts.PrivateKey,
pub: opts.PrivateKey.Public().(ed25519.PublicKey),
policy: opts.Policy,
opts: opts,
DNSBLCache: decaymap.New[string, dnsbl.DroneBLResponse](),
OGTags: ogtags.NewOGTagCache(opts.Target, opts.OGPassthrough, opts.OGTimeToLive),
validators: map[string]Verifier{
"fast": VerifierFunc(BasicSHA256Verify),
"slow": VerifierFunc(BasicSHA256Verify),
},
}
finfos, err := fs.ReadDir(web.Static, "static/wasm")
if err != nil {
return nil, fmt.Errorf("[unexpected] can't read any webassembly files in the static folder: %w", err)
}
for _, finfo := range finfos {
fin, err := web.Static.Open("static/wasm/" + finfo.Name())
if err != nil {
return nil, fmt.Errorf("[unexpected] can't read static/wasm/%s: %w", finfo.Name(), err)
}
defer fin.Close()
name := strings.TrimSuffix(finfo.Name(), filepath.Ext(finfo.Name()))
runner, err := wasm.NewRunner(context.Background(), finfo.Name(), fin)
if err != nil {
return nil, fmt.Errorf("can't load static/wasm/%s: %w", finfo.Name(), err)
}
var concurrentLimit int64 = 4
cv := NewConcurrentVerifier(runner, concurrentLimit)
result.validators[name] = cv
}
mux := http.NewServeMux()
xess.Mount(mux)
mux.Handle(anubis.StaticPath, internal.UnchangingCache(internal.NoBrowsing(http.StripPrefix(anubis.StaticPath, http.FileServerFS(web.Static)))))
if opts.ServeRobotsTXT {
mux.HandleFunc("/robots.txt", func(w http.ResponseWriter, r *http.Request) {
http.ServeFileFS(w, r, web.Static, "static/robots.txt")
})
mux.HandleFunc("/.well-known/robots.txt", func(w http.ResponseWriter, r *http.Request) {
http.ServeFileFS(w, r, web.Static, "static/robots.txt")
})
}
// mux.HandleFunc("GET /.within.website/x/cmd/anubis/static/js/main.mjs", serveMainJSWithBestEncoding)
mux.HandleFunc("POST /.within.website/x/cmd/anubis/api/make-challenge", result.MakeChallenge)
mux.HandleFunc("GET /.within.website/x/cmd/anubis/api/pass-challenge", result.PassChallenge)
mux.HandleFunc("GET /.within.website/x/cmd/anubis/api/test-error", result.TestError)
mux.HandleFunc("/", result.MaybeReverseProxy)
result.mux = mux
return result, nil
}
type Server struct {
mux *http.ServeMux
next http.Handler
@@ -196,15 +66,9 @@ type Server struct {
opts Options
DNSBLCache *decaymap.Impl[string, dnsbl.DroneBLResponse]
OGTags *ogtags.OGTagCache
validators map[string]Verifier
}
func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
s.mux.ServeHTTP(w, r)
}
func (s *Server) challengeFor(r *http.Request, difficulty uint32) string {
func (s *Server) challengeFor(r *http.Request, difficulty int) string {
fp := sha256.Sum256(s.priv.Seed())
challengeData := fmt.Sprintf(
@@ -219,30 +83,111 @@ func (s *Server) challengeFor(r *http.Request, difficulty uint32) string {
return internal.SHA256sum(challengeData)
}
func (s *Server) MaybeReverseProxy(w http.ResponseWriter, r *http.Request) {
lg := slog.With(
"user_agent", r.UserAgent(),
"accept_language", r.Header.Get("Accept-Language"),
"priority", r.Header.Get("Priority"),
"x-forwarded-for",
r.Header.Get("X-Forwarded-For"),
"x-real-ip", r.Header.Get("X-Real-Ip"),
)
func (s *Server) maybeReverseProxyHttpStatusOnly(w http.ResponseWriter, r *http.Request) {
s.maybeReverseProxy(w, r, true)
}
func (s *Server) maybeReverseProxyOrPage(w http.ResponseWriter, r *http.Request) {
s.maybeReverseProxy(w, r, false)
}
func (s *Server) maybeReverseProxy(w http.ResponseWriter, r *http.Request, httpStatusOnly bool) {
lg := internal.GetRequestLogger(r)
cr, rule, err := s.check(r)
if err != nil {
lg.Error("check failed", "err", err)
templ.Handler(web.Base("Oh noes!", web.ErrorPage("Internal Server Error: administrator has misconfigured Anubis. Please contact the administrator and ask them to look for the logs around \"maybeReverseProxy\"", s.opts.WebmasterEmail)), templ.WithStatus(http.StatusInternalServerError)).ServeHTTP(w, r)
s.respondWithError(w, r, "Internal Server Error: administrator has misconfigured Anubis. Please contact the administrator and ask them to look for the logs around \"maybeReverseProxy\"")
return
}
r.Header.Add("X-Anubis-Rule", cr.Name)
r.Header.Add("X-Anubis-Action", string(cr.Rule))
lg = lg.With("check_result", cr)
policy.PolicyApplications.WithLabelValues(cr.Name, string(cr.Rule)).Add(1)
policy.Applications.WithLabelValues(cr.Name, string(cr.Rule)).Add(1)
ip := r.Header.Get("X-Real-Ip")
if s.handleDNSBL(w, r, ip, lg) {
return
}
if s.checkRules(w, r, cr, lg, rule) {
return
}
ckie, err := r.Cookie(anubis.CookieName)
if err != nil {
lg.Debug("cookie not found", "path", r.URL.Path)
s.ClearCookie(w)
s.RenderIndex(w, r, rule, httpStatusOnly)
return
}
if err := ckie.Valid(); err != nil {
lg.Debug("cookie is invalid", "err", err)
s.ClearCookie(w)
s.RenderIndex(w, r, rule, httpStatusOnly)
return
}
if time.Now().After(ckie.Expires) && !ckie.Expires.IsZero() {
lg.Debug("cookie expired", "path", r.URL.Path)
s.ClearCookie(w)
s.RenderIndex(w, r, rule, httpStatusOnly)
return
}
token, err := jwt.ParseWithClaims(ckie.Value, jwt.MapClaims{}, func(token *jwt.Token) (interface{}, error) {
return s.pub, nil
}, jwt.WithExpirationRequired(), jwt.WithStrictDecoding())
if err != nil || !token.Valid {
lg.Debug("invalid token", "path", r.URL.Path, "err", err)
s.ClearCookie(w)
s.RenderIndex(w, r, rule, httpStatusOnly)
return
}
r.Header.Add("X-Anubis-Status", "PASS")
s.ServeHTTPNext(w, r)
}
func (s *Server) checkRules(w http.ResponseWriter, r *http.Request, cr policy.CheckResult, lg *slog.Logger, rule *policy.Bot) bool {
switch cr.Rule {
case config.RuleAllow:
lg.Debug("allowing traffic to origin (explicit)")
s.ServeHTTPNext(w, r)
return true
case config.RuleDeny:
s.ClearCookie(w)
lg.Info("explicit deny")
if rule == nil {
lg.Error("rule is nil, cannot calculate checksum")
s.respondWithError(w, r, "Internal Server Error: Please contact the administrator and ask them to look for the logs around \"maybeReverseProxy.RuleDeny\"")
return true
}
hash := rule.Hash()
lg.Debug("rule hash", "hash", hash)
s.respondWithStatus(w, r, fmt.Sprintf("Access Denied: error code %s", hash), s.policy.StatusCodes.Deny)
return true
case config.RuleChallenge:
lg.Debug("challenge requested")
case config.RuleBenchmark:
lg.Debug("serving benchmark page")
s.RenderBench(w, r)
return true
default:
s.ClearCookie(w)
slog.Error("CONFIG ERROR: unknown rule", "rule", cr.Rule)
s.respondWithError(w, r, "Internal Server Error: administrator has misconfigured Anubis. Please contact the administrator and ask them to look for the logs around \"maybeReverseProxy.Rules\"")
return true
}
return false
}
func (s *Server) handleDNSBL(w http.ResponseWriter, r *http.Request, ip string, lg *slog.Logger) bool {
if s.policy.DNSBL && ip != "" {
resp, ok := s.DNSBLCache.Get(ip)
if !ok {
@@ -257,197 +202,77 @@ func (s *Server) MaybeReverseProxy(w http.ResponseWriter, r *http.Request) {
if resp != dnsbl.AllGood {
lg.Info("DNSBL hit", "status", resp.String())
templ.Handler(web.Base("Oh noes!", web.ErrorPage(fmt.Sprintf("DroneBL reported an entry: %s, see https://dronebl.org/lookup?ip=%s", resp.String(), ip), s.opts.WebmasterEmail)), templ.WithStatus(http.StatusOK)).ServeHTTP(w, r)
return
s.respondWithStatus(w, r, fmt.Sprintf("DroneBL reported an entry: %s, see https://dronebl.org/lookup?ip=%s", resp.String(), ip), s.policy.StatusCodes.Deny)
return true
}
}
switch cr.Rule {
case config.RuleAllow:
lg.Debug("allowing traffic to origin (explicit)")
s.next.ServeHTTP(w, r)
return
case config.RuleDeny:
s.ClearCookie(w)
lg.Info("explicit deny")
if rule == nil {
lg.Error("rule is nil, cannot calculate checksum")
templ.Handler(web.Base("Oh noes!", web.ErrorPage("Other internal server error (contact the admin)", s.opts.WebmasterEmail)), templ.WithStatus(http.StatusInternalServerError)).ServeHTTP(w, r)
return
}
hash, err := rule.Hash()
if err != nil {
lg.Error("can't calculate checksum of rule", "err", err)
templ.Handler(web.Base("Oh noes!", web.ErrorPage("Other internal server error (contact the admin)", s.opts.WebmasterEmail)), templ.WithStatus(http.StatusInternalServerError)).ServeHTTP(w, r)
return
}
lg.Debug("rule hash", "hash", hash)
templ.Handler(web.Base("Oh noes!", web.ErrorPage(fmt.Sprintf("Access Denied: error code %s", hash), s.opts.WebmasterEmail)), templ.WithStatus(http.StatusOK)).ServeHTTP(w, r)
return
case config.RuleChallenge:
lg.Debug("challenge requested")
case config.RuleBenchmark:
lg.Debug("serving benchmark page")
s.RenderBench(w, r)
return
default:
s.ClearCookie(w)
templ.Handler(web.Base("Oh noes!", web.ErrorPage("Other internal server error (contact the admin)", s.opts.WebmasterEmail)), templ.WithStatus(http.StatusInternalServerError)).ServeHTTP(w, r)
return
}
ckie, err := r.Cookie(anubis.CookieName)
if err != nil {
lg.Debug("cookie not found", "path", r.URL.Path)
s.ClearCookie(w)
s.RenderIndex(w, r)
return
}
if err := ckie.Valid(); err != nil {
lg.Debug("cookie is invalid", "err", err)
s.ClearCookie(w)
s.RenderIndex(w, r)
return
}
if time.Now().After(ckie.Expires) && !ckie.Expires.IsZero() {
lg.Debug("cookie expired", "path", r.URL.Path)
s.ClearCookie(w)
s.RenderIndex(w, r)
return
}
token, err := jwt.ParseWithClaims(ckie.Value, jwt.MapClaims{}, func(token *jwt.Token) (interface{}, error) {
return s.pub, nil
}, jwt.WithExpirationRequired(), jwt.WithStrictDecoding())
if err != nil || !token.Valid {
lg.Debug("invalid token", "path", r.URL.Path, "err", err)
s.ClearCookie(w)
s.RenderIndex(w, r)
return
}
if randomJitter() {
r.Header.Add("X-Anubis-Status", "PASS-BRIEF")
lg.Debug("cookie is not enrolled into secondary screening")
s.next.ServeHTTP(w, r)
return
}
claims, ok := token.Claims.(jwt.MapClaims)
if !ok {
lg.Debug("invalid token claims type", "path", r.URL.Path)
s.ClearCookie(w)
s.RenderIndex(w, r)
return
}
challenge := s.challengeFor(r, rule.Challenge.Difficulty)
if claims["challenge"] != challenge {
lg.Debug("invalid challenge", "path", r.URL.Path)
s.ClearCookie(w)
s.RenderIndex(w, r)
return
}
var nonce int
if v, ok := claims["nonce"].(float64); ok {
nonce = int(v)
}
calcString := fmt.Sprintf("%s%d", challenge, nonce)
calculated := internal.SHA256sum(calcString)
if subtle.ConstantTimeCompare([]byte(claims["response"].(string)), []byte(calculated)) != 1 {
lg.Debug("invalid response", "path", r.URL.Path)
failedValidations.Inc()
s.ClearCookie(w)
s.RenderIndex(w, r)
return
}
slog.Debug("all checks passed")
r.Header.Add("X-Anubis-Status", "PASS-FULL")
s.next.ServeHTTP(w, r)
}
func (s *Server) RenderIndex(w http.ResponseWriter, r *http.Request) {
var ogTags map[string]string = nil
if s.opts.OGPassthrough {
var err error
ogTags, err = s.OGTags.GetOGTags(r.URL)
if err != nil {
slog.Error("failed to get OG tags", "err", err)
ogTags = nil
}
}
handler := internal.NoStoreCache(
templ.Handler(
web.BaseWithOGTags("Making sure you're not a bot!", web.Index(), ogTags),
),
)
handler.ServeHTTP(w, r)
}
func (s *Server) RenderBench(w http.ResponseWriter, r *http.Request) {
templ.Handler(
web.Base("Benchmarking Anubis!", web.Bench()),
).ServeHTTP(w, r)
return false
}
func (s *Server) MakeChallenge(w http.ResponseWriter, r *http.Request) {
lg := slog.With("user_agent", r.UserAgent(), "accept_language", r.Header.Get("Accept-Language"), "priority", r.Header.Get("Priority"), "x-forwarded-for", r.Header.Get("X-Forwarded-For"), "x-real-ip", r.Header.Get("X-Real-Ip"))
lg := internal.GetRequestLogger(r)
encoder := json.NewEncoder(w)
cr, rule, err := s.check(r)
if err != nil {
lg.Error("check failed", "err", err)
w.WriteHeader(http.StatusInternalServerError)
json.NewEncoder(w).Encode(struct {
err := encoder.Encode(struct {
Error string `json:"error"`
}{
Error: "Internal Server Error: administrator has misconfigured Anubis. Please contact the administrator and ask them to look for the logs around \"makeChallenge\"",
})
if err != nil {
lg.Error("failed to encode error response", "err", err)
w.WriteHeader(http.StatusInternalServerError)
}
return
}
lg = lg.With("check_result", cr)
challenge := s.challengeFor(r, rule.Challenge.Difficulty)
json.NewEncoder(w).Encode(struct {
err = encoder.Encode(struct {
Challenge string `json:"challenge"`
Rules *config.ChallengeRules `json:"rules"`
}{
Challenge: challenge,
Rules: rule.Challenge,
})
if err != nil {
lg.Error("failed to encode challenge", "err", err)
w.WriteHeader(http.StatusInternalServerError)
return
}
lg.Debug("made challenge", "challenge", challenge, "rules", rule.Challenge, "cr", cr)
challengesIssued.Inc()
}
func (s *Server) PassChallenge(w http.ResponseWriter, r *http.Request) {
lg := slog.With(
"user_agent", r.UserAgent(),
"accept_language", r.Header.Get("Accept-Language"),
"priority", r.Header.Get("Priority"),
"x-forwarded-for", r.Header.Get("X-Forwarded-For"),
"x-real-ip", r.Header.Get("X-Real-Ip"),
)
lg := internal.GetRequestLogger(r)
redir := r.FormValue("redir")
redirURL, err := url.ParseRequestURI(redir)
if err != nil {
lg.Error("invalid redirect", "err", err)
s.respondWithError(w, r, "Invalid redirect")
return
}
// used by the path checker rule
r.URL = redirURL
cr, rule, err := s.check(r)
if err != nil {
lg.Error("check failed", "err", err)
templ.Handler(web.Base("Oh noes!", web.ErrorPage("Internal Server Error: administrator has misconfigured Anubis. Please contact the administrator and ask them to look for the logs around \"passChallenge\".", s.opts.WebmasterEmail)), templ.WithStatus(http.StatusInternalServerError)).ServeHTTP(w, r)
s.respondWithError(w, r, "Internal Server Error: administrator has misconfigured Anubis. Please contact the administrator and ask them to look for the logs around \"passChallenge\".\"")
return
}
lg = lg.With("check_result", cr, "algorithm", rule.Challenge.Algorithm)
lg = lg.With("check_result", cr)
nonceStr := r.FormValue("nonce")
if nonceStr == "" {
s.ClearCookie(w)
lg.Debug("no nonce")
templ.Handler(web.Base("Oh noes!", web.ErrorPage("missing nonce", s.opts.WebmasterEmail)), templ.WithStatus(http.StatusInternalServerError)).ServeHTTP(w, r)
s.respondWithError(w, r, "missing nonce")
return
}
@@ -455,7 +280,7 @@ func (s *Server) PassChallenge(w http.ResponseWriter, r *http.Request) {
if elapsedTimeStr == "" {
s.ClearCookie(w)
lg.Debug("no elapsedTime")
templ.Handler(web.Base("Oh noes!", web.ErrorPage("missing elapsedTime", s.opts.WebmasterEmail)), templ.WithStatus(http.StatusInternalServerError)).ServeHTTP(w, r)
s.respondWithError(w, r, "missing elapsedTime")
return
}
@@ -463,7 +288,7 @@ func (s *Server) PassChallenge(w http.ResponseWriter, r *http.Request) {
if err != nil {
s.ClearCookie(w)
lg.Debug("elapsedTime doesn't parse", "err", err)
templ.Handler(web.Base("Oh noes!", web.ErrorPage("invalid elapsedTime", s.opts.WebmasterEmail)), templ.WithStatus(http.StatusInternalServerError)).ServeHTTP(w, r)
s.respondWithError(w, r, "invalid elapsedTime")
return
}
@@ -471,57 +296,51 @@ func (s *Server) PassChallenge(w http.ResponseWriter, r *http.Request) {
timeTaken.Observe(elapsedTime)
response := r.FormValue("response")
redir := r.FormValue("redir")
responseBytes, err := hex.DecodeString(response)
urlParsed, err := r.URL.Parse(redir)
if err != nil {
s.ClearCookie(w)
lg.Debug("response doesn't parse", "err", err)
templ.Handler(web.Base("Oh noes!", web.ErrorPage("invalid response format", s.opts.WebmasterEmail)), templ.WithStatus(http.StatusInternalServerError)).ServeHTTP(w, r)
s.respondWithError(w, r, "Redirect URL not parseable")
return
}
if (len(urlParsed.Host) > 0 && len(s.opts.RedirectDomains) != 0 && !slices.Contains(s.opts.RedirectDomains, urlParsed.Host)) || urlParsed.Host != r.URL.Host {
s.respondWithError(w, r, "Redirect domain not allowed")
return
}
challenge := s.challengeFor(r, rule.Challenge.Difficulty)
challengeBytes, err := hex.DecodeString(challenge)
if err != nil {
s.ClearCookie(w)
lg.Debug("challenge doesn't parse", "err", err)
templ.Handler(web.Base("Oh noes!", web.ErrorPage("invalid internal challenge format", s.opts.WebmasterEmail)), templ.WithStatus(http.StatusInternalServerError)).ServeHTTP(w, r)
return
}
nonceRaw, err := strconv.ParseUint(nonceStr, 10, 32)
nonce, err := strconv.Atoi(nonceStr)
if err != nil {
s.ClearCookie(w)
lg.Debug("nonce doesn't parse", "err", err)
templ.Handler(web.Base("Oh noes!", web.ErrorPage("invalid nonce", s.opts.WebmasterEmail)), templ.WithStatus(http.StatusInternalServerError)).ServeHTTP(w, r)
s.respondWithError(w, r, "invalid nonce")
return
}
nonce := uint32(nonceRaw)
validator, ok := s.validators[string(rule.Challenge.Algorithm)]
if !ok {
calcString := fmt.Sprintf("%s%d", challenge, nonce)
calculated := internal.SHA256sum(calcString)
if subtle.ConstantTimeCompare([]byte(response), []byte(calculated)) != 1 {
s.ClearCookie(w)
lg.Debug("no validator found for algorithm", "algorithm", rule.Challenge.Algorithm)
templ.Handler(web.Base("Oh noes!", web.ErrorPage(fmt.Sprintf("Internal anubis error has been detected and you cannot proceed. Tried to look up a validator for algorithm %s but wasn't able to find one. Please contact the administrator of this instance of anubis", rule.Challenge.Algorithm), s.opts.WebmasterEmail)), templ.WithStatus(http.StatusInternalServerError)).ServeHTTP(w, r)
lg.Debug("hash does not match", "got", response, "want", calculated)
s.respondWithStatus(w, r, "invalid response", http.StatusForbidden)
failedValidations.Inc()
return
}
ok, err = validator.Verify(r.Context(), challengeBytes, responseBytes, nonce, rule.Challenge.Difficulty)
if err != nil {
// compare the leading zeroes
if !strings.HasPrefix(response, strings.Repeat("0", rule.Challenge.Difficulty)) {
s.ClearCookie(w)
lg.Debug("verification error", "err", err)
templ.Handler(web.Base("Oh noes!", web.ErrorPage("Your challenge failed validation. Please go back and try your challenge again", s.opts.WebmasterEmail)), templ.WithStatus(http.StatusBadRequest)).ServeHTTP(w, r)
lg.Debug("difficulty check failed", "response", response, "difficulty", rule.Challenge.Difficulty)
s.respondWithStatus(w, r, "invalid response", http.StatusForbidden)
failedValidations.Inc()
return
}
if !ok {
s.ClearCookie(w)
lg.Debug("response invalid")
templ.Handler(web.Base("Oh noes!", web.ErrorPage("Your challenge failed validation. Please go back and try your challenge again", s.opts.WebmasterEmail)), templ.WithStatus(http.StatusBadRequest)).ServeHTTP(w, r)
return
// Adjust cookie path if base prefix is not empty
cookiePath := "/"
if anubis.BasePrefix != "" {
cookiePath = strings.TrimSuffix(anubis.BasePrefix, "/") + "/"
}
// generate JWT cookie
token := jwt.NewWithClaims(jwt.SigningMethodEdDSA, jwt.MapClaims{
"challenge": challenge,
@@ -529,24 +348,24 @@ func (s *Server) PassChallenge(w http.ResponseWriter, r *http.Request) {
"response": response,
"iat": time.Now().Unix(),
"nbf": time.Now().Add(-1 * time.Minute).Unix(),
"exp": time.Now().Add(24 * 7 * time.Hour).Unix(),
"exp": time.Now().Add(s.opts.CookieExpiration).Unix(),
})
tokenString, err := token.SignedString(s.priv)
if err != nil {
lg.Error("failed to sign JWT", "err", err)
s.ClearCookie(w)
templ.Handler(web.Base("Oh noes!", web.ErrorPage("failed to sign JWT", s.opts.WebmasterEmail)), templ.WithStatus(http.StatusInternalServerError)).ServeHTTP(w, r)
s.respondWithError(w, r, "failed to sign JWT")
return
}
http.SetCookie(w, &http.Cookie{
Name: anubis.CookieName,
Value: tokenString,
Expires: time.Now().Add(24 * 7 * time.Hour),
Expires: time.Now().Add(s.opts.CookieExpiration),
SameSite: http.SameSiteLaxMode,
Domain: s.opts.CookieDomain,
Partitioned: s.opts.CookiePartitioned,
Path: "/",
Path: cookiePath,
})
challengesValidated.Inc()
@@ -556,38 +375,36 @@ func (s *Server) PassChallenge(w http.ResponseWriter, r *http.Request) {
func (s *Server) TestError(w http.ResponseWriter, r *http.Request) {
err := r.FormValue("err")
templ.Handler(web.Base("Oh noes!", web.ErrorPage(err, s.opts.WebmasterEmail)), templ.WithStatus(http.StatusInternalServerError)).ServeHTTP(w, r)
s.respondWithError(w, r, err)
}
func cr(name string, rule config.Rule) policy.CheckResult {
return policy.CheckResult{
Name: name,
Rule: rule,
}
}
// Check evaluates the list of rules, and returns the result
func (s *Server) check(r *http.Request) (CheckResult, *policy.Bot, error) {
func (s *Server) check(r *http.Request) (policy.CheckResult, *policy.Bot, error) {
host := r.Header.Get("X-Real-Ip")
if host == "" {
return decaymap.Zilch[CheckResult](), nil, fmt.Errorf("[misconfiguration] X-Real-Ip header is not set")
return decaymap.Zilch[policy.CheckResult](), nil, fmt.Errorf("[misconfiguration] X-Real-Ip header is not set")
}
addr := net.ParseIP(host)
if addr == nil {
return decaymap.Zilch[CheckResult](), nil, fmt.Errorf("[misconfiguration] %q is not an IP address", host)
return decaymap.Zilch[policy.CheckResult](), nil, fmt.Errorf("[misconfiguration] %q is not an IP address", host)
}
for _, b := range s.policy.Bots {
if b.UserAgent != nil {
if b.UserAgent.MatchString(r.UserAgent()) && s.checkRemoteAddress(b, addr) {
return cr("bot/"+b.Name, b.Action), &b, nil
}
match, err := b.Rules.Check(r)
if err != nil {
return decaymap.Zilch[policy.CheckResult](), nil, fmt.Errorf("can't run check %s: %w", b.Name, err)
}
if b.Path != nil {
if b.Path.MatchString(r.URL.Path) && s.checkRemoteAddress(b, addr) {
return cr("bot/"+b.Name, b.Action), &b, nil
}
}
if b.Ranger != nil {
if s.checkRemoteAddress(b, addr) {
return cr("bot/"+b.Name, b.Action), &b, nil
}
if match {
return cr("bot/"+b.Name, b.Action), &b, nil
}
}
@@ -600,19 +417,6 @@ func (s *Server) check(r *http.Request) (CheckResult, *policy.Bot, error) {
}, nil
}
func (s *Server) checkRemoteAddress(b policy.Bot, addr net.IP) bool {
if b.Ranger == nil {
return true
}
ok, err := b.Ranger.Contains(addr)
if err != nil {
log.Panicf("[unexpected] something very funky is going on, %q does not have a calculable network number: %v", addr.String(), err)
}
return ok
}
func (s *Server) CleanupDecayMap() {
s.DNSBLCache.Cleanup()
s.OGTags.Cleanup()


@@ -5,9 +5,13 @@ import (
"fmt"
"net/http"
"net/http/httptest"
"os"
"strings"
"testing"
"time"
"github.com/TecharoHQ/anubis"
"github.com/TecharoHQ/anubis/data"
"github.com/TecharoHQ/anubis/internal"
"github.com/TecharoHQ/anubis/lib/policy"
)
@@ -15,12 +19,12 @@ import (
func loadPolicies(t *testing.T, fname string) *policy.ParsedConfig {
t.Helper()
policy, err := LoadPoliciesOrDefault("", anubis.DefaultDifficulty)
anubisPolicy, err := LoadPoliciesOrDefault(fname, anubis.DefaultDifficulty)
if err != nil {
t.Fatal(err)
}
return policy
return anubisPolicy
}
func spawnAnubis(t *testing.T, opts Options) *Server {
@@ -55,6 +59,22 @@ func makeChallenge(t *testing.T, ts *httptest.Server) challenge {
return chall
}
func TestLoadPolicies(t *testing.T) {
for _, fname := range []string{"botPolicies.json", "botPolicies.yaml"} {
t.Run(fname, func(t *testing.T) {
fin, err := data.BotPolicies.Open(fname)
if err != nil {
t.Fatal(err)
}
defer fin.Close()
if _, err := policy.ParseConfig(fin, fname, 4); err != nil {
t.Fatal(err)
}
})
}
}
// Regression test for CVE-2025-24369
func TestCVE2025_24369(t *testing.T) {
pol := loadPolicies(t, "")
@@ -107,17 +127,18 @@ func TestCVE2025_24369(t *testing.T) {
}
}
func TestCookieSettings(t *testing.T) {
func TestCookieCustomExpiration(t *testing.T) {
pol := loadPolicies(t, "")
pol.DefaultDifficulty = 0
ckieExpiration := 10 * time.Minute
srv := spawnAnubis(t, Options{
Next: http.NewServeMux(),
Policy: pol,
CookieDomain: "local.cetacean.club",
CookiePartitioned: true,
CookieName: t.Name(),
CookieDomain: "local.cetacean.club",
CookieName: t.Name(),
CookieExpiration: ckieExpiration,
})
ts := httptest.NewServer(internal.RemoteXRealIP(true, "tcp", srv))
@@ -161,12 +182,105 @@ func TestCookieSettings(t *testing.T) {
q.Set("elapsedTime", fmt.Sprint(elapsedTime))
req.URL.RawQuery = q.Encode()
requestRecieveLowerBound := time.Now()
resp, err = cli.Do(req)
requestRecieveUpperBound := time.Now()
if err != nil {
t.Fatalf("can't do challenge passing")
}
if resp.StatusCode != http.StatusFound {
resp.Write(os.Stderr)
t.Errorf("wanted %d, got: %d", http.StatusFound, resp.StatusCode)
}
var ckie *http.Cookie
for _, cookie := range resp.Cookies() {
t.Logf("%#v", cookie)
if cookie.Name == anubis.CookieName {
ckie = cookie
break
}
}
if ckie == nil {
t.Errorf("Cookie %q not found", anubis.CookieName)
return
}
expirationLowerBound := requestRecieveLowerBound.Add(ckieExpiration)
expirationUpperBound := requestRecieveUpperBound.Add(ckieExpiration)
// Since the cookie expiration precision is only to the second due to the Unix() call, we can
// lower the level of expected precision.
if ckie.Expires.Unix() < expirationLowerBound.Unix() || ckie.Expires.Unix() > expirationUpperBound.Unix() {
t.Errorf("cookie expiration is not within the expected range. expected between: %v and %v. got: %v", expirationLowerBound, expirationUpperBound, ckie.Expires)
return
}
}
func TestCookieSettings(t *testing.T) {
pol := loadPolicies(t, "")
pol.DefaultDifficulty = 0
srv := spawnAnubis(t, Options{
Next: http.NewServeMux(),
Policy: pol,
CookieDomain: "local.cetacean.club",
CookiePartitioned: true,
CookieName: t.Name(),
CookieExpiration: anubis.CookieDefaultExpirationTime,
})
ts := httptest.NewServer(internal.RemoteXRealIP(true, "tcp", srv))
defer ts.Close()
cli := &http.Client{
CheckRedirect: func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
},
}
resp, err := cli.Post(ts.URL+"/.within.website/x/cmd/anubis/api/make-challenge", "", nil)
if err != nil {
t.Fatalf("can't request challenge: %v", err)
}
defer resp.Body.Close()
var chall = struct {
Challenge string `json:"challenge"`
}{}
if err := json.NewDecoder(resp.Body).Decode(&chall); err != nil {
t.Fatalf("can't read challenge response body: %v", err)
}
nonce := 0
elapsedTime := 420
redir := "/"
calculated := ""
calcString := fmt.Sprintf("%s%d", chall.Challenge, nonce)
calculated = internal.SHA256sum(calcString)
req, err := http.NewRequest(http.MethodGet, ts.URL+"/.within.website/x/cmd/anubis/api/pass-challenge", nil)
if err != nil {
t.Fatalf("can't make request: %v", err)
}
q := req.URL.Query()
q.Set("response", calculated)
q.Set("nonce", fmt.Sprint(nonce))
q.Set("redir", redir)
q.Set("elapsedTime", fmt.Sprint(elapsedTime))
req.URL.RawQuery = q.Encode()
requestRecieveLowerBound := time.Now()
resp, err = cli.Do(req)
requestRecieveUpperBound := time.Now()
if err != nil {
t.Fatalf("can't do challenge passing")
}
if resp.StatusCode != http.StatusFound {
resp.Write(os.Stderr)
t.Errorf("wanted %d, got: %d", http.StatusFound, resp.StatusCode)
}
@@ -187,6 +301,15 @@ func TestCookieSettings(t *testing.T) {
t.Errorf("cookie domain is wrong, wanted local.cetacean.club, got: %s", ckie.Domain)
}
expirationLowerBound := requestRecieveLowerBound.Add(anubis.CookieDefaultExpirationTime)
expirationUpperBound := requestRecieveUpperBound.Add(anubis.CookieDefaultExpirationTime)
// Since the cookie expiration precision is only to the second due to the Unix() call, we can
// lower the level of expected precision.
if ckie.Expires.Unix() < expirationLowerBound.Unix() || ckie.Expires.Unix() > expirationUpperBound.Unix() {
t.Errorf("cookie expiration is not within the expected range. expected between: %v and %v. got: %v", expirationLowerBound, expirationUpperBound, ckie.Expires)
return
}
if ckie.Partitioned != srv.opts.CookiePartitioned {
t.Errorf("wanted partitioned flag %v, got: %v", srv.opts.CookiePartitioned, ckie.Partitioned)
}
@@ -197,7 +320,7 @@ func TestCheckDefaultDifficultyMatchesPolicy(t *testing.T) {
fmt.Fprintln(w, "OK")
})
for i := uint32(1); i < 10; i++ {
for i := 1; i < 10; i++ {
t.Run(fmt.Sprint(i), func(t *testing.T) {
anubisPolicy, err := LoadPoliciesOrDefault("", i)
if err != nil {
@@ -235,3 +358,186 @@ func TestCheckDefaultDifficultyMatchesPolicy(t *testing.T) {
})
}
}
func TestBasePrefix(t *testing.T) {
h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
fmt.Fprintln(w, "OK")
})
testCases := []struct {
name string
basePrefix string
path string
expected string
}{
{
name: "no prefix",
basePrefix: "",
path: "/.within.website/x/cmd/anubis/api/make-challenge",
expected: "/.within.website/x/cmd/anubis/api/make-challenge",
},
{
name: "with prefix",
basePrefix: "/myapp",
path: "/myapp/.within.website/x/cmd/anubis/api/make-challenge",
expected: "/myapp/.within.website/x/cmd/anubis/api/make-challenge",
},
{
name: "with prefix and trailing slash",
basePrefix: "/myapp/",
path: "/myapp/.within.website/x/cmd/anubis/api/make-challenge",
expected: "/myapp/.within.website/x/cmd/anubis/api/make-challenge",
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Reset the global BasePrefix before each test
anubis.BasePrefix = ""
pol := loadPolicies(t, "")
pol.DefaultDifficulty = 4
srv := spawnAnubis(t, Options{
Next: h,
Policy: pol,
BasePrefix: tc.basePrefix,
})
ts := httptest.NewServer(internal.RemoteXRealIP(true, "tcp", srv))
defer ts.Close()
// Test API endpoint with prefix
resp, err := ts.Client().Post(ts.URL+tc.path, "", nil)
if err != nil {
t.Fatalf("can't request challenge: %v", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
t.Errorf("expected status code %d, got: %d", http.StatusOK, resp.StatusCode)
}
var chall challenge
if err := json.NewDecoder(resp.Body).Decode(&chall); err != nil {
t.Fatalf("can't read challenge response body: %v", err)
}
if chall.Challenge == "" {
t.Errorf("expected non-empty challenge")
}
// Test cookie path when passing challenge
// Find a nonce that produces a hash with the required number of leading zeros
nonce := 0
var calculated string
for {
calcString := fmt.Sprintf("%s%d", chall.Challenge, nonce)
calculated = internal.SHA256sum(calcString)
if strings.HasPrefix(calculated, strings.Repeat("0", pol.DefaultDifficulty)) {
break
}
nonce++
}
elapsedTime := 420
redir := "/"
cli := ts.Client()
cli.CheckRedirect = func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
}
// Construct the correct path for pass-challenge
passChallengePath := tc.path
passChallengePath = passChallengePath[:strings.LastIndex(passChallengePath, "/")+1] + "pass-challenge"
req, err := http.NewRequest(http.MethodGet, ts.URL+passChallengePath, nil)
if err != nil {
t.Fatalf("can't make request: %v", err)
}
q := req.URL.Query()
q.Set("response", calculated)
q.Set("nonce", fmt.Sprint(nonce))
q.Set("redir", redir)
q.Set("elapsedTime", fmt.Sprint(elapsedTime))
req.URL.RawQuery = q.Encode()
resp, err = cli.Do(req)
if err != nil {
t.Fatalf("can't do challenge passing: %v", err)
}
if resp.StatusCode != http.StatusFound {
t.Errorf("wanted %d, got: %d", http.StatusFound, resp.StatusCode)
}
// Check cookie path
var ckie *http.Cookie
for _, cookie := range resp.Cookies() {
if cookie.Name == anubis.CookieName {
ckie = cookie
break
}
}
if ckie == nil {
t.Errorf("Cookie %q not found", anubis.CookieName)
return
}
expectedPath := "/"
if tc.basePrefix != "" {
expectedPath = strings.TrimSuffix(tc.basePrefix, "/") + "/"
}
if ckie.Path != expectedPath {
t.Errorf("cookie path is wrong, wanted %s, got: %s", expectedPath, ckie.Path)
}
})
}
}
func TestCustomStatusCodes(t *testing.T) {
h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
t.Log(r.UserAgent())
w.WriteHeader(http.StatusOK)
fmt.Fprintln(w, "OK")
})
statusMap := map[string]int{
"ALLOW": 200,
"CHALLENGE": 401,
"DENY": 403,
}
pol := loadPolicies(t, "./testdata/aggressive_403.yaml")
pol.DefaultDifficulty = 4
srv := spawnAnubis(t, Options{
Next: h,
Policy: pol,
})
ts := httptest.NewServer(internal.RemoteXRealIP(true, "tcp", srv))
defer ts.Close()
for userAgent, statusCode := range statusMap {
t.Run(userAgent, func(t *testing.T) {
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, ts.URL, nil)
if err != nil {
t.Fatal(err)
}
req.Header.Set("User-Agent", userAgent)
resp, err := ts.Client().Do(req)
if err != nil {
t.Fatal(err)
}
if resp.StatusCode != statusCode {
t.Errorf("wanted status code %d but got: %d", statusCode, resp.StatusCode)
}
})
}
}

lib/config.go Normal file (140 lines added)

@@ -0,0 +1,140 @@
package lib
import (
"crypto/ed25519"
"crypto/rand"
"fmt"
"io"
"log/slog"
"net/http"
"os"
"strings"
"time"
"github.com/TecharoHQ/anubis"
"github.com/TecharoHQ/anubis/data"
"github.com/TecharoHQ/anubis/decaymap"
"github.com/TecharoHQ/anubis/internal"
"github.com/TecharoHQ/anubis/internal/dnsbl"
"github.com/TecharoHQ/anubis/internal/ogtags"
"github.com/TecharoHQ/anubis/lib/policy"
"github.com/TecharoHQ/anubis/web"
"github.com/TecharoHQ/anubis/xess"
)
type Options struct {
Next http.Handler
Policy *policy.ParsedConfig
RedirectDomains []string
ServeRobotsTXT bool
PrivateKey ed25519.PrivateKey
CookieExpiration time.Duration
CookieDomain string
CookieName string
CookiePartitioned bool
OGPassthrough bool
OGTimeToLive time.Duration
OGCacheConsidersHost bool
Target string
WebmasterEmail string
BasePrefix string
}
func LoadPoliciesOrDefault(fname string, defaultDifficulty int) (*policy.ParsedConfig, error) {
var fin io.ReadCloser
var err error
if fname != "" {
fin, err = os.Open(fname)
if err != nil {
return nil, fmt.Errorf("can't parse policy file %s: %w", fname, err)
}
} else {
fname = "(data)/botPolicies.yaml"
fin, err = data.BotPolicies.Open("botPolicies.yaml")
if err != nil {
return nil, fmt.Errorf("[unexpected] can't parse builtin policy file %s: %w", fname, err)
}
}
defer func(fin io.ReadCloser) {
err := fin.Close()
if err != nil {
slog.Error("failed to close policy file", "file", fname, "err", err)
}
}(fin)
anubisPolicy, err := policy.ParseConfig(fin, fname, defaultDifficulty)
return anubisPolicy, err
}
func New(opts Options) (*Server, error) {
if opts.PrivateKey == nil {
slog.Debug("opts.PrivateKey not set, generating a new one")
_, priv, err := ed25519.GenerateKey(rand.Reader)
if err != nil {
return nil, fmt.Errorf("lib: can't generate private key: %v", err)
}
opts.PrivateKey = priv
}
anubis.BasePrefix = opts.BasePrefix
result := &Server{
next: opts.Next,
priv: opts.PrivateKey,
pub: opts.PrivateKey.Public().(ed25519.PublicKey),
policy: opts.Policy,
opts: opts,
DNSBLCache: decaymap.New[string, dnsbl.DroneBLResponse](),
OGTags: ogtags.NewOGTagCache(opts.Target, opts.OGPassthrough, opts.OGTimeToLive, opts.OGCacheConsidersHost),
}
mux := http.NewServeMux()
xess.Mount(mux)
// Helper to add global prefix
registerWithPrefix := func(pattern string, handler http.Handler, method string) {
if method != "" {
method = method + " " // methods must end with a space to register with them
}
// Ensure there's no double slash when concatenating BasePrefix and pattern
basePrefix := strings.TrimSuffix(anubis.BasePrefix, "/")
prefix := method + basePrefix
// If pattern doesn't start with a slash, add one
if !strings.HasPrefix(pattern, "/") {
pattern = "/" + pattern
}
mux.Handle(prefix+pattern, handler)
}
// Ensure there's no double slash when concatenating BasePrefix and StaticPath
stripPrefix := strings.TrimSuffix(anubis.BasePrefix, "/") + anubis.StaticPath
registerWithPrefix(anubis.StaticPath, internal.UnchangingCache(internal.NoBrowsing(http.StripPrefix(stripPrefix, http.FileServerFS(web.Static)))), "")
if opts.ServeRobotsTXT {
registerWithPrefix("/robots.txt", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
http.ServeFileFS(w, r, web.Static, "static/robots.txt")
}), "GET")
registerWithPrefix("/.well-known/robots.txt", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
http.ServeFileFS(w, r, web.Static, "static/robots.txt")
}), "GET")
}
registerWithPrefix(anubis.APIPrefix+"make-challenge", http.HandlerFunc(result.MakeChallenge), "POST")
registerWithPrefix(anubis.APIPrefix+"pass-challenge", http.HandlerFunc(result.PassChallenge), "GET")
registerWithPrefix(anubis.APIPrefix+"check", http.HandlerFunc(result.maybeReverseProxyHttpStatusOnly), "")
registerWithPrefix(anubis.APIPrefix+"test-error", http.HandlerFunc(result.TestError), "GET")
registerWithPrefix("/", http.HandlerFunc(result.maybeReverseProxyOrPage), "")
result.mux = mux
return result, nil
}
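// Hedged usage sketch (the caller and literal values are illustrative, not
// taken from this diff): wiring the new Options fields together.
func newExampleServer() (*Server, error) {
    pol, err := LoadPoliciesOrDefault("", anubis.DefaultDifficulty)
    if err != nil {
        return nil, err
    }
    return New(Options{
        Next:             http.NotFoundHandler(), // upstream handler; leaving it nil enables standalone redirects
        Policy:           pol,
        ServeRobotsTXT:   true,
        CookieExpiration: 7 * 24 * time.Hour, // the old hard-coded JWT lifetime
        BasePrefix:       "/myapp",           // every route is registered under /myapp
        OGPassthrough:    true,
        OGTimeToLive:     time.Minute,
    })
}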


@@ -2,8 +2,14 @@ package lib
import (
"net/http"
"slices"
"time"
"github.com/TecharoHQ/anubis/internal"
"github.com/TecharoHQ/anubis/lib/policy"
"github.com/TecharoHQ/anubis/web"
"github.com/a-h/templ"
"github.com/TecharoHQ/anubis"
)
@@ -33,3 +39,82 @@ func (t UnixRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
req.URL.Scheme = "http" // make http.Transport happy and avoid an infinite recursion
return t.Transport.RoundTrip(req)
}
func (s *Server) RenderIndex(w http.ResponseWriter, r *http.Request, rule *policy.Bot, returnHTTPStatusOnly bool) {
if returnHTTPStatusOnly {
w.WriteHeader(http.StatusUnauthorized)
w.Write([]byte("Authorization required"))
return
}
lg := internal.GetRequestLogger(r)
challenge := s.challengeFor(r, rule.Challenge.Difficulty)
var ogTags map[string]string = nil
if s.opts.OGPassthrough {
var err error
ogTags, err = s.OGTags.GetOGTags(r.URL, r.Host)
if err != nil {
lg.Error("failed to get OG tags", "err", err)
}
}
component, err := web.BaseWithChallengeAndOGTags("Making sure you're not a bot!", web.Index(), challenge, rule.Challenge, ogTags)
if err != nil {
lg.Error("render failed, please open an issue", "err", err) // This is likely a bug in the template. Should never be triggered as CI tests for this.
s.respondWithError(w, r, "Internal Server Error: please contact the administrator and ask them to look for the logs around \"RenderIndex\"")
return
}
handler := internal.NoStoreCache(templ.Handler(
component,
templ.WithStatus(s.opts.Policy.StatusCodes.Challenge),
))
handler.ServeHTTP(w, r)
}
func (s *Server) RenderBench(w http.ResponseWriter, r *http.Request) {
templ.Handler(
web.Base("Benchmarking Anubis!", web.Bench()),
).ServeHTTP(w, r)
}
func (s *Server) respondWithError(w http.ResponseWriter, r *http.Request, message string) {
s.respondWithStatus(w, r, message, http.StatusInternalServerError)
}
func (s *Server) respondWithStatus(w http.ResponseWriter, r *http.Request, msg string, status int) {
templ.Handler(web.Base("Oh noes!", web.ErrorPage(msg, s.opts.WebmasterEmail)), templ.WithStatus(status)).ServeHTTP(w, r)
}
func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
s.mux.ServeHTTP(w, r)
}
func (s *Server) ServeHTTPNext(w http.ResponseWriter, r *http.Request) {
if s.next == nil {
redir := r.FormValue("redir")
urlParsed, err := r.URL.Parse(redir)
if err != nil {
s.respondWithStatus(w, r, "Redirect URL not parseable", http.StatusBadRequest)
return
}
if (len(urlParsed.Host) > 0 && len(s.opts.RedirectDomains) != 0 && !slices.Contains(s.opts.RedirectDomains, urlParsed.Host)) || urlParsed.Host != r.URL.Host {
s.respondWithStatus(w, r, "Redirect domain not allowed", http.StatusBadRequest)
return
}
if redir != "" {
http.Redirect(w, r, redir, http.StatusFound)
return
}
templ.Handler(
web.Base("You are not a bot!", web.StaticHappy()),
).ServeHTTP(w, r)
} else {
s.next.ServeHTTP(w, r)
}
}
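// Hedged sketch of how the nil-next branch above is reached: with Next unset,
// ServeHTTPNext answers a passed challenge with a redirect to the validated
// ?redir= target (or the "You are not a bot!" page) instead of proxying. The
// domain below is a placeholder, not a value from this change.
func newStandaloneChecker(pol *policy.ParsedConfig) (*Server, error) {
    return New(Options{
        Policy:          pol,
        RedirectDomains: []string{"git.example"}, // allow-list consulted when redir carries an absolute URL
    })
}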


@@ -2,31 +2,18 @@ package policy
import (
"fmt"
"regexp"
"github.com/TecharoHQ/anubis/internal"
"github.com/TecharoHQ/anubis/lib/policy/config"
"github.com/yl2chen/cidranger"
)
type Bot struct {
Name string
UserAgent *regexp.Regexp
Path *regexp.Regexp
Action config.Rule `json:"action"`
Action config.Rule
Challenge *config.ChallengeRules
Ranger cidranger.Ranger
Rules Checker
}
func (b Bot) Hash() (string, error) {
var pathRex string
if b.Path != nil {
pathRex = b.Path.String()
}
var userAgentRex string
if b.UserAgent != nil {
userAgentRex = b.UserAgent.String()
}
return internal.SHA256sum(fmt.Sprintf("%s::%s::%s", b.Name, pathRex, userAgentRex)), nil
func (b Bot) Hash() string {
return internal.SHA256sum(fmt.Sprintf("%s::%s", b.Name, b.Rules.Hash()))
}

lib/policy/checker.go Normal file (201 lines added)

@@ -0,0 +1,201 @@
package policy
import (
"errors"
"fmt"
"net"
"net/http"
"regexp"
"strings"
"github.com/TecharoHQ/anubis/internal"
"github.com/yl2chen/cidranger"
)
var (
ErrMisconfiguration = errors.New("[unexpected] policy: administrator misconfiguration")
)
type Checker interface {
Check(*http.Request) (bool, error)
Hash() string
}
type CheckerList []Checker
func (cl CheckerList) Check(r *http.Request) (bool, error) {
for _, c := range cl {
ok, err := c.Check(r)
if err != nil {
return ok, err
}
if ok {
return ok, nil
}
}
return false, nil
}
func (cl CheckerList) Hash() string {
var sb strings.Builder
for _, c := range cl {
fmt.Fprintln(&sb, c.Hash())
}
return internal.SHA256sum(sb.String())
}
type RemoteAddrChecker struct {
ranger cidranger.Ranger
hash string
}
func NewRemoteAddrChecker(cidrs []string) (Checker, error) {
ranger := cidranger.NewPCTrieRanger()
var sb strings.Builder
for _, cidr := range cidrs {
_, rng, err := net.ParseCIDR(cidr)
if err != nil {
return nil, fmt.Errorf("%w: range %s not parsing: %w", ErrMisconfiguration, cidr, err)
}
ranger.Insert(cidranger.NewBasicRangerEntry(*rng))
fmt.Fprintln(&sb, cidr)
}
return &RemoteAddrChecker{
ranger: ranger,
hash: internal.SHA256sum(sb.String()),
}, nil
}
func (rac *RemoteAddrChecker) Check(r *http.Request) (bool, error) {
host := r.Header.Get("X-Real-Ip")
if host == "" {
return false, fmt.Errorf("%w: header X-Real-Ip is not set", ErrMisconfiguration)
}
addr := net.ParseIP(host)
if addr == nil {
return false, fmt.Errorf("%w: %s is not an IP address", ErrMisconfiguration, host)
}
ok, err := rac.ranger.Contains(addr)
if err != nil {
return false, err
}
if ok {
return true, nil
}
return false, nil
}
func (rac *RemoteAddrChecker) Hash() string {
return rac.hash
}
type HeaderMatchesChecker struct {
header string
regexp *regexp.Regexp
hash string
}
func NewUserAgentChecker(rexStr string) (Checker, error) {
return NewHeaderMatchesChecker("User-Agent", rexStr)
}
func NewHeaderMatchesChecker(header, rexStr string) (Checker, error) {
rex, err := regexp.Compile(strings.TrimSpace(rexStr))
if err != nil {
return nil, fmt.Errorf("%w: regex %s failed parse: %w", ErrMisconfiguration, rexStr, err)
}
return &HeaderMatchesChecker{strings.TrimSpace(header), rex, internal.SHA256sum(header + ": " + rexStr)}, nil
}
func (hmc *HeaderMatchesChecker) Check(r *http.Request) (bool, error) {
if hmc.regexp.MatchString(r.Header.Get(hmc.header)) {
return true, nil
}
return false, nil
}
func (hmc *HeaderMatchesChecker) Hash() string {
return hmc.hash
}
type PathChecker struct {
regexp *regexp.Regexp
hash string
}
func NewPathChecker(rexStr string) (Checker, error) {
rex, err := regexp.Compile(strings.TrimSpace(rexStr))
if err != nil {
return nil, fmt.Errorf("%w: regex %s failed parse: %w", ErrMisconfiguration, rexStr, err)
}
return &PathChecker{rex, internal.SHA256sum(rexStr)}, nil
}
func (pc *PathChecker) Check(r *http.Request) (bool, error) {
if pc.regexp.MatchString(r.URL.Path) {
return true, nil
}
return false, nil
}
func (pc *PathChecker) Hash() string {
return pc.hash
}
func NewHeaderExistsChecker(key string) Checker {
return headerExistsChecker{strings.TrimSpace(key)}
}
type headerExistsChecker struct {
header string
}
func (hec headerExistsChecker) Check(r *http.Request) (bool, error) {
if r.Header.Get(hec.header) != "" {
return true, nil
}
return false, nil
}
func (hec headerExistsChecker) Hash() string {
return internal.SHA256sum(hec.header)
}
func NewHeadersChecker(headermap map[string]string) (Checker, error) {
var result CheckerList
var errs []error
for key, rexStr := range headermap {
if rexStr == ".*" {
result = append(result, headerExistsChecker{strings.TrimSpace(key)})
continue
}
rex, err := regexp.Compile(strings.TrimSpace(rexStr))
if err != nil {
errs = append(errs, fmt.Errorf("while compiling header %s regex %s: %w", key, rexStr, err))
continue
}
result = append(result, &HeaderMatchesChecker{key, rex, internal.SHA256sum(key + ": " + rexStr)})
}
if len(errs) != 0 {
return nil, errors.Join(errs...)
}
return result, nil
}
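// Hedged composition example (the regular expressions are placeholders, not
// rules from botPolicies): the checkers above combine through CheckerList.
func exampleChecker() (Checker, error) {
    ua, err := NewUserAgentChecker(`(?i)examplebot`)
    if err != nil {
        return nil, err
    }
    path, err := NewPathChecker(`^/git/`)
    if err != nil {
        return nil, err
    }
    // CheckerList.Check reports a match as soon as any member matches, and
    // CheckerList.Hash folds the member hashes into one SHA-256 sum that
    // Bot.Hash reuses for the "error code" shown on deny pages.
    return CheckerList{ua, path}, nil
}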

lib/policy/checker_test.go Normal file (200 lines added)

@@ -0,0 +1,200 @@
package policy
import (
"errors"
"net/http"
"testing"
)
func TestRemoteAddrChecker(t *testing.T) {
for _, tt := range []struct {
name string
cidrs []string
ip string
ok bool
err error
}{
{
name: "match_ipv4",
cidrs: []string{"0.0.0.0/0"},
ip: "1.1.1.1",
ok: true,
err: nil,
},
{
name: "match_ipv6",
cidrs: []string{"::/0"},
ip: "cafe:babe::",
ok: true,
err: nil,
},
{
name: "not_match_ipv4",
cidrs: []string{"1.1.1.1/32"},
ip: "1.1.1.2",
ok: false,
err: nil,
},
{
name: "not_match_ipv6",
cidrs: []string{"cafe:babe::/128"},
ip: "cafe:babe:4::/128",
ok: false,
err: nil,
},
{
name: "no_ip_set",
cidrs: []string{"::/0"},
ok: false,
err: ErrMisconfiguration,
},
{
name: "invalid_ip",
cidrs: []string{"::/0"},
ip: "According to all natural laws of aviation",
ok: false,
err: ErrMisconfiguration,
},
} {
t.Run(tt.name, func(t *testing.T) {
rac, err := NewRemoteAddrChecker(tt.cidrs)
if err != nil && !errors.Is(err, tt.err) {
t.Fatalf("creating RemoteAddrChecker failed: %v", err)
}
r, err := http.NewRequest(http.MethodGet, "/", nil)
if err != nil {
t.Fatalf("can't make request: %v", err)
}
if tt.ip != "" {
r.Header.Add("X-Real-Ip", tt.ip)
}
ok, err := rac.Check(r)
if tt.ok != ok {
t.Errorf("ok: %v, wanted: %v", ok, tt.ok)
}
if err != nil && tt.err != nil && !errors.Is(err, tt.err) {
t.Errorf("err: %v, wanted: %v", err, tt.err)
}
})
}
}
func TestHeaderMatchesChecker(t *testing.T) {
for _, tt := range []struct {
name string
header string
rexStr string
reqHeaderKey string
reqHeaderValue string
ok bool
err error
}{
{
name: "match",
header: "Cf-Worker",
rexStr: ".*",
reqHeaderKey: "Cf-Worker",
reqHeaderValue: "true",
ok: true,
err: nil,
},
{
name: "not_match",
header: "Cf-Worker",
rexStr: "false",
reqHeaderKey: "Cf-Worker",
reqHeaderValue: "true",
ok: false,
err: nil,
},
{
name: "not_present",
header: "Cf-Worker",
rexStr: "foobar",
reqHeaderKey: "Something-Else",
reqHeaderValue: "true",
ok: false,
err: nil,
},
{
name: "invalid_regex",
rexStr: "a(b",
err: ErrMisconfiguration,
},
} {
t.Run(tt.name, func(t *testing.T) {
hmc, err := NewHeaderMatchesChecker(tt.header, tt.rexStr)
if err != nil && !errors.Is(err, tt.err) {
t.Fatalf("creating HeaderMatchesChecker failed")
}
if tt.err != nil && hmc == nil {
return
}
r, err := http.NewRequest(http.MethodGet, "/", nil)
if err != nil {
t.Fatalf("can't make request: %v", err)
}
r.Header.Set(tt.reqHeaderKey, tt.reqHeaderValue)
ok, err := hmc.Check(r)
if tt.ok != ok {
t.Errorf("ok: %v, wanted: %v", ok, tt.ok)
}
if err != nil && tt.err != nil && !errors.Is(err, tt.err) {
t.Errorf("err: %v, wanted: %v", err, tt.err)
}
})
}
}
func TestHeaderExistsChecker(t *testing.T) {
for _, tt := range []struct {
name string
header string
reqHeader string
ok bool
}{
{
name: "match",
header: "Authorization",
reqHeader: "Authorization",
ok: true,
},
{
name: "not_match",
header: "Authorization",
reqHeader: "Authentication",
},
} {
t.Run(tt.name, func(t *testing.T) {
hec := headerExistsChecker{tt.header}
r, err := http.NewRequest(http.MethodGet, "/", nil)
if err != nil {
t.Fatalf("can't make request: %v", err)
}
r.Header.Set(tt.reqHeader, "hunter2")
ok, err := hec.Check(r)
if tt.ok != ok {
t.Errorf("ok: %v, wanted: %v", ok, tt.ok)
}
if err != nil {
t.Errorf("err: %v", err)
}
})
}
}


@@ -1,4 +1,4 @@
package lib
package policy
import (
"log/slog"
@@ -16,10 +16,3 @@ func (cr CheckResult) LogValue() slog.Value {
slog.String("name", cr.Name),
slog.String("rule", string(cr.Rule)))
}
func cr(name string, rule config.Rule) CheckResult {
return CheckResult{
Name: name,
Rule: rule,
}
}


@@ -3,19 +3,33 @@ package config
import (
"errors"
"fmt"
"io"
"io/fs"
"net"
"net/http"
"os"
"regexp"
"strings"
"github.com/TecharoHQ/anubis/data"
"k8s.io/apimachinery/pkg/util/yaml"
)
var (
ErrNoBotRulesDefined = errors.New("config: must define at least one (1) bot rule")
ErrBotMustHaveName = errors.New("config.Bot: must set name")
ErrBotMustHaveUserAgentOrPath = errors.New("config.Bot: must set either user_agent_regex, path_regex, or remote_addresses")
ErrBotMustHaveUserAgentOrPath = errors.New("config.Bot: must set either user_agent_regex, path_regex, headers_regex, or remote_addresses")
ErrBotMustHaveUserAgentOrPathNotBoth = errors.New("config.Bot: must set either user_agent_regex, path_regex, and not both")
ErrUnknownAction = errors.New("config.Bot: unknown action")
ErrInvalidUserAgentRegex = errors.New("config.Bot: invalid user agent regex")
ErrInvalidPathRegex = errors.New("config.Bot: invalid path regex")
ErrInvalidHeadersRegex = errors.New("config.Bot: invalid headers regex")
ErrInvalidCIDR = errors.New("config.Bot: invalid CIDR")
ErrRegexEndsWithNewline = errors.New("config.Bot: regular expression ends with newline (try >- instead of > in yaml)")
ErrInvalidImportStatement = errors.New("config.ImportStatement: invalid source file")
ErrCantSetBotAndImportValuesAtOnce = errors.New("config.BotOrImport: can't set bot rules and import values at the same time")
ErrMustSetBotOrImportRules = errors.New("config.BotOrImport: rule definition is invalid, you must set either bot rules or an import statement, not both")
ErrStatusCodeNotValid = errors.New("config.StatusCode: status code not valid, must be between 100 and 599")
)
type Rule string
@@ -31,20 +45,37 @@ const (
type Algorithm string
const (
AlgorithmUnknown Algorithm = ""
AlgorithmFast Algorithm = "fast"
AlgorithmSlow Algorithm = "slow"
AlgorithmArgon2ID Algorithm = "argon2id"
AlgorithmSHA256 Algorithm = "sha256"
AlgorithmUnknown Algorithm = ""
AlgorithmFast Algorithm = "fast"
AlgorithmSlow Algorithm = "slow"
)
type BotConfig struct {
Name string `json:"name"`
UserAgentRegex *string `json:"user_agent_regex"`
PathRegex *string `json:"path_regex"`
Action Rule `json:"action"`
RemoteAddr []string `json:"remote_addresses"`
Challenge *ChallengeRules `json:"challenge,omitempty"`
Name string `json:"name"`
UserAgentRegex *string `json:"user_agent_regex"`
PathRegex *string `json:"path_regex"`
HeadersRegex map[string]string `json:"headers_regex"`
Action Rule `json:"action"`
RemoteAddr []string `json:"remote_addresses"`
Challenge *ChallengeRules `json:"challenge,omitempty"`
}
func (b BotConfig) Zero() bool {
for _, cond := range []bool{
b.Name != "",
b.UserAgentRegex != nil,
b.PathRegex != nil,
len(b.HeadersRegex) != 0,
b.Action != "",
len(b.RemoteAddr) != 0,
b.Challenge != nil,
} {
if cond {
return false
}
}
return true
}
func (b BotConfig) Valid() error {
@@ -54,7 +85,7 @@ func (b BotConfig) Valid() error {
errs = append(errs, ErrBotMustHaveName)
}
if b.UserAgentRegex == nil && b.PathRegex == nil && len(b.RemoteAddr) == 0 {
if b.UserAgentRegex == nil && b.PathRegex == nil && len(b.RemoteAddr) == 0 && len(b.HeadersRegex) == 0 {
errs = append(errs, ErrBotMustHaveUserAgentOrPath)
}
@@ -63,17 +94,41 @@ func (b BotConfig) Valid() error {
}
if b.UserAgentRegex != nil {
if strings.HasSuffix(*b.UserAgentRegex, "\n") {
errs = append(errs, fmt.Errorf("%w: user agent regex: %q", ErrRegexEndsWithNewline, *b.UserAgentRegex))
}
if _, err := regexp.Compile(*b.UserAgentRegex); err != nil {
errs = append(errs, ErrInvalidUserAgentRegex, err)
}
}
if b.PathRegex != nil {
if strings.HasSuffix(*b.PathRegex, "\n") {
errs = append(errs, fmt.Errorf("%w: path regex: %q", ErrRegexEndsWithNewline, *b.PathRegex))
}
if _, err := regexp.Compile(*b.PathRegex); err != nil {
errs = append(errs, ErrInvalidPathRegex, err)
}
}
if len(b.HeadersRegex) > 0 {
for name, expr := range b.HeadersRegex {
if name == "" {
continue
}
if strings.HasSuffix(expr, "\n") {
errs = append(errs, fmt.Errorf("%w: header %s regex: %q", ErrRegexEndsWithNewline, name, expr))
}
if _, err := regexp.Compile(expr); err != nil {
errs = append(errs, ErrInvalidHeadersRegex, err)
}
}
}
if len(b.RemoteAddr) > 0 {
for _, cidr := range b.RemoteAddr {
if _, _, err := net.ParseCIDR(cidr); err != nil {
@@ -103,8 +158,8 @@ func (b BotConfig) Valid() error {
}
type ChallengeRules struct {
Difficulty uint32 `json:"difficulty"`
ReportAs uint32 `json:"report_as"`
Difficulty int `json:"difficulty"`
ReportAs int `json:"report_as"`
Algorithm Algorithm `json:"algorithm"`
}
@@ -126,7 +181,7 @@ func (cr ChallengeRules) Valid() error {
}
switch cr.Algorithm {
case AlgorithmFast, AlgorithmSlow, AlgorithmArgon2ID, AlgorithmSHA256, AlgorithmUnknown:
case AlgorithmFast, AlgorithmSlow, AlgorithmUnknown:
// do nothing, it's all good
default:
errs = append(errs, fmt.Errorf("%w: %q", ErrChallengeRuleHasWrongAlgorithm, cr.Algorithm))
@@ -139,9 +194,181 @@ func (cr ChallengeRules) Valid() error {
return nil
}
type ImportStatement struct {
Import string `json:"import"`
Bots []BotConfig
}
func (is *ImportStatement) open() (fs.File, error) {
if strings.HasPrefix(is.Import, "(data)/") {
fname := strings.TrimPrefix(is.Import, "(data)/")
fin, err := data.BotPolicies.Open(fname)
return fin, err
}
return os.Open(is.Import)
}
func (is *ImportStatement) load() error {
fin, err := is.open()
if err != nil {
return fmt.Errorf("can't open %s: %w", is.Import, err)
}
defer fin.Close()
var result []BotConfig
if err := yaml.NewYAMLToJSONDecoder(fin).Decode(&result); err != nil {
return fmt.Errorf("can't parse %s: %w", is.Import, err)
}
var errs []error
for _, b := range result {
if err := b.Valid(); err != nil {
errs = append(errs, err)
}
}
if len(errs) != 0 {
return fmt.Errorf("config %s is not valid:\n%w", is.Import, errors.Join(errs...))
}
is.Bots = result
return nil
}
func (is *ImportStatement) Valid() error {
return is.load()
}
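Because open() special-cases the (data)/ prefix, an import resolves either against the embedded data.BotPolicies filesystem or against a path on disk. A small sketch; the on-disk path is invented:

package config

import "fmt"

func ExampleImportStatement_Valid() {
	// Shipped with the binary via the embedded data.BotPolicies FS.
	embedded := &ImportStatement{Import: "(data)/bots/ai-robots-txt.yaml"}

	// No "(data)/" prefix, so this is opened from disk; the path here is
	// illustrative only and almost certainly does not exist.
	onDisk := &ImportStatement{Import: "/etc/anubis/extra-bots.yaml"}

	// Valid loads and validates the referenced bot definitions.
	fmt.Println(embedded.Valid())
	fmt.Println(onDisk.Valid() != nil)
}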
type BotOrImport struct {
*BotConfig `json:",inline"`
*ImportStatement `json:",inline"`
}
func (boi *BotOrImport) Valid() error {
if boi.BotConfig != nil && boi.ImportStatement != nil {
return ErrCantSetBotAndImportValuesAtOnce
}
if boi.BotConfig != nil {
return boi.BotConfig.Valid()
}
if boi.ImportStatement != nil {
return boi.ImportStatement.Valid()
}
return ErrMustSetBotOrImportRules
}
type StatusCodes struct {
Challenge int `json:"CHALLENGE"`
Deny int `json:"DENY"`
}
func (sc StatusCodes) Valid() error {
var errs []error
if sc.Challenge < 100 || sc.Challenge > 599 {
errs = append(errs, fmt.Errorf("%w: challenge is %d", ErrStatusCodeNotValid, sc.Challenge))
}
if sc.Deny < 100 || sc.Deny > 599 {
errs = append(errs, fmt.Errorf("%w: deny is %d", ErrStatusCodeNotValid, sc.Deny))
}
if len(errs) != 0 {
return fmt.Errorf("status codes not valid:\n%w", errors.Join(errs...))
}
return nil
}
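A quick sanity sketch of the range check above, with arbitrarily chosen codes:

package config

import "fmt"

func ExampleStatusCodes_Valid() {
	ok := StatusCodes{Challenge: 401, Deny: 403}
	bad := StatusCodes{Challenge: 200, Deny: 700} // 700 is outside 100-599

	fmt.Println(ok.Valid() == nil, bad.Valid() != nil)
}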
type fileConfig struct {
Bots []BotOrImport `json:"bots"`
DNSBL bool `json:"dnsbl"`
StatusCodes StatusCodes `json:"status_codes"`
}
func (c fileConfig) Valid() error {
var errs []error
if len(c.Bots) == 0 {
errs = append(errs, ErrNoBotRulesDefined)
}
for _, b := range c.Bots {
if err := b.Valid(); err != nil {
errs = append(errs, err)
}
}
if err := c.StatusCodes.Valid(); err != nil {
errs = append(errs, err)
}
if len(errs) != 0 {
return fmt.Errorf("config is not valid:\n%w", errors.Join(errs...))
}
return nil
}
func Load(fin io.Reader, fname string) (*Config, error) {
var c fileConfig
c.StatusCodes = StatusCodes{
Challenge: http.StatusOK,
Deny: http.StatusOK,
}
if err := yaml.NewYAMLToJSONDecoder(fin).Decode(&c); err != nil {
return nil, fmt.Errorf("can't parse policy config YAML %s: %w", fname, err)
}
if err := c.Valid(); err != nil {
return nil, err
}
result := &Config{
DNSBL: c.DNSBL,
StatusCodes: c.StatusCodes,
}
var validationErrs []error
for _, boi := range c.Bots {
if boi.ImportStatement != nil {
if err := boi.load(); err != nil {
validationErrs = append(validationErrs, err)
continue
}
result.Bots = append(result.Bots, boi.ImportStatement.Bots...)
}
if boi.BotConfig != nil {
if err := boi.BotConfig.Valid(); err != nil {
validationErrs = append(validationErrs, err)
continue
}
result.Bots = append(result.Bots, *boi.BotConfig)
}
}
if len(validationErrs) > 0 {
return nil, fmt.Errorf("errors validating policy config %s: %w", fname, errors.Join(validationErrs...))
}
return result, nil
}
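Load pulls the pieces together: status code defaults are applied before decoding, import statements are flattened into plain BotConfig entries, and all validation errors are joined. A minimal sketch against an in-memory policy; the second rule is invented:

package config

import (
	"fmt"
	"strings"
)

func ExampleLoad() {
	policy := `
bots:
  - import: (data)/bots/ai-robots-txt.yaml
  - name: documentation-range
    remote_addresses: ["192.0.2.0/24"]
    action: DENY
`

	c, err := Load(strings.NewReader(policy), "inline.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}

	// The import is expanded, so there are at least two bots, and the
	// unset status codes keep the defaults applied before decoding.
	fmt.Println(len(c.Bots) >= 2, c.StatusCodes.Challenge, c.StatusCodes.Deny)
}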
type Config struct {
Bots []BotConfig `json:"bots"`
DNSBL bool `json:"dnsbl"`
Bots []BotConfig
DNSBL bool
StatusCodes StatusCodes
}
func (c Config) Valid() error {

View File

@@ -1,11 +1,14 @@
package config
import (
"encoding/json"
"errors"
"io/fs"
"os"
"path/filepath"
"testing"
"github.com/TecharoHQ/anubis/data"
"k8s.io/apimachinery/pkg/util/yaml"
)
func p[V any](v V) *V { return &v }
@@ -87,6 +90,18 @@ func TestBotValid(t *testing.T) {
},
err: ErrInvalidPathRegex,
},
{
name: "invalid headers regex",
bot: BotConfig{
Name: "mozilla-ua",
Action: RuleChallenge,
HeadersRegex: map[string]string{
"Content-Type": "a(b",
},
PathRegex: p("a(b"),
},
err: ErrInvalidHeadersRegex,
},
{
name: "challenge difficulty too low",
bot: BotConfig{
@@ -206,13 +221,69 @@ func TestConfigValidKnownGood(t *testing.T) {
}
defer fin.Close()
var c Config
if err := json.NewDecoder(fin).Decode(&c); err != nil {
t.Fatalf("can't decode file: %v", err)
c, err := Load(fin, st.Name())
if err != nil {
t.Fatal(err)
}
if err := c.Valid(); err != nil {
t.Fatal(err)
t.Error(err)
}
if len(c.Bots) == 0 {
t.Error("wanted more than 0 bots, got zero")
}
})
}
}
func TestImportStatement(t *testing.T) {
type testCase struct {
name string
importPath string
err error
}
var tests []testCase
for _, folderName := range []string{
"apps",
"bots",
"common",
"crawlers",
} {
if err := fs.WalkDir(data.BotPolicies, folderName, func(path string, d fs.DirEntry, err error) error {
if err != nil {
return err
}
if d.IsDir() {
return nil
}
tests = append(tests, testCase{
name: "(data)/" + path,
importPath: "(data)/" + path,
err: nil,
})
return nil
}); err != nil {
t.Fatal(err)
}
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
is := &ImportStatement{
Import: tt.importPath,
}
if err := is.Valid(); err != nil {
t.Errorf("validation error: %v", err)
}
if len(is.Bots) == 0 {
t.Error("wanted bot definitions, but got none")
}
})
}
@@ -233,8 +304,8 @@ func TestConfigValidBad(t *testing.T) {
}
defer fin.Close()
var c Config
if err := json.NewDecoder(fin).Decode(&c); err != nil {
var c fileConfig
if err := yaml.NewYAMLToJSONDecoder(fin).Decode(&c); err != nil {
t.Fatalf("can't decode file: %v", err)
}
@@ -246,3 +317,49 @@ func TestConfigValidBad(t *testing.T) {
})
}
}
func TestBotConfigZero(t *testing.T) {
var b BotConfig
if !b.Zero() {
t.Error("zero value BotConfig is not zero value")
}
b.Name = "hi"
if b.Zero() {
t.Error("BotConfig with name is zero value")
}
b.UserAgentRegex = p(".*")
if b.Zero() {
t.Error("BotConfig with user agent regex is zero value")
}
b.PathRegex = p(".*")
if b.Zero() {
t.Error("BotConfig with path regex is zero value")
}
b.HeadersRegex = map[string]string{"hi": "there"}
if b.Zero() {
t.Error("BotConfig with headers regex is zero value")
}
b.Action = RuleAllow
if b.Zero() {
t.Error("BotConfig with action is zero value")
}
b.RemoteAddr = []string{"::/0"}
if b.Zero() {
t.Error("BotConfig with remote addresses is zero value")
}
b.Challenge = &ChallengeRules{
Difficulty: 4,
ReportAs: 4,
Algorithm: AlgorithmFast,
}
if b.Zero() {
t.Error("BotConfig with challenge rules is zero value")
}
}

View File

@@ -9,6 +9,13 @@
"name": "user-agent-bad",
"user_agent_regex": "a(b",
"action": "DENY"
},
{
"name": "headers-bad",
"headers": {
"Accept-Encoding": "a(b"
},
"action": "DENY"
}
]
}

View File

@@ -0,0 +1,7 @@
bots:
- name: path-bad
path_regex: "a(b"
action: DENY
- name: user-agent-bad
user_agent_regex: "a(b"
action: DENY

View File

@@ -0,0 +1,10 @@
{
"bots": [
{
"import": "(data)/bots/ai-robots-txt.yaml",
"name": "generic-browser",
"user_agent_regex": "Mozilla|Opera\n",
"action": "CHALLENGE"
}
]
}

View File

@@ -0,0 +1,6 @@
bots:
- import: (data)/bots/ai-robots-txt.yaml
name: generic-browser
user_agent_regex: >
Mozilla|Opera
action: CHALLENGE

View File

@@ -0,0 +1,7 @@
{
"bots": [
{
"import": "(data)/does-not-exist-fake-file.yaml"
}
]
}

View File

@@ -0,0 +1,2 @@
bots:
- import: (data)/does-not-exist-fake-file.yaml

View File

@@ -0,0 +1 @@
bots: []

View File

@@ -0,0 +1 @@
{}

Some files were not shown because too many files have changed in this diff.