Compare commits


343 Commits

Author SHA1 Message Date
Jason Cameron
2b44b29013 Revert "build(deps): bump the gomod group across 1 directory with 3 updates (…"
This reverts commit ebad69a4e1.
2026-01-03 19:08:52 -05:00
dependabot[bot]
ebad69a4e1 build(deps): bump the gomod group across 1 directory with 3 updates (#1370)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Jason Cameron <jason.cameron@stanwith.me>
2026-01-03 19:06:05 -05:00
lif
71147b4857 fix: respect Accept-Language quality factors in language detection (#1380)
The Accept-Language header parsing was not correctly handling quality
factors. When a browser sends "en-GB,de-DE;q=0.5", the expected behavior
is to prefer English (q=1.0 by default) over German (q=0.5).

The fix uses golang.org/x/text/language.ParseAcceptLanguage to properly
parse and sort language preferences by quality factor. It also adds base
language fallbacks (e.g., "en" for "en-GB") to ensure regional variants
match their parent languages when no exact match exists.

Fixes #1022

Signed-off-by: majiayu000 <1835304752@qq.com>
2026-01-02 08:01:43 -05:00
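The quality-factor ordering described above can be sketched in a few lines. This is a simplified, stdlib-only illustration of the behavior being fixed; the actual fix delegates parsing and sorting to golang.org/x/text/language.ParseAcceptLanguage, and the names below are illustrative.

```go
package main

import (
	"sort"
	"strconv"
	"strings"
)

// langPref is one entry from an Accept-Language header.
type langPref struct {
	tag string
	q   float64
}

// parseAcceptLanguage splits an Accept-Language header into tags with
// quality factors and sorts them by descending q. A missing q defaults
// to 1.0, which is why "en-GB,de-DE;q=0.5" must prefer English.
func parseAcceptLanguage(header string) []langPref {
	var prefs []langPref
	for _, part := range strings.Split(header, ",") {
		fields := strings.Split(strings.TrimSpace(part), ";")
		p := langPref{tag: fields[0], q: 1.0} // q defaults to 1.0 when absent
		for _, f := range fields[1:] {
			f = strings.TrimSpace(f)
			if strings.HasPrefix(f, "q=") {
				if q, err := strconv.ParseFloat(f[2:], 64); err == nil {
					p.q = q
				}
			}
		}
		prefs = append(prefs, p)
	}
	// Stable sort so entries with equal q keep their header order.
	sort.SliceStable(prefs, func(i, j int) bool { return prefs[i].q > prefs[j].q })
	return prefs
}
```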
lif
cee7871ef8 fix: update SSL Labs IP addresses (#1377)
Signed-off-by: majiayu000 <1835304752@qq.com>
Co-authored-by: Jason Cameron <jason.cameron@stanwith.me>
2026-01-01 23:21:31 -05:00
Jason Cameron
26d258fb94 Update check-spelling metadata (#1379) 2026-01-01 23:02:15 +00:00
Xe Iaso
80a8e0a8ae chore: add Databento as diamond tier sponsor
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-12-30 10:56:58 -05:00
Xe Iaso
359613f35a feat: iplist2rule utility command (#1373)
* feat: iplist2rule utility command

Assisted-By: GLM 4.7 via Claude Code
Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: update CHANGELOG

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: fix spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: fix spelling again

Signed-off-by: Xe Iaso <me@xeiaso.net>

* feat(iplist2rule): add comment describing how rule was generated

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: add iplist2rule docs

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: fix spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-12-29 17:10:17 +00:00
Xe Iaso
1d8e98c5ec test(nginx): fix tests to work in GHA (#1372)
* test(nginx): fix tests to work in GHA

Closes: #1371
Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(test): does this work lol

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(test): does this other thing work lol

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(test): pki folder location

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: Jason Cameron <git@jasoncameron.dev>
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-12-28 23:59:48 -05:00
Jason Cameron
880020095c fix(test): remove interactive flag from nginx smoke test docker run command (#1371) 2025-12-29 03:14:50 +00:00
dependabot[bot]
f5728e96a1 build(deps-dev): bump esbuild from 0.27.1 to 0.27.2 in the npm group (#1368)
Co-authored-by: Jason Cameron <git@jsn.cam>
2025-12-28 22:07:44 -05:00
dependabot[bot]
bcf525dbcf build(deps): bump the github-actions group with 3 updates (#1369)
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-12-28 22:04:16 -05:00
Xe Iaso
d748dc9da8 test: basic nginx smoke test (#1365)
* docs: split nginx configuration files to their own directory

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test: add nginx config smoke test based on the config in the docs

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-12-28 23:18:25 +00:00
p0008874
9b210d795e docs(known-instances): Alphabetical order + Add Valve Corporation (#1352)
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-12-26 01:05:26 +00:00
The Ninth
e084e5011e feat(localization): add Polish language translation (#1363)
(cherry picked from commit 1f9c2272e6)

Co-authored-by: bplajzer <b.plajzerr@gmail.com>
2025-12-25 15:14:04 -05:00
dependabot[bot]
2532478abd build(deps): bump the github-actions group with 4 updates (#1355)
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-12-24 01:02:48 -05:00
Xe Iaso
6d9c0abe74 chore: tag v1.24.0
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-12-23 21:17:59 -05:00
Xe Iaso
a37068a423 fix(default-config): remove browser detection logic (#1360)
Looks like these rules don't work anymore.

Closes: #1353

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-12-24 02:13:54 +00:00
Xe Iaso
9d9be61c24 fix(default-config): must-accept-rule on browsers only (#1350)
TIL docker clients don't include the Accept header all the time. I would
have thought they did that. Oops.

Closes: #1346

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-12-19 20:42:24 +00:00
Michael
535ed74b17 i18n(de): improve consistency and wording (#1348)
- Use consistent informal address (fix simplified_explanation)
- Translate "protected_from" ("From" → "Von")
- Standardize "Webseite" → "Website"
- Use more natural phrasing:
  - "Berechnung wird durchgeführt" → "Berechnung läuft"
  - "Zur Hauptseite" → "Zur Startseite"
  - Replace awkward "sozialen Vertrag" phrasing
- "Fingerabdruckerkennung" → "Browser-Fingerprinting" (more common)
- Improve sentence structure and punctuation

Signed-off-by: Michael <87752300+michi-onl@users.noreply.github.com>
2025-12-19 00:29:49 +00:00
Xe Iaso
ba8a1b7caf fix(honeypot/naive): right, we want the client IP, not the load balancer IP
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-12-16 04:44:59 -05:00
Xe Iaso
40afc13d7f fix(honeypot/naive): implement better IP parsing logic
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-12-16 04:32:45 -05:00
Xe Iaso
122e4bc072 feat: first implementation of honeypot logic (#1342)
* feat: first implementation of honeypot logic

This is a bit of an experiment, stick with me.

The core idea here is that badly written crawlers are that: badly
written. They look for anything that contains `<a href="whatever" />`
tags and will blindly use those values to recurse. This takes advantage
of that by hiding a link in a `<script>` tag like this:

```html
<script type="ignore"><a href="/bots-only">Don't click</a></script>
```

Browsers will ignore it because they have no handler for the "ignore"
script type.

This current draft is very unoptimized (it takes like 7 seconds to
generate a page on my tower); switching spintax libraries will make
this much faster.

The hope is to make this pluggable with WebAssembly such that we force
administrators to choose a storage method. First we crawl before we
walk.

The AI involvement in this commit is limited to the spintax in
affirmations.txt, spintext.txt, and titles.txt. This generates a bunch
of "pseudoprofound bullshit" like the following:

> This Restoration to Balance & Alignment
>
> There's a moment when creators are being called to realize that the work
> can't be reduced to results, but about energy. We don't innovate products
> by pushing harder, we do it by holding the vision. Because momentum can't
> be forced, it unfolds over time when culture are moving in the same
> direction. We're being invited into a paradigm shift in how we think
> about innovation. [...]

This is intended to "look" like normal article text. As this is a first
draft, this sucks and will be improved upon.

Assisted-by: GLM 4.6, ChatGPT, GPT-OSS 120b
Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(honeypot/naive): optimize hilariously

Signed-off-by: Xe Iaso <me@xeiaso.net>

* feat(honeypot/naive): attempt to automatically filter out based on crawling

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(lib): use mazeGen instead of bsGen

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: add honeypot docs

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(test): go mod tidy

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: fix spelling metadata

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-12-16 04:14:29 -05:00
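The trap described above works because a context-blind crawler extracts every `href` it sees, even inside a `<script>` tag a browser would never execute. A minimal sketch of such a crawler (hypothetical names, not Anubis code):

```go
package main

import "regexp"

// hrefPattern mimics what a badly written crawler does: grab every href
// value in the raw HTML with no awareness of the surrounding context.
var hrefPattern = regexp.MustCompile(`href="([^"]+)"`)

// naiveExtract returns every link a context-blind crawler would follow,
// including links hidden inside script tags that browsers ignore.
func naiveExtract(html string) []string {
	var links []string
	for _, m := range hrefPattern.FindAllStringSubmatch(html, -1) {
		links = append(links, m[1])
	}
	return links
}
```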
dependabot[bot]
cb91145352 build(deps): bump the gomod group across 1 directory with 6 updates (#1341)
Co-authored-by: Jason Cameron <jason.cameron@stanwith.me>
2025-12-15 02:43:18 +00:00
dependabot[bot]
5c97d693c1 build(deps): bump the github-actions group across 1 directory with 4 updates (#1340)
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-12-15 02:34:45 +00:00
dependabot[bot]
988906bb79 build(deps): bump the npm group with 2 updates (#1339)
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-12-14 21:29:42 -05:00
Xe Iaso
9c54aa852f chore: v1.24.0-pre1
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-12-02 07:58:29 -05:00
dependabot[bot]
cb689ee55b build(deps): bump the gomod group with 5 updates (#1316)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-12-01 03:14:20 +00:00
dependabot[bot]
071b836741 build(deps): bump the github-actions group with 3 updates (#1317)
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-11-30 22:12:30 -05:00
Jason Cameron
bbdeee00f7 fix: pin Node.js and Go versions in CI configuration files (#1318)
fixes cache poisoning issues
2025-12-01 03:03:39 +00:00
dependabot[bot]
21d7753b1c build(deps): bump actions-hub/kubectl in the github-actions group (#1303)
Bumps the github-actions group with 1 update: [actions-hub/kubectl](https://github.com/actions-hub/kubectl).


Updates `actions-hub/kubectl` from 1.34.1 to 1.34.2
- [Release notes](https://github.com/actions-hub/kubectl/releases)
- [Commits](f14933a23b...1d2c1e96fe)

---
updated-dependencies:
- dependency-name: actions-hub/kubectl
  dependency-version: 1.34.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: github-actions
...

Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-11-30 13:53:32 -05:00
tbodt
43b8658bfd Show how to use subrequest auth with Caddy (#1312)
Signed-off-by: tbodt <tblodt@icloud.com>
2025-11-27 09:04:28 -05:00
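For context, Caddy's subrequest auth uses the `forward_auth` directive: Caddy asks an auth service about each request before proxying it. A hypothetical sketch, assuming Anubis's default bind port and check path (confirm both against the Anubis docs):

```caddyfile
example.com {
	# Ask Anubis before proxying; the port and check URI here are
	# assumptions for illustration.
	forward_auth anubis:8923 {
		uri /.within.website/x/cmd/anubis/api/check
	}
	reverse_proxy backend:3000
}
```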
The Ninth
00fa939acf Implement FCrDNS and other DNS features (#1308)
* Implement FCrDNS and other DNS features

* Redesign DNS cache and methods

* Fix DNS cache

* Rename regexSafe arg

* Alter verifyFCrDNS(addr) behaviour

* Remove unused dnsCache field from Server struct

* Upd expressions docs

* Update docs/docs/CHANGELOG.md

Signed-off-by: Xe Iaso <me@xeiaso.net>

* refactor(dns): simplify FCrDNS logging

* docs: clarify verifyFCrDNS behavior

Add a note to the documentation for `verifyFCrDNS` to clarify that it returns true when no PTR records are found for the given IP address.

* fix(dns): Improve FCrDNS error handling and tests

The `VerifyFCrDNS` function previously ignored errors returned from reverse DNS lookups. This could lead to incorrect passes when a DNS failure (other than a simple 'not found') occurred. This change ensures that any error from a reverse lookup will cause the FCrDNS check to fail.

The test suite for FCrDNS has been updated to reflect this change. The mock DNS lookups now simulate both 'not found' errors and other generic DNS errors. The test cases have been updated to ensure that the function behaves correctly in both scenarios, resolving a situation where two test cases were effectively duplicates.

* docs: Update FCrDNS documentation and spelling

Corrected a typo in the `verifyFCrDNS` function documentation.

Additionally, updated the spelling exception list to include new terms and remove redundant entries.

* chore: update spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-11-26 22:24:45 -05:00
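Forward-confirmed reverse DNS (FCrDNS), as implemented above, means: resolve the client IP to hostnames (PTR), resolve each hostname back to IPs, and pass only if the original IP appears in the forward results. A sketch with injected lookup functions so the logic is testable without real DNS; the function names and the error/no-PTR handling follow the commit's description but are not Anubis's actual API:

```go
package main

import "strings"

// verifyFCrDNS performs a forward-confirmed reverse DNS check. Lookup
// functions are injected so tests can mock DNS. Per the docs note above,
// no PTR records counts as a pass; a reverse-lookup error fails the check.
func verifyFCrDNS(
	ip string,
	lookupAddr func(string) ([]string, error), // PTR: IP -> hostnames
	lookupHost func(string) ([]string, error), // A/AAAA: hostname -> IPs
) bool {
	names, err := lookupAddr(ip)
	if err != nil {
		return false // any reverse-lookup error fails the check
	}
	if len(names) == 0 {
		return true // no PTR records is treated as a pass
	}
	for _, name := range names {
		addrs, err := lookupHost(strings.TrimSuffix(name, "."))
		if err != nil {
			continue
		}
		for _, a := range addrs {
			if a == ip {
				return true // forward lookup confirms the reverse
			}
		}
	}
	return false
}
```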
Xe Iaso
4ead3ed16e fix(config): deprecate the report_as field for challenges (#1311)
* fix(config): deprecate the report_as field for challenges

This was a bad idea when it was added, and it is irresponsible to
continue to have it. It causes more UX problems than it fixes with
sleight of hand.

Closes: #1310
Closes: #1307
Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(policy): use the new logger for config validation messages

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs(admin/thresholds): remove this report_as setting

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-11-25 23:25:17 -05:00
bplajzer
1f9c2272e6 add Polish language translation (#1309)
* feat(localization): add Polish language translation

* feat(localization): add Polish language translation
2025-11-24 11:55:47 -05:00
Xe Iaso
b11d8132dd chore: add dependabot cooldown (#1302)
* chore: add dependabot cooldown

One of the things I need to worry about with Anubis is the idea that
someone could pwn a dependency and then get malicious code into prod
without anyone realizing it, à la Jia Tan. Given that Anubis relies on tools like
Dependabot to manage updating dependencies (good for other reasons),
it makes sense to have Dependabot have a 7 day cooldown for new
versions of dependencies.

This follows the advice from Yossarian on their blog at [1]. Thanks
for the post and the easy to copy/paste snippets!

[1]: https://blog.yossarian.net/2025/11/21/We-should-all-be-using-dependency-cooldowns

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: update spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-11-21 19:05:26 +00:00
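The cooldown itself is a small addition to `.github/dependabot.yml`. A sketch, assuming Dependabot's documented `cooldown` / `default-days` keys (verify against the current schema):

```yaml
version: 2
updates:
  - package-ecosystem: "gomod"
    directory: "/"
    schedule:
      interval: "weekly"
    # Wait 7 days after a release before proposing the update.
    cooldown:
      default-days: 7
```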
Xe Iaso
f032d5d0ac feat: writing logs to the filesystem with rotation support (#1299)
* refactor: move lib/policy/config to lib/config

Signed-off-by: Xe Iaso <me@xeiaso.net>

* refactor: don't set global loggers anymore

Ref #864

You were right @kotx, it is a bad idea to set the global logger
instance.

Signed-off-by: Xe Iaso <me@xeiaso.net>

* feat(config): add log sink support

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: update spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(test): go mod tidy

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: update spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs(admin/policies): add logging block documentation

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: update CHANGELOG

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(cmd/anubis): revert this change, it's meant to be its own PR

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: go mod tidy

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test: add file logging smoke test

Assisted-by: GLM 4.6 via Claude Code
Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix: don't expose the old log file time format string

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-11-21 11:46:00 -05:00
Xe Iaso
a709a2b2da ci: add go mod tidy check workflow (#1300)
Assisted-by: GLM 4.6 via Conductor
2025-11-20 16:54:21 +00:00
Lukas Dürrenberger
18d2b4ffff Pass the remote IP to the proxied application (#1298) 2025-11-20 16:32:15 +00:00
Xe Iaso
02989f03d0 feat(store/valkey): Add Redis(R) Sentinel support (#1294)
* feat(internal): add ListOr[T any] type

This is a utility type that lets you decode a JSON T or list of T as a
single value. This will be used with Redis Sentinel config so that you
can specify multiple sentinel addresses.

Ref TecharoHQ/botstopper#24

Assisted-by: GLM 4.6 via Claude Code
Signed-off-by: Xe Iaso <me@xeiaso.net>

* feat(store/valkey): add Redis(R) Sentinel support

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

check-spelling run (pull_request) for Xe/redis-sentinel

Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
on-behalf-of: @check-spelling <check-spelling-bot@check-spelling.dev>

* chore(store/valkey): remove pointless comments

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: document the Redis™ Sentinel configuration options

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(store/valkey): Redis™ Sentinel doesn't require a password

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
2025-11-18 09:55:19 -05:00
Jason Cameron
69e9023cbb docs: clarify usage of PUBLIC_URL and REDIRECT_DOMAINS in installatio… (#1286) 2025-11-17 12:11:34 -05:00
Jason Cameron
1d91bc99f2 fix(ogtags): respect target host/SNI/insecure flags in OG passthrough (#1283) 2025-11-16 21:32:03 -05:00
dependabot[bot]
c70b939651 build(deps-dev): bump esbuild from 0.25.12 to 0.27.0 in the npm group (#1260)
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-11-16 19:13:00 -05:00
Jason Cameron
b5c5e07fc2 test(deps): update dependencies to latest versions (#1289) 2025-11-17 00:09:22 +00:00
dependabot[bot]
26fd86bb9a build(deps): bump github.com/testcontainers/testcontainers-go (#1288)
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-11-16 23:47:41 +00:00
Jason Cameron
0258f6b59c build(deps): bump go deps (#1287) 2025-11-16 23:41:20 +00:00
Jason Cameron
56170e4af5 fix(tests): make CVE-2025-24369 regression deterministic (#1285)
* fix(tests): make CVE-2025-24369 regression deterministic

* fix(tests): stabilize CVE-2025-24369 regression test by using invalid proof
2025-11-16 18:34:36 -05:00
Jason Cameron
9dd4de6f1f perf: apply fieldalignment (#1284) 2025-11-16 20:43:07 +00:00
kouhaidev
da1890380e docs: use nginx http2 directive instead of deprecated http2 listen parameter (#1251)
Acked-by: Jason Cameron <git@jasoncameron.dev>
2025-11-16 06:59:16 +00:00
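For reference, the change is from the `http2` parameter on `listen` (deprecated since nginx 1.25.1) to the standalone `http2` directive:

```nginx
server {
    # Old, deprecated style:
    #   listen 443 ssl http2;
    listen 443 ssl;
    http2 on;
    # ...
}
```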
Henri Vasserman
6c8629e3ac test: Valkey test improvements for testcontainers (#1280)
* test: testcontainers improvements

Use the endpoint feature to get the connection URL for the container.

There are cases where localhost is not the correct one, for example when DOCKER_HOST is set to another machine.

Also, don't specify the external port for the mapping so a random unused port is used, in cases when there is already Valkey/Redis running as a container and port mapped externally on 6379.

* also remove this hack, doesn't seem necessary.
2025-11-15 14:32:37 -05:00
DerRockWolf
f6bf98fa28 feat(internal/headers): extend debug logging of X-Forwarded-For middlewares (#1269) 2025-11-15 14:31:43 -05:00
Jason Cameron
97ba84e26d Fix challenge validation panic when follow-up hits ALLOW (#1278)
* fix(localization): correct formatting of Swedish loading message

* fix(main): correct formatting and improve readability in main.go

* fix(challenge): add difficulty and policy rule hash to challenge metadata

* docs(challenge): fix panic when validating challenges in privacy-mode browsers
2025-11-14 19:51:48 -05:00
Xe Iaso
68fcc0c44f feat(lib): expose WEIGH matches as prometheus metrics (#1277)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-11-14 17:12:59 -05:00
Esteban Gimbernat
6a7f80e6f5 (feat) Add cluster support to redis/valkey store (#1276)
* (feat) Add cluster support to redis/valkey store

* (chore) Update CHANGELOG.md

* (fix) Disable maintenance notification on the Valkey store

* (fix) Valkey text fix and allow maintnotifications in spelling.
2025-11-14 08:22:22 -05:00
Henri Vasserman
a5bb6d2751 test: ipv4 in v6 address checking (#1271)
* test: ipv4 in v6 address checking

* fix(lib/policy): unmap 4in6 addresses in RemoteAddrChecker

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: update CHANGELOG

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: perfect CHANGELOG

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-11-14 03:39:50 +00:00
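The 4-in-6 issue above arises because IPv4 clients behind a dual-stack listener appear as IPv4-mapped IPv6 addresses (`::ffff:a.b.c.d`), which don't match IPv4 CIDR ranges unless unmapped first. With the standard library's `net/netip`, the unmapping is one call (this is a generic sketch, not the actual `RemoteAddrChecker` code):

```go
package main

import "net/netip"

// inRange reports whether remote falls inside cidr, unmapping 4-in-6
// addresses (::ffff:a.b.c.d) first so IPv4 prefixes still match.
func inRange(remote, cidr string) bool {
	addr, err := netip.ParseAddr(remote)
	if err != nil {
		return false
	}
	prefix, err := netip.ParsePrefix(cidr)
	if err != nil {
		return false
	}
	return prefix.Contains(addr.Unmap())
}
```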
kouhaidev
1e298f5d0e fix(run): mark openrc service script as executable (#1272)
Signed-off-by: Kouhai <66407198+kouhaidev@users.noreply.github.com>
2025-11-13 22:14:21 -05:00
Xe Iaso
a4770956a8 fix(docs): use node:lts (#1274)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-11-14 03:14:00 +00:00
Josh Deprez
316905bf1d Add Renovate to Docker clients (#1267)
Renovate-bot looks at the container APIs directly to learn about new image versions and digests. The [default User-Agent](https://docs.renovatebot.com/self-hosted-configuration/#useragent) is `Renovate/${renovateVersion} (https://github.com/renovatebot/renovate)`
2025-11-12 03:22:00 +00:00
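Given that User-Agent, an allow rule in the same shape as the ones shown elsewhere in this log would look something like the following (the rule name and regex anchoring are illustrative, not the shipped rule):

```yaml
name: allow-renovate
action: ALLOW
user_agent_regex: ^Renovate/
```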
dependabot[bot]
1a12171d74 build(deps): bump the github-actions group with 3 updates (#1262)
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-11-09 18:08:06 -08:00
Denys Nykula
4f50d3245e feat(localization): Add Ukrainian language translation (#1044) 2025-11-08 18:46:20 +00:00
Xe Iaso
49c9333359 fix(data): add services folder to embedded filesystem (#1259)
* fix(data): add services folder to embedded filesystem

Also includes a regression test to ensure this does not happen again.

Assisted-By: GLM 4.6 via Claude Code

* docs: update CHANGELOG

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-11-08 18:08:48 +00:00
Xe Iaso
c7e4cd1032 fix(data/docker-client): allow some more OCI clients through (#1258)
* fix(data/docker-client): allow some more OCI clients through

Signed-off-by: Xe Iaso <me@xeiaso.net>

* Update metadata

check-spelling run (pull_request) for Xe/more-docker-client-programs

Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
on-behalf-of: @check-spelling <check-spelling-bot@check-spelling.dev>

* fix(data/docker-client): add containerd

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
2025-11-08 17:50:56 +00:00
Sveinn í Felli
3f81076743 Update is.json (#1241)
Minor spelling and grammar adjustments for Icelandic

Signed-off-by: Sveinn í Felli <sv1@fellsnet.is>
2025-11-08 10:42:03 -05:00
Karorogunso
115f24c33d Add thai language. (#900)
Signed-off-by: Karorogunso <karorogunso@users.noreply.github.com>
2025-11-08 10:41:46 -05:00
Xe Iaso
b836506785 chore: v1.23.1
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-11-07 19:39:07 -05:00
Xe Iaso
cb67c54ac5 ci: add asset build verification workflow (#1254)
* ci: add asset build verification workflow

A CI pass that fails if generated files are out of date.

* chore: npm run assets

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-11-08 00:24:38 +00:00
Xe Iaso
b5ead0a68c fix(data): add ruleset to explicitly allow Docker / OCI clients (#1253)
* fix(data): add ruleset to explicitly allow Docker / OCI clients

Fixes #1252

This is technically a regression as these clients used to work in Anubis
v1.22.0, however it is allowable to make this opt-in as most websites do not
expect to be serving Docker / OCI registry client traffic.

Signed-off-by: Xe Iaso <me@xeiaso.net>

* Update metadata

check-spelling run (pull_request) for Xe/gh-1252/docker-registry-client-fix

Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
on-behalf-of: @check-spelling <check-spelling-bot@check-spelling.dev>

* test(docker-registry): export the right envvars

Signed-off-by: Xe Iaso <me@xeiaso.net>

* ci: add simdjson dependency for homebrew node

Signed-off-by: Xe Iaso <me@xeiaso.net>

* ci: install go/node without homebrew

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test: use right github commit variable

Signed-off-by: Xe Iaso <me@xeiaso.net>

* ci: remove simdjson dependency

Signed-off-by: Xe Iaso <me@xeiaso.net>

* ci: install ko with an action

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: add OCI registry caveat docs

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
2025-11-08 00:17:25 +00:00
dependabot[bot]
df217d61c8 build(deps): bump the gomod group across 1 directory with 18 updates (#1237)
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-11-03 01:24:26 +00:00
dependabot[bot]
cc1d79aec6 build(deps): bump github/codeql-action in the github-actions group (#1239)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-02 20:23:50 -05:00
dependabot[bot]
4d1d7c39eb build(deps-dev): bump the npm group across 1 directory with 3 updates (#1238)
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-11-02 20:14:51 -05:00
Xe Iaso
83a83e9691 feat(blog): a short post on how to file abuse reports (#1230)
* feat(blog): add blogpost on how to file abuse reports

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(blog/abuse-reports): fix some wording to read a bit more professionally

Signed-off-by: Xe Iaso <me@xeiaso.net>

* Update metadata

check-spelling run (push) for Xe/blog/abuse-reports

Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
on-behalf-of: @check-spelling <check-spelling-bot@check-spelling.dev>

* fix(blog/abuse-reports): minor spelling and grammar fixups

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
2025-10-31 13:54:24 -04:00
Xe Iaso
531e1dd7f4 chore(default-config): remove Tencent Cloud block rule (#1227)
Tencent Cloud's abuse team reached out to me recently and asked for this
rule to be removed. Prior attempts to reach out to them to report
abusive traffic have failed, thus leading to this IP space block as a
last resort to try and maintain uptime for systems administrators.

Unfortunately, it's difficult for Tencent's abuse team to take action if
there is a blanket block like this. Let's see if this doesn't cause too
much grief.
2025-10-31 11:20:04 -04:00
Xe Iaso
59f1e36167 fix: SERVE_ROBOTS_TXT works again (#1229)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-10-31 09:08:33 -04:00
Xe Iaso
62c1b80189 chore: tag v1.23.0
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-10-29 20:38:34 -04:00
Xe Iaso
7ed1753fcc fix(lib): close open redirect when in subrequest mode (#1222)
* test(nginx-external-auth): bring up to code standards

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(lib): close open redirect when in subrequest mode

Closes GHSA-cf57-c578-7jvv

Previously Anubis had an open redirect in subrequest auth mode due to an
insufficient fix in GHSA-jhjj-2g64-px7c. This patch adds additional
validation at several steps of the flow to prevent open redirects in
subrequest auth mode, as well as automated testing to prevent this
from occurring in the future.

* docs: update CHANGELOG

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-10-29 16:07:31 -04:00
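A common mitigation for this class of bug is to accept only relative, same-site redirect targets after the challenge completes. This is a generic sketch of that validation, not a copy of Anubis's actual checks:

```go
package main

import (
	"net/url"
	"strings"
)

// safeRedirect reports whether target is safe to redirect to: it must
// parse, be relative (no scheme or host), and not be scheme-relative
// ("//evil.example"), which browsers treat as an absolute URL.
func safeRedirect(target string) bool {
	if target == "" || strings.HasPrefix(target, "//") {
		return false
	}
	u, err := url.Parse(target)
	if err != nil {
		return false
	}
	return u.Scheme == "" && u.Host == ""
}
```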
dependabot[bot]
3dab060bfa build(deps): bump the github-actions group across 1 directory with 6 updates (#1221)
Bumps the github-actions group with 6 updates in the / directory:

| Package | From | To |
| --- | --- | --- |
| [actions/cache](https://github.com/actions/cache) | `4.2.4` | `4.3.0` |
| [docker/login-action](https://github.com/docker/login-action) | `3.5.0` | `3.6.0` |
| [actions/upload-artifact](https://github.com/actions/upload-artifact) | `4.6.2` | `5.0.0` |
| [actions/setup-node](https://github.com/actions/setup-node) | `5.0.0` | `6.0.0` |
| [astral-sh/setup-uv](https://github.com/astral-sh/setup-uv) | `6.7.0` | `7.1.2` |
| [github/codeql-action](https://github.com/github/codeql-action) | `3.30.3` | `4.31.0` |



Updates `actions/cache` from 4.2.4 to 4.3.0
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](0400d5f644...0057852bfa)

Updates `docker/login-action` from 3.5.0 to 3.6.0
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](184bdaa072...5e57cd1181)

Updates `actions/upload-artifact` from 4.6.2 to 5.0.0
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](ea165f8d65...330a01c490)

Updates `actions/setup-node` from 5.0.0 to 6.0.0
- [Release notes](https://github.com/actions/setup-node/releases)
- [Commits](a0853c2454...2028fbc5c2)

Updates `astral-sh/setup-uv` from 6.7.0 to 7.1.2
- [Release notes](https://github.com/astral-sh/setup-uv/releases)
- [Commits](b75a909f75...85856786d1)

Updates `github/codeql-action` from 3.30.3 to 4.31.0
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](192325c861...4e94bd11f7)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-version: 4.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: github-actions
- dependency-name: docker/login-action
  dependency-version: 3.6.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: github-actions
- dependency-name: actions/upload-artifact
  dependency-version: 5.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
- dependency-name: actions/setup-node
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
- dependency-name: astral-sh/setup-uv
  dependency-version: 7.1.2
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
- dependency-name: github/codeql-action
  dependency-version: 4.31.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-26 22:41:24 -04:00
Xe Iaso
ab8b91fc0c chore: v1.23.0-pre2
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-10-26 19:23:16 -04:00
Xe Iaso
168e72088f chore: remove copilot instructions (#1218)
Recent models like GPT-5 have broken these instructions. As such, I
don't think that it's worth having these around anymore. I think that
longer term it may be better to have a policy of having people disclaim
which models they use in commit footers rather than having a "don't use
this tool" policy, which people are just going to work around and
ignore.
2025-10-24 22:42:48 -04:00
Xe Iaso
6b1cd6120f fix!(policy/checker): make List and-like (#1217)
* fix!(policy/checker): make List and-like

This has the potential to break user configs.

Anubis lets you stack multiple checks at once with blocks like this:

```yaml
name: allow-prometheus
action: ALLOW
user_agent_regex: ^prometheus-probe$
remote_addresses:
  - 192.168.2.0/24
```

Previously, this only returned ALLOW if _any one_ of the conditions
matched. This behaviour has changed to only return ALLOW if _all_ of the
conditions match.

I have marked this as a potentially breaking change because I'm
absolutely certain that someone is relying on this behaviour due to
spacebar heating. If this bites you, please let me know ASAP.

Signed-off-by: Xe Iaso <me@xeiaso.net>
Assisted-by: GPT-OSS 120b on local hardware

* fix(policy/checker): more explicit short-circuit

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-10-25 01:25:05 +00:00
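The semantic change above, from any-match to all-match with short-circuiting, can be sketched like this (the `request`/`checker` names are illustrative, not Anubis's API):

```go
package main

// request carries the pre-extracted fields a rule can match on.
type request struct {
	userAgent string
	remoteIP  string
}

// checker is one condition from a stacked rule block.
type checker func(request) bool

// allMatch implements the new and-like semantics: every stacked
// condition must hold, short-circuiting on the first miss.
// (The old behavior returned true if any single condition matched.)
func allMatch(checks []checker, r request) bool {
	for _, c := range checks {
		if !c(r) {
			return false
		}
	}
	return true
}
```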
Peter Bhat Harkins
d7459de941 link to docs site from readme (#1214) 2025-10-24 15:53:11 -04:00
Xe Iaso
c96c229b68 feat(default-config): block tencent cloud by default (#1216)
* feat(default-config): block tencent cloud by default

This is what happens when you don't have an abuse contact.

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: update spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-10-24 19:43:42 +00:00
Xe Iaso
b384ad03cb fix(store/bbolt): remove actorify (#1215)
Closes #1206

This can cause Anubis to have other issues, but at the very least these
issues are at the Anubis level, not the level of your target service,
so it's less bad.
2025-10-24 19:28:58 +00:00
Xe Iaso
a4efcef1c9 docs: point get started button to the per-environment setup docs (#1213)
Thanks untitaker!
2025-10-24 19:19:29 +00:00
Xe Iaso
2fc3765340 chore: tag v1.23.0-pre1
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-10-22 14:35:23 +00:00
Sunniva Løvstad
ec2981bf6f locale: Update Nynorsk translation (#1204)
* Update nn.json

Signed-off-by: Sunniva Løvstad <github@turtle.garden>

* Update nn.json (2)

Signed-off-by: Sunniva Løvstad <github@turtle.garden>

* Change awkward wording

Proof of Work → work-proof, that is confirmation that someone is real through work (the computer works)

Signed-off-by: Sunniva Løvstad <github@turtle.garden>

---------

Signed-off-by: Sunniva Løvstad <github@turtle.garden>
2025-10-22 12:46:46 +00:00
Xe Iaso
e3d3195bf2 Xe/show error state (#1203)
* fix(lib): show error message detail when hitting some common flows

Instead of giving the user nothing to go off of, this patch gives them
an opaque blob of ROT-13 encoded base64. The logic is that if you are
smart enough to figure out how to decode this, you're probably smart
enough to either fix your broken client or give it to the administrator.

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: update CHANGELOG

Signed-off-by: Xe Iaso <me@xeiaso.net>

* Update metadata

check-spelling run (pull_request) for Xe/show-error-state

Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
on-behalf-of: @check-spelling <check-spelling-bot@check-spelling.dev>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
2025-10-21 13:10:27 -04:00
Xe Iaso
25d677cbba fix(algorithms/fast): fix fast challenge on insecure contexts (#1198)
* fix(algorithms/fast): fix fast challenge on insecure contexts

Closes #1192

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: update CHANGELOG

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-10-17 19:32:24 -04:00
Xe Iaso
00261d049e fix(default-config): sometimes browsers don't send Upgrade-Insecure-Requests (#1189)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-10-13 18:31:14 +00:00
Thomas Anderson
a12b4bb755 changed redirect_domains docs (#1171) 2025-10-13 16:21:56 +00:00
Xe Iaso
4dfc73abd1 fix(lib): de-flake package lib tests (#1187)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-10-13 11:50:13 -04:00
Xe Iaso
ffbbdce3da feat: default config macro (#1186)
* feat(data): add default-config macro

Closes #1152

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: update CHANGELOG

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test: add default-config-macro smoke test

This uses an AI generated python script to diff the contents of the bots
field of the default configuration file and the
data/meta/default-config.yaml file. It emits a patch showing what needs
to be changed.

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-10-13 11:33:16 -04:00
Xe Iaso
c09c86778d fix(default-config): remove preact challenge (#1184)
* fix(default-config): remove the preact challenge from the default config

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: update CHANGELOG

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-10-11 09:22:07 -04:00
Xe Iaso
9c47c180d0 fix(default-config): make the default config far less paranoid (#1179)
* test: add httpdebug tool

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(data/clients/git): more strictly match the git client

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(default-config): make the default config far less paranoid

This uses a variety of heuristics to make sure that clients that claim
to be browsers are more likely to behave like browsers. Most of these
are based on the results of a lot of reverse engineering and data
collection from honeypot servers.

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: update CHANGELOG

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: Xe Iaso <xe.iaso@techaro.lol>
2025-10-11 08:48:12 -04:00
Xe Iaso
d51d32726c fix(lib): serve CSS properly (#1158)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-27 22:16:23 -04:00
dependabot[bot]
ff33982ee9 build(deps): bump github.com/docker/docker (#1131)
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 28.3.2+incompatible to 28.3.3+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v28.3.2...v28.3.3)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-version: 28.3.3+incompatible
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-27 14:29:07 -04:00
dependabot[bot]
ec90a8b87d build(deps): bump github.com/ulikunitz/xz from 0.5.12 to 0.5.14 (#1132)
Bumps [github.com/ulikunitz/xz](https://github.com/ulikunitz/xz) from 0.5.12 to 0.5.14.
- [Commits](https://github.com/ulikunitz/xz/compare/v0.5.12...v0.5.14)

---
updated-dependencies:
- dependency-name: github.com/ulikunitz/xz
  dependency-version: 0.5.14
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-27 13:46:23 -04:00
dependabot[bot]
5731477e0a build(deps-dev): bump esbuild from 0.25.9 to 0.25.10 in the npm group (#1147)
Bumps the npm group with 1 update: [esbuild](https://github.com/evanw/esbuild).


Updates `esbuild` from 0.25.9 to 0.25.10
- [Release notes](https://github.com/evanw/esbuild/releases)
- [Changelog](https://github.com/evanw/esbuild/blob/main/CHANGELOG.md)
- [Commits](https://github.com/evanw/esbuild/compare/v0.25.9...v0.25.10)

---
updated-dependencies:
- dependency-name: esbuild
  dependency-version: 0.25.10
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: npm
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-27 13:46:12 -04:00
Xe Iaso
714c85dbc4 fix(lib): enable multiple consecutive slash support (#1155)
* fix(lib): enable multiple consecutive slash support

Closes #754
Closes #808
Closes #815

Apparently more applications use multiple slashes in a row than I
thought. There is no easy way around this other than to do this hacky
fix to avoid net/http#ServeMux's URL cleaning.

* test(double_slash): add sourceware case

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test(lib): fix tests for double slash fix

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <xe.iaso@techaro.lol>
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-27 13:44:46 -04:00
Jamie McClelland
75ea1b60d5 enable auto setting of SNI based on host header (#1129)
With this change, setting targetSNI to 'auto' causes Anubis to
use the request host name as the SNI name, allowing multiple sites
to use the same Anubis instance and same backend while still securely
connecting to the backend via HTTPS.
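A sketch of how the 'auto' SNI selection might work (the `sniFor` helper is illustrative, not the actual implementation):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net"
)

// sniFor resolves the TLS server name for a backend connection. When
// targetSNI is "auto", the request's Host (minus any port) is used, so
// one Anubis instance can front many HTTPS sites.
func sniFor(targetSNI, requestHost string) string {
	if targetSNI != "auto" {
		return targetSNI
	}
	if host, _, err := net.SplitHostPort(requestHost); err == nil {
		return host
	}
	return requestHost // no port component present
}

func main() {
	cfg := &tls.Config{ServerName: sniFor("auto", "git.example.com:443")}
	fmt.Println(cfg.ServerName) // git.example.com
}
```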

See https://github.com/TecharoHQ/anubis/issues/424
2025-09-25 08:08:16 +00:00
violet
1cf03535a5 feat: support reading real client IP from a custom header (#1138)
* feat: support reading real client IP from a custom header

* pr reviews

---------

Co-authored-by: violet <violet@tsukuyomi>
2025-09-25 04:01:24 -04:00
Sunniva Løvstad
c3ed405dbc Update Nynorsk translation (#1143)
* chore: fix capitalisation in bokmål and nynorsk

* stadfest → e-verb

Signed-off-by: Sunniva Løvstad <github@turtle.garden>

---------

Signed-off-by: Sunniva Løvstad <github@turtle.garden>
2025-09-25 04:01:02 -04:00
Xe Iaso
8cdf58c9e6 ci(ssh): re-enable aarch64-16k
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-20 15:30:29 +00:00
Xe Iaso
1c170988c8 fix: mend auth cookie name stutter (#1139)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-19 13:51:11 -04:00
Xe Iaso
9439466ff2 ci(ssh): disable aarch64-16k until my SFP connector comes in on Friday
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-17 16:00:10 +00:00
Richard Mahn
4787aeca51 Add Door43 link to known instances documentation (#1136)
Signed-off-by: Richard Mahn <richmahn@users.noreply.github.com>
2025-09-17 13:11:11 +00:00
Xe Iaso
fb3637df95 feat(metarefresh): randomly use the Refresh header (#1133)
* feat(lib/challenge): expose ResponseWriter to challenge issuers

Signed-off-by: Xe Iaso <me@xeiaso.net>

* feat(metarefresh): randomly use the Refresh header

There are several ways to trigger an automatic refresh without
JavaScript. One of them is the "meta refresh" method[1], but the other
is with the Refresh header[2]. Both are semantically identical and
supported with browsers as old as Chrome version 1.

Given that they are basically the same thing, this patch makes Anubis
randomly select between them by using the challenge random data's first
character. This will fire about 50% of the time.

I expect this to have no impact. If this works out fine, then I will
implement some kind of fallback logic for the fast challenge such that
admins can opt into allowing clients with a no-js configuration to pass
the fast challenge. This needs to bake in the oven though.

[1]: https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/Elements/meta/http-equiv
[2]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Refresh

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: update CHANGELOG

Signed-off-by: Xe Iaso <me@xeiaso.net>

* feat(metarefresh): simplify random logic

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: Xe Iaso <xe.iaso@techaro.lol>
2025-09-16 17:32:13 -04:00
dependabot[bot]
26076b8520 build(deps): bump github.com/docker/docker in /test (#1130)
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 28.3.2+incompatible to 28.3.3+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v28.3.2...v28.3.3)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-version: 28.3.3+incompatible
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-16 16:22:28 -04:00
NetSysFire
edb84f03b7 convert issue templates into issue forms (#1115) 2025-09-16 13:14:10 +00:00
Jan Pieter Waagmeester
b2d525bba4 Update nl.json, replacing the literally translated 'cookie' (koekje) with 'cookie' (#1126)
Signed-off-by: Jan Pieter Waagmeester <jieter@jieter.nl>
2025-09-16 07:53:30 -04:00
dependabot[bot]
00679aed66 build(deps): bump the github-actions group with 3 updates (#1118)
Bumps the github-actions group with 3 updates: [actions-hub/kubectl](https://github.com/actions-hub/kubectl), [astral-sh/setup-uv](https://github.com/astral-sh/setup-uv) and [github/codeql-action](https://github.com/github/codeql-action).


Updates `actions-hub/kubectl` from 1.34.0 to 1.34.1
- [Release notes](https://github.com/actions-hub/kubectl/releases)
- [Commits](af345ed727...f14933a23b)

Updates `astral-sh/setup-uv` from 6.6.1 to 6.7.0
- [Release notes](https://github.com/astral-sh/setup-uv/releases)
- [Commits](557e51de59...b75a909f75)

Updates `github/codeql-action` from 3.30.1 to 3.30.3
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](f1f6e5f6af...192325c861)

---
updated-dependencies:
- dependency-name: actions-hub/kubectl
  dependency-version: 1.34.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: github-actions
- dependency-name: astral-sh/setup-uv
  dependency-version: 6.7.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: github-actions
- dependency-name: github/codeql-action
  dependency-version: 3.30.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-15 18:23:31 -04:00
dependabot[bot]
03299024c5 build(deps): bump the npm group with 2 updates (#1117)
Bumps the npm group with 2 updates: [preact](https://github.com/preactjs/preact) and [postcss-import-url](https://github.com/unlight/postcss-import-url).


Updates `preact` from 10.27.1 to 10.27.2
- [Release notes](https://github.com/preactjs/preact/releases)
- [Commits](https://github.com/preactjs/preact/compare/10.27.1...10.27.2)

Updates `postcss-import-url` from 1.0.0 to 7.2.0
- [Release notes](https://github.com/unlight/postcss-import-url/releases)
- [Changelog](https://github.com/unlight/postcss-import-url/blob/master/CHANGELOG.md)
- [Commits](https://github.com/unlight/postcss-import-url/commits/v7.2.0)

---
updated-dependencies:
- dependency-name: preact
  dependency-version: 10.27.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: npm
- dependency-name: postcss-import-url
  dependency-version: 7.2.0
  dependency-type: direct:development
  update-type: version-update:semver-major
  dependency-group: npm
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-15 18:23:15 -04:00
Anna
f745d37d90 fix(run/openrc): truncate runtime directory before starting Anubis (#1122)
If Anubis is not shut down correctly and there are leftover socket
files, Anubis will refuse to start.

As "checkpath -D" currently does not work as expected
(https://github.com/OpenRC/openrc/issues/335), simply use "rm -rf"
before starting Anubis.

Signed-off-by: Anna @CyberTailor <cyber@sysrq.in>
2025-09-15 07:44:35 -04:00
Xe Iaso
d12993e31d feat(expressions): add contentLength to bot expressions (#1120)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-15 01:41:45 +00:00
Xe Iaso
88b3e457ee docs: update BotStopper docs based on new features
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-14 20:16:43 +00:00
Xe Iaso
bb2b113b63 ci(ssh): don't print uname -av output (#1114)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-14 03:03:46 +00:00
Xe Iaso
6c283d0cd9 ci: add aarch64 for ssh CI (#1112)
* ci: add aarch64 for ssh CI

Signed-off-by: Xe Iaso <me@xeiaso.net>

* ci: better comment aile and t-elos' roles

Signed-off-by: Xe Iaso <me@xeiaso.net>

* ci: fix aile

Signed-off-by: Xe Iaso <me@xeiaso.net>

* ci: update ssh known hosts secret

Signed-off-by: Xe Iaso <me@xeiaso.net>

* ci(ssh): replace raw connection strings with arch-quirks

Signed-off-by: Xe Iaso <me@xeiaso.net>

* ci(ssh): disable this check in PRs again

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-14 00:15:23 +00:00
agoujot
0037e214a1 add link to preact in challenge list (#1111)
Preact was added in 1.22, but it currently isn't listed on the "Challenges" page.

Signed-off-by: agoujot <145840578+agoujot@users.noreply.github.com>
2025-09-13 17:31:36 -04:00
Valentin Lab
29ae2a4b87 feat: fallback to SameSite Lax mode if cookie is not secure (#1105)
Also allows setting the cookie `SameSite` mode on the command line or
via the environment. Note that `None` mode will be forced to `Lax` if
the cookie is not set to be secure.

Signed-off-by: Valentin Lab <valentin.lab@kalysto.org>
2025-09-13 10:56:54 +00:00
Xe Iaso
401e18f29f feat(store/bbolt): implement actor pattern (#1107)
* feat(store/bbolt): implement actor pattern

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs(internal/actorify): document package

Signed-off-by: Xe Iaso <me@xeiaso.net>

* Update metadata

check-spelling run (pull_request) for Xe/actorify

Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
on-behalf-of: @check-spelling <check-spelling-bot@check-spelling.dev>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
2025-09-12 18:35:22 +00:00
Xe Iaso
63591866aa fix(decaymap): fix lock convoy (#1106)
* fix(decaymap): fix lock convoy

Ref #1103

This uses the actor pattern to delay deletion instead of making things
fight over a lock. It also properly fixes locking logic to prevent the
convoy problem.

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: update CHANGELOG

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-12 16:43:08 +00:00
Xe Iaso
f79d36d21e docs: update CHANGELOG properly
It helps if you save your editor buffer!

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-11 14:07:52 +00:00
Xe Iaso
f5b5243b5e docs: update CHANGELOG
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-11 14:04:32 +00:00
Xe Iaso
2011b83a44 chore: port client-side JS to TypeScript (#1100)
* chore(challenge/preact): port to typescript

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(js/algorithms): port to typescript

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(js/worker): port to typescript

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(web): fix TypeScript build logic

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(web): port bench.mjs to typescript

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(web): port main.mjs to typescript

Signed-off-by: Xe Iaso <me@xeiaso.net>

* Update metadata

check-spelling run (pull_request) for Xe/use-typescript

Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
on-behalf-of: @check-spelling <check-spelling-bot@check-spelling.dev>

* fix(js/algorithms/fast): handle old browsers

Closes #1082

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
2025-09-11 10:03:10 -04:00
Martin
8ed89a6c6e feat(lib): Add option for adding difficulty field to JWT claims (#1063)
* Add option for difficulty JWT field

* Add DIFFICULTY_IN_JWT option to docs

* Add missing_required_forwarded_headers to lt translation via Google Translate

* docs(CHANGELOG): move CHANGELOG entry to the top

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-09-11 13:50:33 +00:00
Xe Iaso
9430d0e6a5 fix(cmd/containerbuild): support commas in --docker-tags (#1099)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-08 22:19:42 +00:00
Xe Iaso
8b9dafac51 security: npm audit fix for GHSA-hfm8-9jrf-7g9w et al. (#1098)
* security: npm audit fix for GHSA-hfm8-9jrf-7g9w et al.

Closes #1097

I'm not sure that this is required, but I'd sleep better at night not
finding out that it is required the hard way.

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: bump postcss version

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-08 14:17:59 -04:00
dependabot[bot]
9997130a7c build(deps): bump the github-actions group with 4 updates (#1093)
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-09-07 22:01:27 -04:00
Jason Cameron
e239083944 docs: add reminder for verified signatures in PR template (#1092) 2025-09-07 16:15:26 -04:00
Jason Cameron
abf6c8de57 feat: Warn on missing signing keys when persisting challenges (#1088) 2025-09-07 15:43:58 -04:00
Xe Iaso
7e1b5d9951 fix: demote temporal assurance checks
* fix(challenge): demote temporal assurance to 80% instead of 95%

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(challenge/preact): wait a little longer to be extra safe

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(challenge/metarefresh): wait a little longer to be extra safe

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs(CHANGELOG): add fix notes

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-07 16:10:54 +00:00
Xe Iaso
98945fb56f feat(lib/store): add s3api storage backend (#1089)
* feat(lib/store): add s3api storage backend

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs(store/s3api): replace fake S3 API keys with the bee movie script

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs(store/s3api): fix spelling sin

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(store/s3api): remove vestigial experiment

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(store/s3api): support IsPersistent call

Ref #1088

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(test): go mod tidy

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-07 09:24:14 -04:00
Jason Cameron
82099d9e05 fix(robots2policy): handle multiple user agents under one block (#925) 2025-09-06 22:35:19 -04:00
dependabot[bot]
87c2f1e0e6 build(deps): bump the github-actions group across 1 directory with 8 updates (#1071)
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-09-06 22:30:43 -04:00
Jason Cameron
f0199d014f docs: document some missing env vars (#1087) 2025-09-07 01:34:42 +00:00
Jason Cameron
75109f6b73 docs(installation): add SLOG_LEVEL environment variable to configuration (#1086)
* docs(installation): add SLOG_LEVEL environment variable to configuration
2025-09-06 20:59:02 -04:00
Xe Iaso
c43d7ca686 docs(botstopper): add HTML templating support
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-06 23:42:23 +00:00
Xe Iaso
5d5c39e123 chore: v1.22.0
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-06 11:54:36 -04:00
Xe Iaso
d35e47c655 feat: glob matching for redirect domains (#1084)
* feat: glob matching for redirect domains

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: update CHANGELOG

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-06 15:46:18 +00:00
Xe Iaso
48b49a0190 docs(CHANGELOG): add changelog entry for v1.22.0
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-05 22:42:08 +00:00
Xe Iaso
de94139789 test: ensure FORCED_LANGUAGE works (#1083)
Closes #1077
2025-09-05 22:07:17 +00:00
Rimas Kudelis
fd011d19e2 Updates to lt.json (#1075)
Minor improvements to Lithuanian strings

Signed-off-by: Rimas Kudelis <rimas@kudelis.lt>
2025-09-03 20:07:46 -04:00
Xe Iaso
489abb6b4d chore: release v1.22.0-pre2
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-02 21:31:17 -04:00
Xe Iaso
8da0771647 chore: break AI agents in this code tree (#1065)
Update metadata

check-spelling run (pull_request) for Xe/anti-assistant


on-behalf-of: @check-spelling <check-spelling-bot@check-spelling.dev>

Update metadata

check-spelling run (pull_request) for Xe/anti-assistant


on-behalf-of: @check-spelling <check-spelling-bot@check-spelling.dev>

chore: fix package builds

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-09-02 10:11:01 -04:00
Xe Iaso
f0bcbe43af ci: fix tests (#1069)
* fix(localization): fix ci

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(test): fix CI

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(test): fix CI?

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(test): fix CI??

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-08-31 08:13:00 -04:00
Xe Iaso
f6e077c907 fix(challenge/metarefresh): ensure that clients have waited long enough (#1068)
Some admins have noticed that clients are not waiting the right amount
of time before accessing a resource protected by the metarefresh
challenge. This patch adds a check to make sure that clients have waited
at least 95% (difficulty times 950 milliseconds instead of difficulty
times 1000 milliseconds) of the time they should.

If this scales, maybe time is the best way to go for Anubis in the near
future instead of anything else computational.

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-08-31 07:51:54 -04:00
/har/per
2704ba95d0 feat(localization): Add Vietnamese translation (#926)
* feat(localization): Add Vietnamese translation

* feat(localization): Add Vietnamese language translation

* feat(localization): Add record to CHANGELOG.md

* feat(localization): Add test case for Vietnamese
2025-08-30 00:23:02 -04:00
Xe Iaso
f6a578787f chore(docs): adjust anubis rules
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-08-29 23:04:32 +00:00
Xe Iaso
31a654ecb6 chore: introduce issue templates (#939)
* chore: introduce issue templates

* chore: fix spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-08-29 20:41:39 +00:00
Xe Iaso
1a4b5cadcb chore: fix spelling
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-08-29 20:31:20 +00:00
Rimas Kudelis
d5cdd21631 feat(localization): add Lithuanian locale (#998) 2025-08-29 16:29:57 -04:00
Xe Iaso
0e0847cbeb feat: add 'proof of React' challenge (#1038)
* feat: add 'proof of React' challenge

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(challenge/preact): use JSX fragments

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(challenge/preact): ensure that the client waits as long as it needs to

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: fix spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(challenges/xeact): add noscript warning

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(challenges/xeact): add default loading message

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(challenges/xeact): make a UI render without JS

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(challenges/xeact): use %s here, not %w

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(test/healthcheck): run asset build

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(challenge/preact): fix build in ci

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: Xe Iaso <xe.iaso@techaro.lol>
2025-08-29 16:09:27 -04:00
Xe Iaso
00afa72c4b fix(blog/cpu-core-odd): make the diagram look decent in light mode
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-08-29 19:54:22 +00:00
Xe Iaso
eb50f59351 docs(changelog): fix mis-paste
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-08-29 19:54:02 +00:00
Skyler Mäntysaari
01f55cf552 internal/log: Implement logging of HOST when using subrequest auth (#1027)
* internal/log: Implement logging of HOST when using subrequest auth

The host header wouldn't be set on subrequest auth, so we need to look at the X-Forwarded-Host header when logging requests.

* chore: add changelog entry

---------

Signed-off-by: Xe Iaso <xe.iaso@techaro.lol>
Co-authored-by: Xe Iaso <xe.iaso@techaro.lol>
2025-08-29 19:04:33 +00:00
OwN-3m-All
99bd06b8c3 Update nginx.mdx - needs port_in_redirect off setting (#1018)
* Update nginx.mdx - needs port_in_redirect off setting

Signed-off-by: OwN-3m-All <own3mall@gmail.com>

* Update metadata

check-spelling run (pull_request) for patch-1

Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
on-behalf-of: @check-spelling <check-spelling-bot@check-spelling.dev>

---------

Signed-off-by: OwN-3m-All <own3mall@gmail.com>
Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-08-29 19:03:08 +00:00
TinyServal
d6f1f24e1b docs: document client IP headers and interop with cloudflare (#1034) 2025-08-29 14:54:03 -04:00
Eric Hameleers
6a5485fde9 Alienbob: add Slackware URLs that are now protected by Anubis (#1051)
* Update known-instances.md with Slackware git servers

Signed-off-by: Eric Hameleers <alien@slackware.com>

* Update CHANGELOG.md with Slackware git servers being protected by Anubis

Signed-off-by: Eric Hameleers <alien@slackware.com>

---------

Signed-off-by: Eric Hameleers <alien@slackware.com>
Signed-off-by: Xe Iaso <xe.iaso@techaro.lol>
Co-authored-by: Xe Iaso <xe.iaso@techaro.lol>
2025-08-29 14:24:04 +00:00
Alex Samorukov
582181f9b9 Allow disabling keep-alive for targets that don't support it properly (#1049)
* Allow disabling keep-alive for targets that don't support it properly

* Add changelog entry
2025-08-29 10:17:03 -04:00
Chris
44264981b5 Fix broken docs link (#1059)
Fixes a broken docs link

Signed-off-by: Chris <398094+phuzion@users.noreply.github.com>
2025-08-28 11:28:25 -03:00
Xe Iaso
21c3e0c469 docs(blog): add post about the odd CPU core count bug (#1058)
* docs(blog): add post about the odd CPU core count bug

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-08-28 09:32:04 -04:00
phoval
9ddc1eb840 fix: middleware traefik redirect url (#1040) 2025-08-28 07:24:29 -04:00
Xe Iaso
c661bc37d1 fix(worker): constrain nonce value to be a whole integer (#1045)
* fix(worker): constrain nonce value to be a whole integer

Closes #1043

Sometimes the worker could get into a strange state where it has a
decimal nonce, but the server assumes that the nonce can only be a whole
number. This patch constrains the nonce to be a whole number on the
worker end by detecting if the nonce is a decimal number and then
truncating away the decimal portion.
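The actual fix lives in the TypeScript worker, but the truncation is equivalent to this Go sketch (the `wholeNonce` name is illustrative):

```go
package main

import (
	"fmt"
	"math"
)

// wholeNonce discards any fractional part of a nonce, since the server
// only accepts whole-number nonces. Truncation, not rounding, so the
// client never overshoots the value it actually hashed.
func wholeNonce(n float64) uint64 {
	return uint64(math.Trunc(n))
}

func main() {
	fmt.Println(wholeNonce(42.9)) // 42: fractional part is discarded
	fmt.Println(wholeNonce(42.0)) // 42
}
```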

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(algorithms/fast): truncate decimal place on number of threads

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-08-26 14:05:03 -04:00
dependabot[bot]
fb8ce508ee build(deps-dev): bump the npm group across 1 directory with 3 updates (#1032)
Bumps the npm group with 3 updates in the / directory: [cssnano](https://github.com/cssnano/cssnano), [cssnano-preset-advanced](https://github.com/cssnano/cssnano) and [esbuild](https://github.com/evanw/esbuild).


Updates `cssnano` from 7.1.0 to 7.1.1
- [Release notes](https://github.com/cssnano/cssnano/releases)
- [Commits](https://github.com/cssnano/cssnano/compare/cssnano@7.1.0...cssnano@7.1.1)

Updates `cssnano-preset-advanced` from 7.0.8 to 7.0.9
- [Release notes](https://github.com/cssnano/cssnano/releases)
- [Commits](https://github.com/cssnano/cssnano/compare/cssnano-preset-advanced@7.0.8...cssnano-preset-advanced@7.0.9)

Updates `esbuild` from 0.25.8 to 0.25.9
- [Release notes](https://github.com/evanw/esbuild/releases)
- [Changelog](https://github.com/evanw/esbuild/blob/main/CHANGELOG.md)
- [Commits](https://github.com/evanw/esbuild/compare/v0.25.8...v0.25.9)

---
updated-dependencies:
- dependency-name: cssnano
  dependency-version: 7.1.1
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: npm
- dependency-name: cssnano-preset-advanced
  dependency-version: 7.0.9
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: npm
- dependency-name: esbuild
  dependency-version: 0.25.9
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: npm
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-25 00:33:44 +00:00
Jason Cameron
573b0079fb Update metadata (#1031) 2025-08-24 21:40:12 +00:00
Skyler Mäntysaari
d1d631a18a lib/checker: Implement X-Original-URI support (#1015) 2025-08-23 23:14:37 -04:00
Timo Tijhof
f3cd6c9ca4 docs: fix "stored" typo in CHANGELOG.md (#1008)
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-08-24 03:12:08 +00:00
Brad Parbs
23772fd3cb s/Wordpress/WordPress in docs (#1020)
Signed-off-by: Brad Parbs <brad@bradparbs.com>
2025-08-24 02:52:09 +00:00
Xe Iaso
a7a61690fc chore: commit for v1.22.0-pre1
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-08-23 22:39:43 -04:00
Xe Iaso
f5afe8b6c8 chore: release v1.22.0-pre1
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-08-24 02:38:54 +00:00
Julian Krieger
61682e4987 Update installation.mdx to include a link to the Caddy docs (#993)
* Update installation.mdx to include a link to the Caddy docs

Signed-off-by: Julian Krieger <julian.krieger@hm.edu>

* Update CHANGELOG.md to include documentation changes

Signed-off-by: Julian Krieger <julian.krieger@hm.edu>

---------

Signed-off-by: Julian Krieger <julian.krieger@hm.edu>
2025-08-20 23:02:49 +00:00
Xe Iaso
b0fa256e3e fix(default-config): also block alibaba cloud (#1005)
* fix(default-config): also block alibaba cloud

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-08-20 23:01:49 +00:00
Xe Iaso
ee55d857eb fix(default-config): block Huawei Cloud (#1004)
* fix(default-config): block Huawei Cloud

Closes #978

Huawei Cloud has been egregious about its scraping. All attempts to
contact their abuse team have failed. If you work for Huawei Cloud,
please raise this issue internally and get the scraping to just stop.

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-08-20 22:40:07 +00:00
Xe Iaso
993ea8da1b chore: copy SECURITY.md from TecharoHQ/.github
This hopefully makes the security policy harder to miss.

Signed-off-by: Xe Iaso <xe.iaso@techaro.lol>
2025-08-20 12:42:02 -04:00
Xe Iaso
6e4e471792 fix(lib): ensure issued challenges don't get double-spent (#1003)
* fix(lib): ensure issued challenges don't get double-spent

Closes #1002

TL;DR: challenge IDs were not validated at time of token issuance. A
dedicated attacker could solve a challenge once and reuse it across
multiple sessions in order to mint additional tokens.

With the advent of store-based challenge issuance in #749, these
challenge IDs are only good for 30 minutes. Websites using the most
recent version of Anubis have limited exposure to this problem.

Websites using older versions of Anubis have much greater exposure to
this problem and are encouraged to keep this software updated as
frequently as possible.
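A minimal sketch of the one-shot semantics described above (illustrative JavaScript with invented names; the actual fix lives in Anubis's Go code):

```javascript
// Illustrative sketch: a challenge ID is validated and consumed
// atomically at token issuance, so solving a challenge once can only
// ever mint a single token.
const issuedChallenges = new Map(); // challenge ID -> expiry (ms since epoch)

function issueChallenge(id, ttlMs) {
  issuedChallenges.set(id, Date.now() + ttlMs);
}

function redeemChallenge(id) {
  const expiry = issuedChallenges.get(id);
  if (expiry === undefined || Date.now() > expiry) {
    return false; // unknown, already spent, or expired
  }
  issuedChallenges.delete(id); // consume: a second redemption fails
  return true;
}
```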

* docs: update CHANGELOG

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-08-20 12:33:32 -04:00
Xe Iaso
e8dfff6350 feat(blog): add short funding update post (#994)
* feat(blog): add short funding update post

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-08-18 08:42:27 -04:00
Dryusdan
237a6a98e2 Bump ai.robots.txt to v1.39 (#982) 2025-08-18 06:52:23 -04:00
Xe Iaso
e43999f30c chore: add Liberapay
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-08-16 03:01:59 +00:00
Martin
29d038835f feat(web): Add option for customizable explanation text (#747)
* Add option for customizable explanation text

* Add changes to CHANGELOG.md

* Replace custom explanation text in favor of static simplified text

Also includes translations for the simple_explanation using Google
Translate as a placeholder so tests pass.

---------

Signed-off-by: Xe Iaso <xe.iaso@techaro.lol>
Co-authored-by: Xe Iaso <xe.iaso@techaro.lol>
2025-08-14 11:12:55 -04:00
Xe Iaso
39215457e4 fix(locales): remove the word "hack" from the description of Anubis (#973)
This was causing confusion and less technical users were thinking that
websites had been intruded upon, causing them to send me horrible things
over email.

All non-English strings were amended using Google Translate. Please fix
the localization as appropriate.
2025-08-14 01:15:28 +00:00
Martin
ff691dfee8 feat(lib): Add optional restrictions for JWT based on a specific header value (#697)
* Add JWTRestrictionHeader functionality

* Add JWTRestrictionHeader to docs

* Move JWT_RESTRICTION_HEADER from advanced section to normal one

* Add pull request URL to Changelog

* Set default value of JWT_RESTRICTION_HEADER to X-Real-IP
2025-08-13 23:27:42 +00:00
Mathieu Lu
83503525f2 Update known-instances.md: add lab.civicrm.org (#971)
Signed-off-by: Mathieu Lu <mathieu@civicrm.org>
2025-08-13 19:32:29 +00:00
phoval
a8b7b2ad7b feat: support HTTP redirect for forward authentication middleware in Traefik (#368)
* feat: support HTTP redirect for forward authentication middleware in Traefik

* fix(docs): fix my terrible merge 

Signed-off-by: Jason Cameron <jasoncameron.all@gmail.com>

* chore: fix typo in docs

Signed-off-by: Jason Cameron <jasoncameron.all@gmail.com>

* fix(ci): add forwardauth

Signed-off-by: Jason Cameron <jasoncameron.all@gmail.com>

* chore: improve doc, target must be a space

* chore: changelog

* fix: validate X-Forwarded headers and check redirect domain

* chore: refactor error handling

* fix(doc): cookie traefik

* fix: tests merge

* Update docs/docs/admin/environments/traefik.mdx

Co-authored-by: Henri Vasserman <henv@hot.ee>
Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Jason Cameron <git@jasoncameron.dev>
Signed-off-by: Jason Cameron <jasoncameron.all@gmail.com>
Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
Co-authored-by: Jason Cameron <jasoncameron.all@gmail.com>
Co-authored-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Henri Vasserman <henv@hot.ee>
2025-08-12 20:59:45 -04:00
Elliot Speck
87651f9506 default pattern fixes (#963)
* feat(checker): allow png/gif/jpg/jpeg/svg favicons as well as ico

* changelog: add updates to keep-internet-working.yaml

* fix(checker): tighten default regex patterns for well-known files

* changelog: add updates to regular expression patterns in keep-internet-working.yaml

---------

Signed-off-by: Elliot Speck <11192354+arcayr@users.noreply.github.com>
2025-08-09 07:40:33 -04:00
Elliot Speck
100005ce70 feat(checker): allow png/gif/jpg/jpeg/svg favicons as well as ico (#961)
* feat(checker): allow png/gif/jpg/jpeg/svg favicons as well as ico

* changelog: add updates to keep-internet-working.yaml

---------

Signed-off-by: Xe Iaso <xe.iaso@techaro.lol>
Co-authored-by: Xe Iaso <xe.iaso@techaro.lol>
2025-08-08 16:53:23 +00:00
Medvidek77
0a68415c2e fix(localization): Improve Czech language translation (#895)
* fix(localization): Improve Czech language translation

Improved naturalness and flow of several phrases. Corrected typos and punctuation. Completed one previously unfinished sentence.

Signed-off-by: Medvidek77 <medvidek77@centrum.cz>

* Update cs.json

Signed-off-by: Medvidek77 <medvidek77@centrum.cz>

---------

Signed-off-by: Medvidek77 <medvidek77@centrum.cz>
2025-08-08 12:50:23 -04:00
SecularSteve
b3886752a1 Added Dutch translation (#937)
* Added Dutch translation

Signed-off-by: SecularSteve <33793273+SecularSteve@users.noreply.github.com>

* Added Dutch translation

Signed-off-by: SecularSteve <33793273+SecularSteve@users.noreply.github.com>

* Added Dutch translation

Signed-off-by: SecularSteve <33793273+SecularSteve@users.noreply.github.com>

* Update lib/localization/locales/nl.json

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: SecularSteve <33793273+SecularSteve@users.noreply.github.com>
Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-08-08 12:49:49 -04:00
Sunniva Løvstad
0e9f831201 chore: fix capitalisation in bokmål and nynorsk (#959) 2025-08-08 12:48:07 -04:00
Xe Iaso
22ee227f20 fix(anubis): use global cookie prefix variable
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-08-07 13:51:18 +00:00
Jason Cameron
adda60c163 Revert "build(deps): bump the github-actions group with 2 updates (#952)" (#962) 2025-08-06 03:01:25 +00:00
dependabot[bot]
e0a15bf4dc build(deps): bump the github-actions group with 2 updates (#952)
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-08-05 22:45:07 -04:00
Xe Iaso
f6481b81a2 fix(web): embed challenge ID in pass-challenge invocations (#944)
* refactor: make challenge pages return the challenge component

This means that challenge pages will return only the little bit that
actually matters, not the entire component.

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(web): move Anubis version info to be implicitly in the footer

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(web): embed challenge ID into generated pages

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(lib): make tests pass

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test(lib/policy/config): amend tests

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test(lib): fix tests again

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: Xe Iaso <xe.iaso@techaro.lol>
2025-08-04 18:49:19 +00:00
Xe Iaso
790bcbe773 fix(internal): silence unsolicited response log lines (#950)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-08-03 19:08:23 +00:00
Xe Iaso
7c80c23e90 docs: remove JSON examples from policy file docs (#945)
* docs: remove JSON examples from policy file docs

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(lib): remove mentions of botPolicies.json in the tests

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: update link to challenge methods

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: unbreak links to the challenges category

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-08-03 18:09:26 +00:00
axell
2d8e942377 Add Swedish locale (#913)
* add Swedish locale

* added to changelog

* add to TestLocalizationService

* build(deps): bump brace-expansion from 1.1.11 to 1.1.12 in /docs (#909)

Bumps [brace-expansion](https://github.com/juliangruber/brace-expansion) from 1.1.11 to 1.1.12.
- [Release notes](https://github.com/juliangruber/brace-expansion/releases)
- [Commits](https://github.com/juliangruber/brace-expansion/compare/1.1.11...v1.1.12)

---
updated-dependencies:
- dependency-name: brace-expansion
  dependency-version: 1.1.12
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* add locale (signed this time, hopefully)

* Update sv.json

Co-authored-by: David Marby <david@dmarby.se>
Signed-off-by: axel <mail@axell.me>

* Update sv.json

Co-authored-by: David Marby <david@dmarby.se>
Signed-off-by: axel <mail@axell.me>

* Update localization_test.go

Co-authored-by: Jonathan Herlin <Jonte@jherlin.se>
Signed-off-by: axel <mail@axell.me>

* Update sv.json

Co-authored-by: Jonathan Herlin <Jonte@jherlin.se>
Signed-off-by: axel <mail@axell.me>

* Update sv.json

Co-authored-by: Jonathan Herlin <Jonte@jherlin.se>
Signed-off-by: axel <mail@axell.me>

* Update sv.json

Co-authored-by: Jonathan Herlin <Jonte@jherlin.se>
Signed-off-by: axel <mail@axell.me>

* Update sv.json

Co-authored-by: Jonathan Herlin <Jonte@jherlin.se>
Signed-off-by: axel <mail@axell.me>

* Update sv.json

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: axel <mail@axell.me>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: David Marby <david@dmarby.se>
Co-authored-by: Jonathan Herlin <Jonte@jherlin.se>
2025-08-02 22:17:31 -04:00
Xe Iaso
d5f01dbdb9 fix(web/sha256-browserjs): fix function name (#943)
* fix(web/sha256-browserjs): fix function name

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: update changelog

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-08-02 16:05:48 +00:00
lillian-b
70bf58cc63 Add HackLab.TO to known instances (#936)
* Add HackLab.TO to known instances

Signed-off-by: lillian-b <146143737+lillian-b@users.noreply.github.com>

* fix?

Signed-off-by: lillian-b <146143737+lillian-b@users.noreply.github.com>

---------

Signed-off-by: lillian-b <146143737+lillian-b@users.noreply.github.com>
2025-08-02 15:30:34 +00:00
Xe Iaso
0dccf2e009 refactor(web): redo proof of work web worker logic (#941)
* chore(web/js): delete proof-of-work-slow.mjs

This code has served its purpose and now needs to be retired to the
great beyond. There is no replacement for this; the fast implementation
will be used instead.

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(web): handle building multiple JS entrypoints and web workers

Signed-off-by: Xe Iaso <me@xeiaso.net>

* feat(web): rewrite frontend worker handling

This completely rewrites how the proof of work challenge works based on
feedback from browser engine developers and starts the process of making
the proof of work function easier to change out.

- Import @aws-crypto/sha256-js to use in Firefox as its implementation
  of WebCrypto doesn't jump directly from highly optimized browser
  internals to JIT-ed JavaScript like Chrome's seems to.
- Move the worker code to `web/js/worker/*` with each worker named after
  the hashing method and hash method implementation it uses.
- Update bench.mjs to import algorithms the new way.
- Delete video.mjs, it was part of a legacy experiment that I never had
  time to finish.
- Update LibreJS comment to add info about the use of
  @aws-crypto/sha256-js.
- Also update my email to my @techaro.lol address.

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(web): don't hard dep webcrypto anymore

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(lib/policy): start the deprecation process for slow

This mostly adds a warning, but the "slow" method is in the process of
being removed. Warn admins with slog.Warn.

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: update CHANGELOG

Signed-off-by: Xe Iaso <me@xeiaso.net>

* feat(web/js): allow running Anubis in non-secure contexts

Signed-off-by: Xe Iaso <me@xeiaso.net>

* Update metadata

check-spelling run (pull_request) for Xe/purge-slow

Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
on-behalf-of: @check-spelling <check-spelling-bot@check-spelling.dev>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
2025-08-02 11:27:26 -04:00
Xe Iaso
8d08de6d9c fix: allow social preview images (#934)
* feat(ogtags): when encountering opengraph URLs, add them to an allow cache

Signed-off-by: Xe Iaso <me@xeiaso.net>

* feat(lib): automatically allow any urls in the ogtags allow cache

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: update CHANGELOG

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs(changelog): remove this bit to make it its own PR

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test(palemoon): add 180 second timeout

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test(palemoon): actually invoke timeout

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-31 08:44:49 -04:00
Xe Iaso
1f7fcf938b fix(lib): add the ability to set a custom slog Logger (#915)
* fix(lib): add the ability to set a custom slog Logger

Closes #864

* test(lib): amend s.check usage

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: Xe Iaso <xe.iaso@techaro.lol>
2025-07-31 08:06:35 -04:00
Emir SARI
6ae386a11a fix: polish Turkish translations (#897)
* Polish Turkish translations

* Update tr.json

Co-authored-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: Emir SARI <emir_sari@icloud.com>

* Update tr.json

Co-authored-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: Emir SARI <emir_sari@icloud.com>

* Update tr.json

Co-authored-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: Emir SARI <emir_sari@icloud.com>

* Try to make “From” sound better

Signed-off-by: Emir SARI <emir_sari@icloud.com>

---------

Signed-off-by: Emir SARI <emir_sari@icloud.com>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-07-31 07:33:16 -04:00
Sveinn í Felli
963527fb60 Update is.json (#935)
Just one new string.

Signed-off-by: Sveinn í Felli <sv1@fellsnet.is>
2025-07-30 12:08:27 -04:00
Xe Iaso
b81c577106 chore(docs/anubis-cfg): update contact email
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-29 15:38:08 +00:00
Xe Iaso
987c1d7410 chore(go.mod): depend on at least go 1.24.2
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-29 04:06:16 +00:00
Saterfield990
826433e8be build(deps): bump the gomod group (#931)
* build(deps): bump the gomod group

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: npm run assets

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-07-28 23:47:18 -04:00
Xe Iaso
4a4031450c fix(anubis): store the challenge method in the store (#924)
* fix(lib): reduce challenge string size

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(internal): add host, method, and path to request logs

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(anubis): log when challenges explicitly fail

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(lib): make challenge validation fully deterministic

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(anubis): nuke challengeFor function

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: update changelog

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-28 10:57:50 -04:00
dependabot[bot]
8feacc78fc build(deps): bump the github-actions group with 2 updates (#929)
Bumps the github-actions group with 2 updates: [astral-sh/setup-uv](https://github.com/astral-sh/setup-uv) and [github/codeql-action](https://github.com/github/codeql-action).


Updates `astral-sh/setup-uv` from 6.4.1 to 6.4.3
- [Release notes](https://github.com/astral-sh/setup-uv/releases)
- [Commits](7edac99f96...e92bafb625)

Updates `github/codeql-action` from 3.29.2 to 3.29.4
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](181d5eefc2...4e828ff8d4)

---
updated-dependencies:
- dependency-name: astral-sh/setup-uv
  dependency-version: 6.4.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: github-actions
- dependency-name: github/codeql-action
  dependency-version: 3.29.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Jason Cameron  <git@jasoncameron.dev>
2025-07-27 22:47:21 -04:00
Xe Iaso
bca2e87e80 feat(default-rules): add weight to Custom-AsyncHttpClient (#914)
Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: Xe Iaso <xe.iaso@techaro.lol>
2025-07-27 00:41:43 +00:00
Xe Iaso
a735770c93 feat(expressions): add segments function to break path into segments (#916)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-25 16:21:08 -04:00
Xe Iaso
bf42014ac3 test: add automated Pale Moon tests (#903)
* test: start work on Pale Moon tests

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test(palemoon): rewrite to use ci-images

Signed-off-by: Xe Iaso <me@xeiaso.net>

* ci: enable palemoon tests

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test(palemoon): add some variables

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix: disable tmate

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test(palemoon): disable i386 for now

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-25 11:42:08 -04:00
dependabot[bot]
0ef3461816 build(deps): bump brace-expansion from 1.1.11 to 1.1.12 in /docs (#909)
Bumps [brace-expansion](https://github.com/juliangruber/brace-expansion) from 1.1.11 to 1.1.12.
- [Release notes](https://github.com/juliangruber/brace-expansion/releases)
- [Commits](https://github.com/juliangruber/brace-expansion/compare/1.1.11...v1.1.12)

---
updated-dependencies:
- dependency-name: brace-expansion
  dependency-version: 1.1.12
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-25 11:18:27 -04:00
Xe Iaso
7d7028d25c test(lib): add a test for the X-Forwarded-For middleware (#912)
Previously the X-Forwarded-For middleware could return two commas in a
row. This is a regression test to make sure that doesn't happen again.

Imports a patch previously exclusive to Botstopper.
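The invariant the regression test protects can be sketched like this (illustrative JavaScript with invented names; the actual middleware is Go):

```javascript
// Illustrative sketch: when appending the client IP to an existing
// X-Forwarded-For value, empty segments are dropped so the resulting
// header can never contain two commas in a row.
function appendXForwardedFor(existing, clientIP) {
  const parts = (existing || "")
    .split(",")
    .map((p) => p.trim())
    .filter((p) => p.length > 0); // drop empty segments like ",,"
  parts.push(clientIP);
  return parts.join(", ");
}
```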

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-25 10:58:41 -04:00
Xe Iaso
9affd2edf4 chore: expose thoth in lib (#911)
Imports a patch previously exclusive to Botstopper.

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-25 10:58:30 -04:00
dependabot[bot]
26b6d8a91a build(deps): bump on-headers and compression in /docs (#910)
Bumps [on-headers](https://github.com/jshttp/on-headers) and [compression](https://github.com/expressjs/compression). These dependencies needed to be updated together.

Updates `on-headers` from 1.0.2 to 1.1.0
- [Release notes](https://github.com/jshttp/on-headers/releases)
- [Changelog](https://github.com/jshttp/on-headers/blob/master/HISTORY.md)
- [Commits](https://github.com/jshttp/on-headers/compare/v1.0.2...v1.1.0)

Updates `compression` from 1.8.0 to 1.8.1
- [Release notes](https://github.com/expressjs/compression/releases)
- [Changelog](https://github.com/expressjs/compression/blob/master/HISTORY.md)
- [Commits](https://github.com/expressjs/compression/compare/1.8.0...v1.8.1)

---
updated-dependencies:
- dependency-name: on-headers
  dependency-version: 1.1.0
  dependency-type: indirect
- dependency-name: compression
  dependency-version: 1.8.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-25 10:53:28 -04:00
Xe Iaso
958992a69a chore: release v1.21.3
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-25 10:30:44 -04:00
Xe Iaso
221d9f2072 fix(web): make the try again button always go back to / (#907)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-25 14:25:04 +00:00
Xe Iaso
bb434a3351 fix(lib): add comprehensive XSS protection logic (#905)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-24 11:24:58 -04:00
Xe Iaso
45ff8f526e fix(lib): add additional validation logic for XSS protection
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-24 14:57:58 +00:00
Xe Iaso
5700512da5 chore: release v1.21.2
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-24 10:47:32 -04:00
Xe Iaso
d40e9056bc fix(lib): block XSS attacks via nonstandard URLs (#904)
* fix(lib): block XSS attacks via nonstandard URLs

This could allow an attacker to craft an Anubis pass-challenge URL that
forces a redirect to nonstandard URLs, such as the `javascript:` scheme
which executes arbitrary JavaScript code in a browser context when the
user clicks the "Try again" button.
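A minimal scheme check along the lines the message describes might look like this (illustrative JavaScript with an invented name; the actual validation is in Anubis's Go code):

```javascript
// Illustrative sketch: only relative paths and http(s) URLs are
// acceptable redirect targets, so a crafted `javascript:` URL is
// rejected before the browser can ever execute it.
function safeRedirectTarget(raw) {
  try {
    // Resolve against a dummy origin; relative paths stay on that origin.
    const u = new URL(raw, "http://example.invalid");
    return u.protocol === "http:" || u.protocol === "https:";
  } catch {
    return false; // unparseable input is rejected outright
  }
}
```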

Release-status: cut
Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-24 14:05:00 +00:00
Moonchild
21f570962c Pass forward X-Real-IP to nginx backend server (#901)
The TLS termination server sets X-Real-IP for the back-end to use, but the back-end configuration example doesn't actually extract it, so nginx logs (and back-end processing) never see the visiting IP: they just show `unix:` when using a unix socket as in the example given, or the local IP if forwarded over TCP.

Adding real_ip_header to the config will fix this.
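A minimal back-end snippet along those lines (both directives come from nginx's ngx_http_realip_module; trusting `unix:` matches the unix-socket example the message refers to):

```nginx
# Trust X-Real-IP set by the TLS-terminating front-end so nginx logs
# and back-end processing see the visitor's address instead of the
# local socket. Requires ngx_http_realip_module.
set_real_ip_from unix:;        # trust connections arriving over the unix socket
real_ip_header   X-Real-IP;    # take the client address from this header
```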

Signed-off-by: Moonchild <moonchild@palemoon.org>
2025-07-24 12:11:53 +00:00
Xe Iaso
1cb1352a44 fix(blog/v1.21.1): we avoid breaking changes
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-22 22:54:22 +00:00
Xe Iaso
a4c08687cc docs: add blogpost for announcing v1.21.1 (#886)
* docs: add release announcement post for v1.21.1

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs(v1.21.1): small fixups

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs(v1.21.1): spelling fixes

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs(v1.21.1): clarify that Bell is trash

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

check-spelling run (pull_request) for Xe/v1.21.1-blogpost

Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
on-behalf-of: @check-spelling <check-spelling-bot@check-spelling.dev>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
2025-07-22 16:42:58 -04:00
Xe Iaso
1a19d7eee4 chore: release v1.21.1 (#887)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-22 16:32:06 -04:00
Sunniva Løvstad
25af5a232f feat(localization): Add in Bokmål and Nynorsk translations (#855)
* feat(localization): add bokmål and nynorsk translations

* feat(localization): update tests for Bokmål and Nynorsk

* docs(localization): document bokmål and nynorsk locales

* fix(locales/nb,nn): remove unicode ellipsis to make tests pass

Signed-off-by: Xe Iaso <me@xeiaso.net>

* style(localization): sort languages to make test output stable

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

check-spelling run (pull_request) for main

Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
on-behalf-of: @check-spelling <check-spelling-bot@check-spelling.dev>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: Xe Iaso <xe.iaso@techaro.lol>
Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
Co-authored-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <xe.iaso@techaro.lol>
2025-07-21 22:37:49 -04:00
Xe Iaso
24d2501187 feat: Russian localization for Anubis (#882)
* feat(localization): Add Russian language translation

* test(localization): ensure Russian translations are tested

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs(changelog): update with Russian translation

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: Xe Iaso <xe.iaso@techaro.lol>
Co-authored-by: MichaelAgarkov <michaelagarkov@internet.ru>
2025-07-21 23:14:51 +00:00
ZerionSeven
1dc9525427 Add Finnish localization (#863)
* add Finnish localization

* add fi to supportedLanguages

* add entry for Finnish to CHANGELOG
2025-07-21 18:56:16 -04:00
Xe Iaso
24e3746b0b fix(lib): fix challenge issuance logic (#881)
* fix(lib): fix challenge issuance logic

Fixes #869

v1.21.0 changed the core challenge flow to maintain information about
challenges on the server side instead of only doing them via stateless
idempotent generation functions and relying on details to not change.
There was a subtle bug introduced in this change: if a client has an
unknown challenge ID set in its test cookie, Anubis will clear that
cookie and then throw an HTTP 500 error.

This has been fixed by making Anubis serve a new challenge page instead.
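The corrected fallback can be sketched like this (illustrative JavaScript with invented names; the real fix is in Anubis's Go code):

```javascript
// Illustrative sketch: when the test cookie carries a challenge ID the
// store no longer knows, clear the cookie and serve a fresh challenge
// instead of returning HTTP 500.
function handleChallengeCookie(challengeID, store) {
  if (!store.has(challengeID)) {
    return { clearCookie: true, action: "serve-new-challenge" };
  }
  return { clearCookie: false, action: "validate-existing" };
}
```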

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test(lib): you win this time spell check

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-21 18:53:59 -04:00
HQuest
31184ccd5f Update pt-BR.json (#878)
* Update pt-BR.json

While the current version is good enough as a machine translation, it does not read naturally to someone browsing sites made by native devs. This revision incorporates the subtle nuances of the original English version instead of plain, literal translations with questionable meanings.

Signed-off-by: HQuest <hquest@gmail.com>

* fix(locales/pt-BR): anubis is from Canada

CA is the ISO country code for Canada, but also the US state code for California.

Co-authored-by: Victor Fernandes  <victorvalenca@gmail.com>
Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: HQuest <hquest@gmail.com>
Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Victor Fernandes <victorvalenca@gmail.com>
2025-07-21 22:21:47 +00:00
Xe Iaso
e69fadddf1 fix(web/fast): remove event loop thrashing (#880)
Fixes #877

Continued from #879, event loop thrashing can cause stack space
exhaustion on ia32 systems. Previously this would thrash the event loop
in Firefox and Firefox derived browsers such as Pale Moon. I suspect
that this is the ultimate root cause of the bizarre irreproducible bugs
that Pale Moon (and maybe Cromite) users have been reporting since at
least #87 was merged.

The root cause is an invalid boolean statement:

```js
// send a progress update every 1024 iterations. since each thread checks
// separate values, one simple way to do this is by bit masking the
// nonce for multiples of 1024. unfortunately, if the number of threads
// is not prime, only some of the threads will be sending the status
// update and they will get behind the others. this is slightly more
// complicated but ensures an even distribution between threads.
if (
  (nonce > oldNonce) | 1023 && // we've wrapped past 1024
  (nonce >> 10) % threads === threadId // and it's our turn
) {
  postMessage(nonce);
}
```

The logic here looks fine but is subtly wrong as was reported in #877
by a user in the Pale Moon community. Consider the following scenario:

`nonce` is a counter that increments by the worker count every loop.
This is intended to spread the load between CPU cores as such:

| Iteration | Worker ID | Nonce |
| :-------- | :-------- | :---- |
| 1         | 0         | 0     |
| 1         | 1         | 1     |
| 2         | 0         | 2     |
| 2         | 1         | 3     |

And so on.

The incorrect part of this is the boolean logic, specifically the part
with the bitwise or `|`. I think the intent was to use a logical or
(`||`), but this had the effect of making the `postMessage` handler fire
on every iteration. The intent of this snippet (as the comment clearly
indicates) is to make sure that the main event loop is only updated with
the worker status every 1024 iterations per worker. This had the
opposite effect, causing a lot of messages to be sent from workers to
the parent JavaScript context.

This is bad for the event loop.

Instead, I have ripped out that statement and replaced it with a much
simpler increment only counter that fires every 1024 iterations.
Additionally, only the first thread communicates back to the parent
process. This does mean that in theory the other workers could be ahead
of the first thread (posting a message out of a worker has a nonzero
cost), but in practice I don't think this will be as much of an issue as
the current behaviour is.
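The corrected gating logic can be sketched as a pure predicate (illustrative JavaScript with invented names, assuming an increment-only per-worker iteration counter):

```javascript
// Illustrative sketch: report progress every 1024 iterations, and only
// from the first worker, so postMessage traffic stays off the main
// event loop's hot path instead of firing on every iteration.
function shouldPostProgress(iteration, threadId) {
  return threadId === 0 && iteration > 0 && iteration % 1024 === 0;
}
```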

The root cause of the stack exhaustion is likely the pressure caused by
all of the postMessage futures piling up. Maybe the larger stack size in
64 bit environments is causing this to be fine there, maybe it's some
combination of newer hardware in 64 bit systems making this not be as
much of a problem due to it being able to handle events fast enough to
keep up with the pressure.

Either way, thanks much to @wolfbeast and the Pale Moon community for
finding this. This will make Anubis faster for everyone!

Signed-off-by: Xe Iaso <xe.iaso@techaro.lol>
2025-07-21 18:05:24 -04:00
Xe Iaso
5e8ebaeb5d fix(web): amend future leak on proof of work solution (#879)
Possible fix for #877

In some cases, the parallel solution finder in Anubis could cause
all of the worker promises to leak due to the fact the promises
were being improperly terminated. A recursion bomb happens in the
following scenario:

1. A worker sends a message indicating it found a solution to the proof
   of work challenge.
2. The `onmessage` handler for that worker calls `terminate()`
3. Inside `terminate()`, the parent process loops through all other
   workers and calls `w.terminate()` on them.
4. It's possible that terminating a worker could lead to the `onerror`
   event handler.
5. This would create a recursive loop of `onmessage` -> `terminate` ->
   `onerror` -> `terminate` -> `onerror` and so on.

This infinite recursion quickly consumes all available stack space, but
this has never been noticed in development because all of my computers
have at least 64Gi of RAM provisioned to them, under the axiom that
paying for more RAM is cheaper than paying in time spent working around
not having enough. Additionally, ia32 has a smaller base stack size,
which means that they will run into this issue much sooner than users on
other CPU architectures will.

The fix adds a boolean `settled` flag to prevent termination from
running more than once.
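The guard can be sketched like this (illustrative JavaScript with invented names; `workers` stands in for the actual Web Worker objects):

```javascript
// Illustrative sketch: terminate() becomes idempotent via a settled
// flag, breaking the onmessage -> terminate -> onerror -> terminate
// recursion described above.
function makeTerminator(workers) {
  let settled = false;
  return function terminate() {
    if (settled) return; // already terminated once; ignore re-entry
    settled = true;
    for (const w of workers) w.terminate();
  };
}
```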

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-21 17:50:31 -04:00
searingmoonlight
3e1aaa6273 Remove duplicated string in Filipino language file (#875)
Signed-off-by: searingmoonlight <kitakita@disroot.org>
2025-07-20 23:17:48 -04:00
Jason Cameron
dce7ed2405 Revert "build(deps): bump the gomod group with 6 updates (#873)" (#874) 2025-07-21 01:22:40 +00:00
dependabot[bot]
03758405d3 build(deps): bump the gomod group with 6 updates (#873)
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-07-21 00:57:00 +00:00
dependabot[bot]
eb78ccc30c build(deps-dev): bump the npm group with 3 updates (#872)
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-07-20 20:56:02 -04:00
dependabot[bot]
4156f84020 build(deps): bump the github-actions group with 2 updates (#871)
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-07-20 20:54:50 -04:00
Xe Iaso
76dcd21582 feat(expressions): add missingHeader function to bot environment (#870)
Also add tests to the bot expressions custom functions.
2025-07-20 19:09:29 -04:00
Josef Moravec
6b639cd911 feat(localization): Add Czech language translation (#849)
* feat(localization): Add Czech language translation

* feat(localization): Add record to CHANGELOG.md
2025-07-18 17:23:19 -04:00
Josef Moravec
a0aba2d74a chore: spelling
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-18 21:22:50 +00:00
hankskyjames777
b485499125 fix untranslated string (#850)
Signed-off-by: hankskyjames777 <54805804+hankskyjames777@users.noreply.github.com>
2025-07-18 13:52:38 -04:00
Nicholas Sherlock
300720f030 fix broken bbolt database cleanup process (#848)
Closes #820, was broken since #761
2025-07-18 13:51:32 -04:00
Xe Iaso
d6298adc6d chore: fix name of backoff-retry, expose in devcontainer
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-18 17:51:13 +00:00
Xe Iaso
1a9d8fb0cf test(ssh-ci): deflake SSH CI with exponential backoff (#859)
* test(ssh-ci): deflake SSH CI with exponential backoff

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test(ssh-ci): re-disable in PRs

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-18 17:46:49 +00:00
Xe Iaso
36e25ff5f3 test: add i18n smoke test (#858)
* test: add i18n smoke test

Makes sure that all of the languages that Anubis supports show up when
the challenge page is sent to a client.

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test(i18n): build anubis so that the smoke test doesn't backoff timeout

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-18 13:30:46 -04:00
Emily Rowlands
c59b7179c3 fix(cmd/anubis): add signal handling to metrics server (#856)
This fixes a bug that was introduced in 68b653b0, in which the call
to metricsServer was passed a plain context.Background without
signal handling.

This commit adds back in the signal handling for the metrics server,
as well as for the Thoth client and storage backend.

Closes: #853

Signed-off-by: Emily Rowlands <emily@erowl.net>
2025-07-18 13:56:52 +00:00
Lothar Serra Mari
59515ed669 docs(known-instances): update list of known instances (#847)
Signed-off-by: Lothar Serra Mari <mail@serra.me>
2025-07-17 07:37:37 -04:00
Xe Iaso
4d6b578f93 chore: release v1.21.0 (#844)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-16 21:21:20 -04:00
Xe Iaso
2915c1d209 fix(docs/manifest): k8s typo
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-16 20:35:33 -04:00
Xe Iaso
68b653b099 feat(anubis): add /healthz route to metrics server (#843)
* feat(anubis): add /healthz route to metrics server

Also add health check test for Docker Compose and update documentation
for health checking Anubis with Docker Compose.

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-17 00:31:18 +00:00
CXM
509a4f3ce8 fix(localization): fix missing string in template (#835)
* fix(localization): fix missing string in template

* chore: temp place locale
2025-07-16 19:40:27 -04:00
Marcel Bischoff
5c4d8480e6 Add services folder, add Uptime Robot policy definition (#838)
Uptime Robot is a commonly used service for tracking service
interruptions. Additional policy definitions may be beneficial for
services that publish the IP addresses they use. The list is also
aggregated to shorten it slightly.

Signed-off-by: Marcel Bischoff <marcel@herrbischoff.com>
2025-07-16 09:17:48 -04:00
Xe Iaso
132b2ed853 feat(cmd/anubis): capture ja4h fingerprints (#834)
This is not used yet, but it will be part of a larger strategy around
adding/removing weight based on JA4H (and other) fingerprint matches
with Thoth.

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-14 23:31:33 -04:00
Xe Iaso
d28991ce8d fix: race conditions, cookie logic, and the try again button (#833)
* fix(lib): fix race condition when rendering multiple challenge pages at once

Closes #832

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(web): make try again button work

Looks like the intent of this was "try the solution again". This fix
makes the client try the challenge again.

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(web): don't block a user if they have an invalid challenge cookie

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: update CHANGELOG

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-15 00:54:08 +00:00
Xe Iaso
0fd4bb81b8 ci(docs): fix docs image tag names in the right file
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-14 21:28:22 +00:00
Xe Iaso
603c68fd54 ci(docs): fix docs image tag names
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-14 21:25:36 +00:00
Xe Iaso
c8f2eb1185 ci(docs): make a new docker image for the docs per commit sha (#831)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-14 17:23:23 -04:00
Xe Iaso
f6b94dca98 test: add git push smoke test (#830)
* test: add git push smoke test

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test(git-push): add git config commands

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test(git-push): set upstream

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test(git-push): set remote branch name

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-14 10:42:35 -04:00
Xe Iaso
6d8b98eb3d Revert "test: add git push smoke test"
This reverts commit b9d8275234.
2025-07-14 10:26:47 -04:00
Xe Iaso
b9d8275234 test: add git push smoke test
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-14 10:25:40 -04:00
Xe Iaso
c2cc1df172 test: add smoke test for git clone (#828)
* test: add smoke test for git clone

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: exempt tests from spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test: rename this to git-clone

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test: pin ko setup reference

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test: don't persist credentials

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test: terminating newline

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-14 14:01:03 +00:00
Xe Iaso
735b2ceb14 fix(default-config): disable system load check by default (#827)
This was causing issues with git clone against highly loaded servers. I
thought that this would be pretty innocuous, but I guess I was wrong.
Oops!

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-14 13:06:56 +00:00
dependabot[bot]
2cb57fc247 build(deps-dev): bump esbuild from 0.25.5 to 0.25.6 in the npm group (#825)
Bumps the npm group with 1 update: [esbuild](https://github.com/evanw/esbuild).


Updates `esbuild` from 0.25.5 to 0.25.6
- [Release notes](https://github.com/evanw/esbuild/releases)
- [Changelog](https://github.com/evanw/esbuild/blob/main/CHANGELOG.md)
- [Commits](https://github.com/evanw/esbuild/compare/v0.25.5...v0.25.6)

---
updated-dependencies:
- dependency-name: esbuild
  dependency-version: 0.25.6
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: npm
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-07-13 21:28:29 -04:00
dependabot[bot]
61ce581f36 build(deps): bump the gomod group with 3 updates (#823)
Bumps the gomod group with 3 updates: [github.com/gaissmai/bart](https://github.com/gaissmai/bart), [golang.org/x/net](https://github.com/golang/net) and [golang.org/x/text](https://github.com/golang/text).


Updates `github.com/gaissmai/bart` from 0.20.4 to 0.20.5
- [Release notes](https://github.com/gaissmai/bart/releases)
- [Commits](https://github.com/gaissmai/bart/compare/v0.20.4...v0.20.5)

Updates `golang.org/x/net` from 0.41.0 to 0.42.0
- [Commits](https://github.com/golang/net/compare/v0.41.0...v0.42.0)

Updates `golang.org/x/text` from 0.26.0 to 0.27.0
- [Release notes](https://github.com/golang/text/releases)
- [Commits](https://github.com/golang/text/compare/v0.26.0...v0.27.0)

---
updated-dependencies:
- dependency-name: github.com/gaissmai/bart
  dependency-version: 0.20.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: gomod
- dependency-name: golang.org/x/net
  dependency-version: 0.42.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: gomod
- dependency-name: golang.org/x/text
  dependency-version: 0.27.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: gomod
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-07-13 20:41:26 -04:00
Xe Iaso
3f6750ac7d chore(sponsors): add fabulous systems
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-12 23:08:30 +00:00
Xe Iaso
25d75b352a chore: release v1.21.0-pre3 2025-07-12 17:29:18 -04:00
Xe Iaso
de17823bc7 chore: release v1.21.0-pre2 (#816)
* chore: release v1.21.0-pre2

* chore: disable automated stable package builds for now

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-12 16:57:08 -04:00
Xe Iaso
29622e605d chore(docs): add link to status page in the footer (#814)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-12 13:32:24 -04:00
Jesús Martínez Novo
9fa1795db7 fix(index.templ) centered-div class usage typo (#812)
* Fix centered-div class usage in index.templ

There was a redundant <center> tag around a div with the centered-div class. Well, not entirely redundant: a typo in the class attribute caused the class to not apply.

Removed another <center> tag and replaced it with a div.centered-div for consistency.

Signed-off-by: Jesús Martínez Novo <martineznovo@gmail.com>

* Fix centered-div class usage in index.templ (continuation)

Template needed to be compiled into Go code...

---------

Signed-off-by: Jesús Martínez Novo <martineznovo@gmail.com>
2025-07-11 14:59:17 -04:00
Maxime Louet
fbf69680f5 chore(docs): fix typo in configuration/expressions (#811)
This example code block was missing a closing single quote.

Signed-off-by: Maxime Louet <maxime@saumon.io>
2025-07-11 13:30:27 +00:00
Lothar Serra Mari
c74de19532 docs(known-instances): add rpmfusion.org and wiki.freepascal.org to known instances (#807)
* docs(known-instances): add rpmfusion.org to known instances

Signed-off-by: Lothar Serra Mari <mail@serra.me>

* docs(known-instances): add wiki.freepascal.org to known instances

Signed-off-by: Lothar Serra Mari <mail@serra.me>

---------

Signed-off-by: Lothar Serra Mari <mail@serra.me>
2025-07-10 14:38:17 -04:00
Evgeni Golov
6dc726013a correct gitea.botPolicies extension to be yaml, not json (#800)
* correct gitea.botPolicies extension to be yaml, not json

While Anubis probably doesn't care about the extension, and would parse a JSON file just fine too, the rest of the page talks about `gitea.botPolicies.yaml`, so let's be consistent

Signed-off-by: Evgeni Golov <evgeni@golov.de>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Evgeni Golov <evgeni@golov.de>
Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-07-10 17:10:47 +00:00
Lothar Serra Mari
02304e8f3c docs(known-instances): update list of known instances (#801)
* docs(known-instances): add clew.se to known instances

Signed-off-by: Lothar Serra Mari <mail@serra.me>

* docs(known-instances): add tumfatig.net to known instances

Signed-off-by: Lothar Serra Mari <mail@serra.me>

---------

Signed-off-by: Lothar Serra Mari <mail@serra.me>
2025-07-10 12:56:46 -04:00
Xe Iaso
607c9791d8 chore(docs): add fly.toml file as a hail mary
Ref #799

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-10 06:05:17 -04:00
Xe Iaso
6b67be86a1 chore(docs/manifest): branded 404 page
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-09 17:06:23 -04:00
Xe Iaso
e02f017153 chore(docs/manifest): remove fastcgi from the nginx config
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-09 17:01:42 -04:00
Xe Iaso
66b39f64af docs: update CHANGELOG for language changes (#793)
Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: Xe Iaso <xe.iaso@techaro.lol>
2025-07-09 20:58:08 +00:00
Xe Iaso
944fd25924 chore: use nginx-micro to make the docs image 13 MB (#796)
* chore: use nginx-micro to make the docs image 13 MB

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: update spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-09 20:54:44 +00:00
Xe Iaso
fa3fbfb0a5 feat(blog): incident report for TI-20250709-0001 (#795)
* feat(blog): incident report for TI-20250709-0001

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

check-spelling run (pull_request) for Xe/TI-20250709-0001

Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
on-behalf-of: @check-spelling <check-spelling-bot@check-spelling.dev>

* fix(blog/TI-20250709-0001): add TecharoHQ/anubis#794

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(blog/TI-20250709-0001): amend grammar

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
2025-07-09 14:56:12 +00:00
Xe Iaso
3c739c1305 fix(internal/thoth): don't block Anubis starting if healthcheck fails (#794)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-09 13:31:28 +00:00
searingmoonlight
cc56baa5c7 feat(localization): Add Filipino language (#775)
* feat(localization): Add Filipino language

* Add tests

* remove duplicated string

* Minor fixes in translation

Signed-off-by: searingmoonlight <scripterrookie12@gmail.com>

---------

Signed-off-by: searingmoonlight <scripterrookie12@gmail.com>
2025-07-09 12:07:26 +00:00
giomba
053d29e0b6 feat(localization): Add Italian language translation (#778)
Signed-off-by: Giovan Battista Rolandi <giomba@glgprograms.it>
2025-07-09 11:49:19 +00:00
Sveinn í Felli
a668095c22 Create is.json (#780)
* Create is.json

Adding Icelandic translation

Signed-off-by: Sveinn í Felli <sv1@fellsnet.is>

* fix(localization): add Icelandic to manifest.json

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Sveinn í Felli <sv1@fellsnet.is>
Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-07-09 11:29:40 +00:00
Henri Vasserman
1c4a1aec4a feat(i18n): add Estonian locale (#783)
* feat(i18n): add et locale

* chore: update changelog

* wording

* "feature"
2025-07-09 11:18:11 +00:00
dai
5b8b6d1c94 feat(localization): add Japanese language translation (#772)
* feat(localization): add Japanese language translation

* fix(locales): add Japanese to the manifest

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(locales): fix manifest

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: Xe Iaso <xe.iaso@techaro.lol>
Co-authored-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <xe.iaso@techaro.lol>
2025-07-09 07:08:34 -04:00
Joe Brockmeier
0cb6ef76e1 Update apache.mdx (#784)
Was missing the opening stanza to enable mod_proxy for Apache.

Signed-off-by: Joe Brockmeier <jzb@zonker.net>
2025-07-09 07:08:19 -04:00
Henri Vasserman
a900e98b8b fix(localization): HTML language header and forced-language (#787)
* fix: HTML language header and forced-language

* style(changelog): added a couple headers

* add test
2025-07-09 07:04:42 -04:00
Mahid Sheikh
e79cd93b61 docs(installation): Clarify information about private keys and multiple instances (#788)
Signed-off-by: Mahid Sheikh <mahid@standingpad.org>
2025-07-09 10:54:36 +00:00
CXM
d17fc6a174 feat(localization): add Simplified Chinese (#774) 2025-07-09 06:53:08 -04:00
Lothar Serra Mari
95768cb70f docs(known-instances): update list of known instances (#776)
* docs(known-instances): update list of known instances

Signed-off-by: Lothar Serra Mari <mail@serra.me>

* Update metadata

check-spelling run (pull_request) for probably-daily-new-spotted-instances-update

Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
on-behalf-of: @check-spelling <check-spelling-bot@check-spelling.dev>

---------

Signed-off-by: Lothar Serra Mari <mail@serra.me>
Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
Co-authored-by: Lothar Serra Mari <lotharsm@users.noreply.github.com>
2025-07-07 22:01:14 +00:00
mihugo
ca61b8a05f Update apache.mdx replace nginx with Apache in place (#779)
Signed-off-by: mihugo <mike.github@m3h.com>
2025-07-07 17:17:24 -04:00
dependabot[bot]
1ea1157cd7 build(deps): bump github.com/shirou/gopsutil/v4 in the gomod group (#771)
Bumps the gomod group with 1 update: [github.com/shirou/gopsutil/v4](https://github.com/shirou/gopsutil).


Updates `github.com/shirou/gopsutil/v4` from 4.25.1 to 4.25.6
- [Release notes](https://github.com/shirou/gopsutil/releases)
- [Commits](https://github.com/shirou/gopsutil/compare/v4.25.1...v4.25.6)

---
updated-dependencies:
- dependency-name: github.com/shirou/gopsutil/v4
  dependency-version: 4.25.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: gomod
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-07-06 21:21:14 -04:00
dependabot[bot]
44ae5f2e2b build(deps): bump the github-actions group with 2 updates (#770)
Bumps the github-actions group with 2 updates: [dominikh/staticcheck-action](https://github.com/dominikh/staticcheck-action) and [github/codeql-action](https://github.com/github/codeql-action).


Updates `dominikh/staticcheck-action` from 1.3.1 to 1.4.0
- [Release notes](https://github.com/dominikh/staticcheck-action/releases)
- [Changelog](https://github.com/dominikh/staticcheck-action/blob/master/CHANGES.md)
- [Commits](fe1dd0c365...024238d289)

Updates `github/codeql-action` from 3.29.1 to 3.29.2
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](39edc492db...181d5eefc2)

---
updated-dependencies:
- dependency-name: dominikh/staticcheck-action
  dependency-version: 1.4.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: github-actions
- dependency-name: github/codeql-action
  dependency-version: 3.29.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Jason Cameron <git@jasoncameron.dev>
2025-07-06 20:51:24 -04:00
Xe Iaso
ea2e76c6ee chore: tag version 1.21.0-pre1
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-06 19:35:06 -04:00
Xe Iaso
4ea0add50d feat(lib/policy/expressions): add system load average to bot expression inputs (#766)
* feat(lib/policy/expressions): add system load average to bot expression inputs

This lets Anubis dynamically react to system load in order to
increase and decrease the required level of scrutiny. High load? More
scrutiny required. Low load? Less scrutiny required.

* docs: spell system correctly

Signed-off-by: Xe Iaso <me@xeiaso.net>

* Update metadata

check-spelling run (pull_request) for Xe/load-average

Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
on-behalf-of: @check-spelling <check-spelling-bot@check-spelling.dev>

* fix(default-config): don't enable low load average feature by default

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
Signed-off-by: Xe Iaso <xe.iaso@techaro.lol>
2025-07-06 20:13:50 +00:00
XLion
289c802a0b feat(localization): Add Traditional Chinese language translation (#759)
* Add translation for Traditional Chinese

* Add translation for Traditional Chinese: test

* Add translation for Traditional Chinese: Add PR number to CHANGELOG

* Add translation for Traditional Chinese: test: remove empty lines

* Add translation for Traditional Chinese: test: remove empty lines
2025-07-06 19:59:00 +00:00
Lothar Serra Mari
543b942be1 docs(known-instances): update list of known instances (#767)
Signed-off-by: Lothar Serra Mari <mail@serra.me>
2025-07-06 19:45:18 +00:00
Lothar Serra Mari
edbe1dcfd6 feat(localization): Update German language translation (#764) 2025-07-06 11:25:05 -04:00
Xe Iaso
94db16c0df docs: add emma.pet sponsor
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-06 14:25:27 +00:00
Xe Iaso
c2f46907a1 docs: remove proof of work branding (#763)
* docs(index): start cleanup, remove proof of work from core branding

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs(index): rewrite copy, add CELPHASE illustrations

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-06 02:34:52 +00:00
Xe Iaso
6fa5b8e4e0 fix(lib/store/bbolt): run cleanup every hour instead of every 5 minutes (#762)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-06 01:19:44 +00:00
Xe Iaso
f98750b038 fix(lib/store/bbolt): use a multi-bucket flow instead of a single bucket flow (#761)
* fix(lib/store/bbolt): use a multi-bucket flow instead of a single bucket flow

Signed-off-by: Xe Iaso <me@xeiaso.net>

* Update metadata

check-spelling run (push) for Xe/optimize-bbolt

Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
on-behalf-of: @check-spelling <check-spelling-bot@check-spelling.dev>

* fix(lib/store/bbolt): gracefully handle the obsolete anubis bucket in cleanup

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
2025-07-06 01:16:11 +00:00
Xe Iaso
7d0c58d1a8 fix: make ogtags and dnsbl use the Store instead of memory (#760)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-05 16:17:46 -04:00
Xe Iaso
e870ede120 docs(known-instances): add git.aya.so
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-05 16:41:36 +00:00
Xe Iaso
592d1e3dfc docs(known-instances): add Pluralpedia
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-05 16:40:32 +00:00
Xe Iaso
f6254b4b98 docs(installation): clarify BASE_PREFIX matches the /.within.website endpoints
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-05 13:47:02 +00:00
Lothar Serra Mari
d19026d693 docs(known-instances): Add Duke University, coinhoards.org (and myself) to known instances (#757)
* docs(known-instances): add Duke University to known instances

Signed-off-by: Lothar Serra Mari <mail@serra.me>

* docs(known-instances): add fabulous.systems to known instances

Signed-off-by: Lothar Serra Mari <mail@serra.me>

* docs(known-instances): add coinhoards.org to known instances

Signed-off-by: Lothar Serra Mari <mail@serra.me>

* chore(spelling): exempt the known instances page

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Lothar Serra Mari <mail@serra.me>
Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-07-05 08:29:44 -04:00
Xe Iaso
7b72c790ab chore: spelling
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-05 12:29:19 +00:00
Xe Iaso
719a1409ca test(lib/store/bbolt): disable this test case for now
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-05 04:56:19 +00:00
Xe Iaso
890f21bf47 chore(devcontainer): move playwright to its own devcontainer service (#756)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-05 00:53:45 -04:00
Xe Iaso
93bfe910d8 docs(user/faq): clarify Anubis not being a cryptocurrency miner
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-04 22:49:39 +00:00
Xe Iaso
19d8de784b chore(docs/manifest): enable bbolt in an emptyDir
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-04 21:07:37 +00:00
Xe Iaso
dff2176beb feat(lib): use new challenge creation flow (#749)
* feat(decaymap): add Delete method

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(lib/challenge): refactor Validate to take ValidateInput

Signed-off-by: Xe Iaso <me@xeiaso.net>

* feat(lib): implement store interface

Signed-off-by: Xe Iaso <me@xeiaso.net>

* feat(lib/store): add metapackage to import all store implementations

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(policy): import all store backends

Signed-off-by: Xe Iaso <me@xeiaso.net>

* feat(lib): use new challenge creation flow

Previously Anubis constructed challenge strings from request metadata.
This was a good idea in spirit, but has turned out to be a very bad idea
in practice. This new flow reuses the Store facility to dynamically
create challenge values with completely random data.

This is a fairly big rewrite of how Anubis processes challenges. Right
now it defaults to using the in-memory storage backend, but on-disk
(boltdb) and valkey-based adaptors will come soon.

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(decaymap): fix documentation typo

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(lib): fix SA4004

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test(lib/store): make generic storage interface test adaptor

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(decaymap): invert locking process for Delete

Signed-off-by: Xe Iaso <me@xeiaso.net>

* feat(lib/store): add bbolt store implementation

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: go mod tidy

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(devcontainer): adapt to docker compose, add valkey service

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(lib): make challenges live for 30 minutes by default

Signed-off-by: Xe Iaso <me@xeiaso.net>

* feat(lib/store): implement valkey backend

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test(lib/store/valkey): disable tests if not using docker

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test(lib/policy/config): ensure valkey stores can be loaded

Signed-off-by: Xe Iaso <me@xeiaso.net>

* Update metadata

check-spelling run (pull_request) for Xe/store-interface

Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
on-behalf-of: @check-spelling <check-spelling-bot@check-spelling.dev>

* chore(devcontainer): remove port forwards because vs code handles that for you

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs(default-config): add a nudge to the storage backends section of the docs

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(docs): listen on 0.0.0.0 for dev container support

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs(policy): document storage backends

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: update CHANGELOG and internal links

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs(admin/policies): don't start a sentence with as

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: fixes found in review

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Signed-off-by: check-spelling-bot <check-spelling-bot@users.noreply.github.com>
2025-07-04 20:42:28 +00:00
SGHFan
506d8817d5 Update known-instances.md (#755)
Add eBird to instance list

Signed-off-by: SGHFan <62897482+SGHFan@users.noreply.github.com>
2025-07-04 19:54:26 +00:00
Duru Can Celasun
d0fae02d05 feat(localization): Add Turkish language translation (#751)
Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-07-04 00:50:16 -04:00
Xe Iaso
845095c3f6 chore(robots.txt): don't block CCBot
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-04 00:23:21 +00:00
Xe Iaso
2f1e78cc6c chore(docs/manifest): allow common crawl to test with the team
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-04 00:22:43 +00:00
Xe Iaso
7c0996448a chore(default-config): allowlist common crawl (#753)
This may seem strange, but allowlisting Common Crawl means that
scrapers have less incentive to scrape because they can just grab the
data from Common Crawl instead of scraping it again.
2025-07-04 00:10:45 +00:00
Xe Iaso
d7a758f805 docs: add BotStopper docs from the git repo (#752)
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-03 23:09:45 +00:00
Martin
c121896f9c feat(localization): Add German language translation (#741)
* Add german translation

* Adjust german localization

* Adjust js_finished_reading in german localization

* Mention this change in CHANGELOG.md

* Add test for German localization

* Update lib/localization/locales/de.json

Co-authored-by: Florian Lehner <florianl@users.noreply.github.com>
Signed-off-by: Martin <31348196+Earl0fPudding@users.noreply.github.com>

* Remove duplicate "leider" in lib/localization/locales/de.json

Co-authored-by: Florian Lehner <florianl@users.noreply.github.com>
Signed-off-by: Martin <31348196+Earl0fPudding@users.noreply.github.com>

* Update lib/localization/locales/de.json

Co-authored-by: Florian Lehner <florianl@users.noreply.github.com>
Signed-off-by: Martin <31348196+Earl0fPudding@users.noreply.github.com>

* Update lib/localization/locales/de.json

Co-authored-by: Florian Lehner <florianl@users.noreply.github.com>
Signed-off-by: Martin <31348196+Earl0fPudding@users.noreply.github.com>

---------

Signed-off-by: Martin <31348196+Earl0fPudding@users.noreply.github.com>
Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Florian Lehner <florianl@users.noreply.github.com>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-07-03 10:48:17 +00:00
Xe Iaso
888b7d6e77 fix(run/anubis@.service): unique runtimedir per instance (#750)
* fix(run/anubis@.service): unique runtimedir per instance

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-03 10:29:05 +00:00
Martin
0e43138324 feat(localization): Add option for forcing a language (#742)
* Add forcesLanguage option

* Change comments for forced language option

* Add changes to CHANGELOG.md

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-07-02 05:33:00 +00:00
Xe Iaso
c981c23f7e chore: npm run generate
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-02 05:25:10 +00:00
Xe Iaso
9f0c5e974e fix(web/main): remove the success interstitial (#745)
I'm gonna be totally honest here, I'm still not sure why #564 is still
an issue. This is really confusing and I'm going to totally throw out
how Anubis issues challenges and redo it with Valkey (#201, #622).

The problem seems to be that I assumed the makeChallenge function in
package lib was idempotent for the same client. I have no idea why it
would be inconsistent, but for some reason it is, and I'm just at a
loss for words as to why this is happening.

This stops the bleeding by improving the UX as a stopgap.

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-07-01 23:44:38 +00:00
Victor Fernandes
292c470ada Set cookies to have the Secure flag default to true (#739)
* Set Cookies to use the Secure Flag and default SameSite to None

* Add secure flag test

* Updated changelog and documentation for secure flag option
2025-06-30 14:58:31 -04:00
Rafael Fontenelle
12453fdc00 Fix translations in pt-BR.json (#729)
Signed-off-by: Rafael Fontenelle <rffontenelle@users.noreply.github.com>
2025-06-30 14:14:24 -04:00
Xe Iaso
f5b3bf81bc feat: dev container support (#734)
* chore: add devcontainer for Anubis

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(devcontainer): ensure user can write to $HOME

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(devcontainer): forward ports, add launch config

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(devcontainer): add playwright deps

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs: document devcontainer usage

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

* ci(devcontainer): fix action references

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(devcontainer): fix ko on arm64

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-06-29 23:41:29 -04:00
dependabot[bot]
1820649987 build(deps): bump the gomod group with 2 updates (#736)
---
updated-dependencies:
- dependency-name: github.com/a-h/templ
  dependency-version: 0.3.906
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: gomod
- dependency-name: sigs.k8s.io/yaml
  dependency-version: 1.5.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: gomod
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-29 21:32:56 -04:00
dependabot[bot]
14eeeb56d6 build(deps): bump the github-actions group with 2 updates (#735)
Bumps the github-actions group with 2 updates: [astral-sh/setup-uv](https://github.com/astral-sh/setup-uv) and [github/codeql-action](https://github.com/github/codeql-action).


Updates `astral-sh/setup-uv` from 6.3.0 to 6.3.1
- [Release notes](https://github.com/astral-sh/setup-uv/releases)
- [Commits](445689ea25...bd01e18f51)

Updates `github/codeql-action` from 3.29.0 to 3.29.1
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](ce28f5bb42...39edc492db)

---
updated-dependencies:
- dependency-name: astral-sh/setup-uv
  dependency-version: 6.3.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: github-actions
- dependency-name: github/codeql-action
  dependency-version: 3.29.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-29 20:53:14 -04:00
Martin
d9e0fbe905 feat(cmd): Add custom cookie prefix (#732)
* Add cookie prefix option

* Add explanation comment for TestCookieName


* Rename TestCookieName value from cookie-test-if-you-block-this-anubis-wont-work to cookie-verification

* Add changes to CHANGELOG.md

* Add values to CookieName and TestCookieName in anubis.go required for testcases
2025-06-29 20:03:09 -04:00
Martin
6aa17532da fix: Dynamic cookie domain not working (#731)
* Fix cookieDynamicDomain option not being set in Options struct

* Fix using wrong cookie name when using dynamic cookie domains

* Adjust testcases for new cookie option structs

* Add known words to expect.txt and change typo in Zombocom

* Cleanup expect.txt

* Add changes to changelog

* Bump versions of grpc and apimachinery

* Fix testcases and add additional condition for dynamic cookie domain
2025-06-29 15:38:55 -04:00
Xe Iaso
b1edf84a7c docs(blog/v1.20.0): i am smart
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-06-27 21:10:02 -04:00
Xe Iaso
d47a3406db docs(blog/v1.20.0): how did CI not catch this?
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-06-27 19:55:58 -04:00
Xe Iaso
ff5991b5cf docs(blog/v1.20.0): add cover image
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-06-27 19:20:12 -04:00
Xe Iaso
19f78f37ad docs(blog/v1.20.0): fix typo
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-06-27 18:59:07 -04:00
Xe Iaso
b0b0a5c08a feat(blog): v1.20.0 announcement post
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-06-27 18:56:09 -04:00
Rafael Fontenelle
261306dc63 Add Brazilian Portuguese translation (#726)
* Create pt-br.json

Signed-off-by: Rafael Fontenelle <rffontenelle@users.noreply.github.com>

* Enable pt-br locale

Signed-off-by: Rafael Fontenelle <rffontenelle@users.noreply.github.com>

* Fix language code

Signed-off-by: Rafael Fontenelle <rffontenelle@users.noreply.github.com>

* Update and rename pt-br.json to pt-BR.json

Signed-off-by: Rafael Fontenelle <rffontenelle@users.noreply.github.com>

* Update lib/localization/locales/pt-BR.json

Co-authored-by: Victor Fernandes  <victorvalenca@gmail.com>
Signed-off-by: Rafael Fontenelle <rffontenelle@users.noreply.github.com>

---------

Signed-off-by: Rafael Fontenelle <rffontenelle@users.noreply.github.com>
Co-authored-by: Victor Fernandes <victorvalenca@gmail.com>
2025-06-27 20:56:56 +00:00
CXM
3520421757 fix: determine bind network from bind address (#714)
* fix: determine bind network from bind address

* docs: update CHANGELOG

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-06-27 17:57:37 +00:00
Laurent Laffont
ad5430612f feat: implement localization system (#716)
* lib/localization: implement localization system

Locale files are placed in lib/localization/locales/. If you add a
locale, update manifest.json with available locales.

* Exclude locales from check spelling

* tests(lib/localization): add comprehensive translations test

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix(challenge/metarefresh): enable localization

Signed-off-by: Xe Iaso <me@xeiaso.net>

* fix: use simple syntax for localization in templ

Also localize CELPHASE into French according to the wishes of the
artist.

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore:(js): fix forbidden patterns

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: add goi18n to tools

Signed-off-by: Xe Iaso <me@xeiaso.net>

* test(lib/localization): dynamically determine the list of supported languages

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-06-27 17:49:15 +00:00
Xe Iaso
c2423d0688 chore: release v1.20.0
Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-06-27 12:06:22 -04:00
Xe Iaso
a1b7d2ccda feat: dynamic cookie domains (#722)
* feat: dynamic cookie domains

Replaces #685

I was having weird testing issues when trying to merge #685, so I
rewrote it from scratch to be a lot more minimal.

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2025-06-26 12:11:59 +00:00
msporleder
7cf6ac5de6 remove incorrect module mentions (#687)
mod_proxy_html is for modifying html content in response bodies. The example configs are using mod_proxy_http.

https://httpd.apache.org/docs/2.4/mod/mod_proxy_html.html
vs
https://httpd.apache.org/docs/2.4/mod/mod_proxy_http.html

And anyway mod_proxy + mod_proxy_http should already be installed on almost all systems.

Signed-off-by: msporleder <msporleder@gmail.com>
2025-06-26 10:47:30 +00:00
Martin
59f5b07281 feat: Add option to use HS512 secret for JWT instead of ED25519 (#680)
* Add functionality for HS512 JWT tokens

* Add HS512_SECRET to installation docs

* Update CHANGELOG.md regarding HS512

* Move HS512_SECRET to advanced section in docs

* Move token Keyfunc logic to Server function

* Add Keyfunc to spelling

* chore: spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
Co-authored-by: Martin Weidenauer <mweidenauer@nanx0as46153.anx.local>
Co-authored-by: Xe Iaso <me@xeiaso.net>
2025-06-26 10:06:44 +00:00
Jason Cameron
1562f88c35 chore: Remove unused/dead code (#703)
* chore(xess): remove unused xess templates

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* chore(checker): remove unused staticHashChecker implementation

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* feat: add pinact and deadcode to go tools (pinact is used for the gha pinning)

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* chore: update Docker and kubectl actions to latest versions

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* chore: update Homebrew action from master to main in workflow files

See  df537ec97f

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* chore: remove unused go-colorable and tools dependencies from go.sum

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* chore: update postcss-import and other dependencies to latest versions

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* chore: update Docusaurus dependencies to version 3.8.1

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

* chore: downgrade playwright and playwright-core to version 1.52.0

Signed-off-by: Jason Cameron <git@jasoncameron.dev>

---------

Signed-off-by: Jason Cameron <git@jasoncameron.dev>
2025-06-25 09:31:33 -04:00
Outvi V
15bd9b6a44 Populate OpenGraph configurations to Opens.OpenGraph (#717)
* chore: read OpenGraph configurations

* docs: update CHANGELOG
2025-06-24 15:12:26 +00:00
dependabot[bot]
1ca531b930 build(deps): bump the gomod group with 4 updates (#709)
Bumps the gomod group with 4 updates: [github.com/grpc-ecosystem/go-grpc-middleware/providers/prometheus](https://github.com/grpc-ecosystem/go-grpc-middleware), [github.com/grpc-ecosystem/go-grpc-middleware/v2](https://github.com/grpc-ecosystem/go-grpc-middleware), [google.golang.org/grpc](https://github.com/grpc/grpc-go) and [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery).


Updates `github.com/grpc-ecosystem/go-grpc-middleware/providers/prometheus` from 1.0.1 to 1.1.0
- [Release notes](https://github.com/grpc-ecosystem/go-grpc-middleware/releases)
- [Commits](https://github.com/grpc-ecosystem/go-grpc-middleware/compare/providers/prometheus/v1.0.1...v1.1.0)

Updates `github.com/grpc-ecosystem/go-grpc-middleware/v2` from 2.1.0 to 2.3.2
- [Release notes](https://github.com/grpc-ecosystem/go-grpc-middleware/releases)
- [Commits](https://github.com/grpc-ecosystem/go-grpc-middleware/compare/v2.1.0...v2.3.2)

Updates `google.golang.org/grpc` from 1.72.2 to 1.73.0
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.72.2...v1.73.0)

Updates `k8s.io/apimachinery` from 0.33.1 to 0.33.2
- [Commits](https://github.com/kubernetes/apimachinery/compare/v0.33.1...v0.33.2)

---
updated-dependencies:
- dependency-name: github.com/grpc-ecosystem/go-grpc-middleware/providers/prometheus
  dependency-version: 1.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: gomod
- dependency-name: github.com/grpc-ecosystem/go-grpc-middleware/v2
  dependency-version: 2.3.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: gomod
- dependency-name: google.golang.org/grpc
  dependency-version: 1.73.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: gomod
- dependency-name: k8s.io/apimachinery
  dependency-version: 0.33.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: gomod
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-23 15:59:08 -04:00
459 changed files with 23698 additions and 5242 deletions

.devcontainer/Dockerfile Normal file

@@ -0,0 +1,12 @@
FROM ghcr.io/xe/devcontainer-base/pre/go
WORKDIR /app
COPY go.mod go.sum package.json package-lock.json ./
RUN apt-get update \
&& apt-get -y install zstd brotli redis \
&& mkdir -p /home/vscode/.local/share/fish \
&& chown -R vscode:vscode /home/vscode/.local/share/fish \
&& chown -R vscode:vscode /go
CMD ["/usr/bin/sleep", "infinity"]

.devcontainer/README.md Normal file

@@ -0,0 +1,13 @@
# Anubis Dev Container
Anubis offers a [development container](https://containers.dev/) image in order to make it easier to contribute to the project. This image is based on [Xe/devcontainer-base/go](https://github.com/Xe/devcontainer-base/tree/main/src/go), which is based on Debian Bookworm with the following customizations:
- [Fish](https://fishshell.com/) as the shell complete with a custom theme
- [Go](https://go.dev) at the most recent stable version
- [Node.js](https://nodejs.org/en) at the most recent stable version
- [Atuin](https://atuin.sh/) to sync shell history between your host OS and the development container
- [Docker](https://docker.com) to manage and build Anubis container images from inside the development container
- [Ko](https://ko.build/) to build production-ready Anubis container images
- [Neovim](https://neovim.io/) for use with Git
This development container is tested and known to work with [Visual Studio Code](https://code.visualstudio.com/). If you run into problems with it outside of VS Code, please file an issue and let us know what editor you are using.


@@ -0,0 +1,34 @@
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/debian
{
"name": "Dev",
"dockerComposeFile": [
"./docker-compose.yaml"
],
"service": "workspace",
"workspaceFolder": "/workspace/anubis",
"postStartCommand": "bash ./.devcontainer/poststart.sh",
"features": {
"ghcr.io/xe/devcontainer-features/ko:1.1.0": {},
"ghcr.io/devcontainers/features/github-cli:1": {}
},
"initializeCommand": "mkdir -p ${localEnv:HOME}${localEnv:USERPROFILE}/.local/share/atuin",
"customizations": {
"vscode": {
"extensions": [
"esbenp.prettier-vscode",
"ms-azuretools.vscode-containers",
"golang.go",
"unifiedjs.vscode-mdx",
"a-h.templ",
"redhat.vscode-yaml",
"streetsidesoftware.code-spell-checker"
],
"settings": {
"chat.instructionsFilesLocations": {
".github/copilot-instructions.md": true
}
}
}
}
}


@@ -0,0 +1,26 @@
services:
playwright:
image: mcr.microsoft.com/playwright:v1.52.0-noble
init: true
network_mode: service:workspace
command:
- /bin/sh
- -c
- npx -y playwright@1.52.0 run-server --port 9001 --host 0.0.0.0
valkey:
image: valkey/valkey:8
pull_policy: always
# VS Code workspace service
workspace:
image: ghcr.io/techarohq/anubis/devcontainer
build:
context: ..
dockerfile: .devcontainer/Dockerfile
volumes:
- ../:/workspace/anubis:cached
environment:
VALKEY_URL: redis://valkey:6379/0
#entrypoint: ["/usr/bin/sleep", "infinity"]
user: vscode


@@ -0,0 +1,9 @@
#!/usr/bin/env bash
pwd
npm ci &
go mod download &
go install ./utils/cmd/... &
wait

.github/FUNDING.yml vendored

@@ -1,2 +1,3 @@
patreon: cadey
github: xe
github: xe
liberapay: Xe

.github/ISSUE_TEMPLATE/bug_report.yaml vendored Normal file

@@ -0,0 +1,61 @@
name: Bug report
description: Create a report to help us improve
body:
- type: textarea
id: description-of-bug
attributes:
label: Describe the bug
description: A clear and concise description of what the bug is.
placeholder: I can reliably get an error when...
validations:
required: true
- type: textarea
id: steps-to-reproduce
attributes:
label: Steps to reproduce
description: |
Steps to reproduce the behavior.
placeholder: |
1. Go to the following url...
2. Click on...
3. You get the following error: ...
validations:
required: true
- type: textarea
id: expected-behavior
attributes:
label: Expected behavior
description: |
A clear and concise description of what you expected to happen.
Ideally also describe *why* you expect it to happen.
placeholder: Instead of displaying an error, it would...
validations:
required: true
- type: input
id: version-os
attributes:
label: Your operating system and its version.
description: Unsure? Visit https://whatsmyos.com/
placeholder: Android 13
validations:
required: true
- type: input
id: version-browser
attributes:
label: Your browser and its version.
description: Unsure? Visit https://www.whatsmybrowser.org/
placeholder: Firefox 142
validations:
required: true
- type: textarea
id: additional-context
attributes:
label: Additional context
description: Add any other context about the problem here.

.github/ISSUE_TEMPLATE/config.yml vendored Normal file

@@ -0,0 +1,5 @@
blank_issues_enabled: false
contact_links:
- name: Security
url: https://techaro.lol/contact
about: Do not file security reports here. Email security@techaro.lol.


@@ -0,0 +1,39 @@
name: Feature request
description: Suggest an idea for this project
title: '[Feature request] '
body:
- type: textarea
id: description-of-bug
attributes:
label: Is your feature request related to a problem? Please describe.
description: A clear and concise description of what the problem is that made you submit this report.
placeholder: I am always frustrated, when...
validations:
required: true
- type: textarea
id: description-of-solution
attributes:
label: Solution you would like.
description: A clear and concise description of what you want to happen.
placeholder: Instead of behaving like this, there should be...
validations:
required: true
- type: textarea
id: alternatives
attributes:
label: Describe alternatives you have considered.
description: A clear and concise description of any alternative solutions or features you have considered.
placeholder: Another workaround that would work, is...
validations:
required: false
- type: textarea
id: additional-context
attributes:
label: Additional context
description: Add any other context (such as mock-ups, proof of concepts or screenshots) about the feature request here.
validations:
required: false


@@ -9,3 +9,4 @@ Checklist:
- [ ] Added a description of the changes to the `[Unreleased]` section of docs/docs/CHANGELOG.md
- [ ] Added test cases to [the relevant parts of the codebase](https://anubis.techaro.lol/docs/developer/code-quality)
- [ ] Ran integration tests `npm run test:integration` (unsupported on Windows, please use WSL)
- [ ] All of my commits have [verified signatures](https://anubis.techaro.lol/docs/developer/signed-commits)


@@ -3,3 +3,23 @@ https
ssh
ubuntu
workarounds
rjack
msgbox
xeact
ABee
tencent
maintnotifications
azurediamond
cooldown
verifyfcrdns
Spintax
spintax
clampip
pseudoprofound
reimagining
iocaine
admins
fout
iplist
NArg
blocklists


@@ -84,8 +84,17 @@
^\Q.github/workflows/spelling.yml\E$
^data/crawlers/
^docs/blog/tags\.yml$
^docs/docs/user/known-instances.md$
^docs/manifest/.*$
^docs/static/\.nojekyll$
^internal/glob/glob_test.go$
^internal/honeypot/naive/affirmations\.txt$
^internal/honeypot/naive/spintext\.txt$
^internal/honeypot/naive/titles\.txt$
^lib/config/testdata/bad/unparseable\.json$
^lib/localization/.*_test.go$
^lib/localization/locales/.*\.json$
^lib/policy/config/testdata/bad/unparseable\.json$
^test/.*$
ignore$
robots.txt


@@ -1,31 +1,44 @@
acs
aeacus
Actorified
actorifiedstore
actorify
Aibrew
alibaba
alrest
amazonbot
anthro
anubis
anubistest
apk
apnic
APNICRANDNETAU
Applebot
archlinux
arpa
asnc
asnchecker
asns
aspirational
atuin
azuretools
badregexes
bbolt
bdba
berr
bezier
bingbot
bitcoin
blogging
Bitcoin
bitrate
Bluesky
blueskybot
boi
Bokm
botnet
botstopper
BPort
Brightbot
broked
buildah
byteslice
Bytespider
cachebuster
cachediptoasn
@@ -33,7 +46,7 @@ Caddyfile
caninetools
Cardyb
celchecker
CELPHASE
celphase
cerr
certresolver
cespare
@@ -42,24 +55,31 @@ cgr
chainguard
chall
challengemozilla
challengetest
checkpath
checkresult
chibi
cidranger
ckie
cloudflare
Codespaces
confd
connnection
containerbuild
containerregistry
coreutils
Cotoyogi
CRDs
Cromite
crt
Cscript
daemonizing
databento
dayjob
DDOS
Debian
debrpm
decaymap
devcontainers
Diffbot
discordapp
discordbot
@@ -67,13 +87,19 @@ distros
dnf
dnsbl
dnserr
DNSTTL
domainhere
dracula
dronebl
droneblresponse
dropin
dsilence
duckduckbot
eerror
ellenjoe
emacs
enbyware
etld
everyones
evilbot
evilsite
@@ -83,13 +109,19 @@ externalfetcher
extldflags
facebookgo
Factset
fahedouch
fastcgi
FCr
fcrdns
fediverse
ffprobe
financials
finfos
Firecrawl
flagenv
Fordola
forgejo
forwardauth
fsys
fullchain
gaissmai
@@ -97,54 +129,69 @@ Galvus
geoip
geoipchecker
gha
GHSA
Ghz
gipc
gitea
GLM
godotenv
goland
gomod
goodbot
googlebot
gopsutil
govulncheck
goyaml
GPG
GPT
gptbot
Graphene
grpcprom
grw
gzw
Hashcash
hashrate
headermap
healthcheck
hebis
healthz
hec
helpdesk
Hetzner
hmc
homelab
hostable
htmlc
htmx
httpdebug
Huawei
huawei
hypertext
iaskspider
iaso
iat
ifm
Imagesift
imgproxy
impressum
inbox
ingressed
inp
internets
IPTo
iptoasn
isp
iss
isset
ivh
Jenomis
JGit
jhjj
joho
journalctl
jshelter
JWTs
kagi
kagibot
keikaku
Keyfunc
keypair
KHTML
kinda
@@ -153,17 +200,18 @@ lcj
ldflags
letsencrypt
Lexentale
lfc
lgbt
licend
licstart
lightpanda
LIMSA
limsa
Linting
linuxbrew
listor
LLU
loadbalancer
lol
LOMINSA
lominsa
maintainership
malware
mcr
@@ -171,25 +219,39 @@ memes
metarefresh
metrix
mimi
minica
Minfilia
mistralai
mnt
Mojeek
mojeekbot
mozilla
myclient
mymaster
mypass
myuser
nbf
nepeat
netsurf
nginx
nicksnyder
nikandfor
nobots
NONINFRINGEMENT
nosleep
nullglob
oci
OCOB
ogtags
ogtag
oklch
omgili
omgilibot
openai
opendns
opengraph
openrc
oswald
pag
pagegen
palemoon
Pangu
parseable
@@ -203,11 +265,15 @@ pipefail
pki
podkova
podman
Postgre
poststart
prebaked
privkey
promauto
promhttp
proofofwork
publicsuffix
purejs
pwcmd
pwuser
qualys
@@ -216,41 +282,49 @@ qwantbot
rac
rawler
rcvar
redhat
redir
redirectscheme
refactors
relayd
remoteip
reputational
reqmeta
Rhul
risc
ruleset
runlevels
RUnlock
runtimedir
runtimedirectory
Ryzen
sas
sasl
Scumm
screenshots
searchbot
searx
sebest
secretplans
selfsigned
Semrush
Seo
setsebool
shellcheck
shirou
shoneypot
shopt
Sidetrade
simprint
sitemap
sls
sni
Sourceware
snipster
Spambot
spammer
sparkline
spyderbot
srv
stackoverflow
startprecmd
stoppostcmd
storetest
subgrid
subr
subrequest
@@ -258,8 +332,12 @@ SVCNAME
tagline
tarballs
tarrif
taviso
tbn
tbr
techaro
techarohq
telegrambot
templ
templruntime
testarea
@@ -268,34 +346,42 @@ thoth
thothmock
Tik
Timpibot
TLog
traefik
trunc
uberspace
unixhttpd
Unbreak
unbreakdocker
unifiedjs
unmarshal
unparseable
uuidgen
uvx
UXP
valkey
Varis
Velen
vendored
vhosts
videotest
waitloop
vkbot
VKE
vnd
VPS
Vultr
weblate
webmaster
webpage
websecure
websites
Webzio
whois
wildbase
withthothmock
wolfbeast
wordpress
Workaround
workaround
workdir
wpbot
xcaddy
Xeact
XCircle
xeiaso
xeserv
xesite
@@ -304,13 +390,17 @@ xff
XForwarded
XNG
XOB
XOriginal
XReal
Y'shtola
yae
YAMLTo
Yda
yeet
yeetfile
yourdomain
yoursite
yyz
Zenos
zizmor
zombocom
zos


@@ -132,3 +132,7 @@ go install(?:\s+[a-z]+\.[-@\w/.]+)+
# hit-count: 1 file-count: 1
# microsoft
\b(?:https?://|)(?:(?:(?:blogs|download\.visualstudio|docs|msdn2?|research)\.|)microsoft|blogs\.msdn)\.co(?:m|\.\w\w)/[-_a-zA-Z0-9()=./%]*
# hit-count: 1 file-count: 1
# data url
\bdata:[-a-zA-Z=;:/0-9+]*,\S*


@@ -8,6 +8,8 @@ updates:
github-actions:
patterns:
- "*"
cooldown:
default-days: 7
- package-ecosystem: gomod
directory: /
@@ -17,6 +19,8 @@ updates:
gomod:
patterns:
- "*"
cooldown:
default-days: 7
- package-ecosystem: npm
directory: /
@@ -26,3 +30,5 @@ updates:
npm:
patterns:
- "*"
cooldown:
default-days: 7


@@ -0,0 +1,72 @@
name: Asset Build Verification
on:
push:
branches: ["main"]
pull_request:
branches: ["main"]
permissions:
contents: read
jobs:
asset_verification:
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: build essential
run: |
sudo apt-get update
sudo apt-get install -y build-essential
- uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
with:
node-version: '24.11.0'
- uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: '1.25.4'
- name: install node deps
run: |
npm ci
- name: Check for uncommitted changes before asset build
id: check-changes-before
run: |
if [[ -n $(git status --porcelain) ]]; then
echo "has_changes=true" >> $GITHUB_OUTPUT
else
echo "has_changes=false" >> $GITHUB_OUTPUT
fi
- name: Fail if there are uncommitted changes before build
if: steps.check-changes-before.outputs.has_changes == 'true'
run: |
echo "There are uncommitted changes before running npm run assets"
git status
exit 1
- name: Run asset build
run: |
npm run assets
- name: Check for uncommitted changes after asset build
id: check-changes-after
run: |
if [[ -n $(git status --porcelain) ]]; then
echo "has_changes=true" >> $GITHUB_OUTPUT
else
echo "has_changes=false" >> $GITHUB_OUTPUT
fi
- name: Fail if assets generated changes
if: steps.check-changes-after.outputs.has_changes == 'true'
run: |
echo "npm run assets generated uncommitted changes. This indicates the repository has outdated generated files."
echo "Please run 'npm run assets' locally and commit the changes."
git status
git diff
exit 1


@@ -2,7 +2,7 @@ name: Docker image builds (pull requests)
on:
pull_request:
branches: [ "main" ]
branches: ["main"]
env:
DOCKER_METADATA_SET_OUTPUT_ENV: "true"
@@ -15,39 +15,29 @@ jobs:
runs-on: ubuntu-24.04
steps:
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-tags: true
fetch-depth: 0
persist-credentials: false
- name: Set up Homebrew
uses: Homebrew/actions/setup-homebrew@master
- name: Setup Homebrew cellar cache
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: |
/home/linuxbrew/.linuxbrew/Cellar
/home/linuxbrew/.linuxbrew/bin
/home/linuxbrew/.linuxbrew/etc
/home/linuxbrew/.linuxbrew/include
/home/linuxbrew/.linuxbrew/lib
/home/linuxbrew/.linuxbrew/opt
/home/linuxbrew/.linuxbrew/sbin
/home/linuxbrew/.linuxbrew/share
/home/linuxbrew/.linuxbrew/var
key: ${{ runner.os }}-go-homebrew-cellar-${{ hashFiles('go.sum') }}
restore-keys: |
${{ runner.os }}-go-homebrew-cellar-
- name: Install Brew dependencies
- name: build essential
run: |
brew bundle
sudo apt-get update
sudo apt-get install -y build-essential
- uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
with:
node-version: '24.11.0'
- uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: '1.25.4'
- uses: ko-build/setup-ko@d006021bd0c28d1ce33a07e7943d48b079944c8d # v0.9
- name: Docker meta
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
uses: docker/metadata-action@c299e40c65443455700f0fdfc63efafe5b349051 # v5.10.0
with:
images: ghcr.io/${{ github.repository }}


@@ -21,42 +21,32 @@ jobs:
runs-on: ubuntu-24.04
steps:
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-tags: true
fetch-depth: 0
persist-credentials: false
- name: build essential
run: |
sudo apt-get update
sudo apt-get install -y build-essential
- name: Set lowercase image name
run: |
echo "IMAGE=ghcr.io/${GITHUB_REPOSITORY,,}" >> $GITHUB_ENV
- name: Set up Homebrew
uses: Homebrew/actions/setup-homebrew@master
- name: Setup Homebrew cellar cache
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
- uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
with:
path: |
/home/linuxbrew/.linuxbrew/Cellar
/home/linuxbrew/.linuxbrew/bin
/home/linuxbrew/.linuxbrew/etc
/home/linuxbrew/.linuxbrew/include
/home/linuxbrew/.linuxbrew/lib
/home/linuxbrew/.linuxbrew/opt
/home/linuxbrew/.linuxbrew/sbin
/home/linuxbrew/.linuxbrew/share
/home/linuxbrew/.linuxbrew/var
key: ${{ runner.os }}-go-homebrew-cellar-${{ hashFiles('go.sum') }}
restore-keys: |
${{ runner.os }}-go-homebrew-cellar-
node-version: '24.11.0'
- uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: '1.25.4'
- name: Install Brew dependencies
run: |
brew bundle
- uses: ko-build/setup-ko@d006021bd0c28d1ce33a07e7943d48b079944c8d # v0.9
- name: Log into registry
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -64,7 +54,7 @@ jobs:
- name: Docker meta
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
uses: docker/metadata-action@c299e40c65443455700f0fdfc63efafe5b349051 # v5.10.0
with:
images: ${{ env.IMAGE }}
@@ -78,7 +68,7 @@ jobs:
SLOG_LEVEL: debug
- name: Generate artifact attestation
uses: actions/attest-build-provenance@e8998f949152b193b063cb0ec769d69d929409be # v2.4.0
uses: actions/attest-build-provenance@00014ed6ed5efc5b1ab7f7f34a39eb55d41aa4f8 # v3.1.0
with:
subject-name: ${{ env.IMAGE }}
subject-digest: ${{ steps.build.outputs.digest }}


@@ -17,15 +17,15 @@ jobs:
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
- name: Log into registry
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: ghcr.io
username: techarohq
@@ -33,9 +33,12 @@ jobs:
- name: Docker meta
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
uses: docker/metadata-action@c299e40c65443455700f0fdfc63efafe5b349051 # v5.10.0
with:
images: ghcr.io/techarohq/anubis/docs
tags: |
type=sha,enable=true,priority=100,prefix=,suffix=,format=long
main
- name: Build and push
id: build
@@ -49,15 +52,15 @@ jobs:
platforms: linux/amd64
push: true
- name: Apply k8s manifests to aeacus
uses: actions-hub/kubectl@d50394b7d704525f93faefce1e65a6329ff67271 # v1.33.2
- name: Apply k8s manifests to limsa lominsa
uses: actions-hub/kubectl@f6d776bd78f4523e36d6c74d34f9941c242b2213 # v1.35.0
env:
KUBE_CONFIG: ${{ secrets.LIMSA_LOMINSA_KUBECONFIG }}
with:
args: apply -k docs/manifest
- name: Apply k8s manifests to aeacus
uses: actions-hub/kubectl@d50394b7d704525f93faefce1e65a6329ff67271 # v1.33.2
- name: Apply k8s manifests to limsa lominsa
uses: actions-hub/kubectl@f6d776bd78f4523e36d6c74d34f9941c242b2213 # v1.35.0
env:
KUBE_CONFIG: ${{ secrets.LIMSA_LOMINSA_KUBECONFIG }}
with:


@@ -2,7 +2,7 @@ name: Docs test build
on:
pull_request:
branches: [ "main" ]
branches: ["main"]
permissions:
contents: read
@@ -13,18 +13,21 @@ jobs:
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
- name: Docker meta
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
uses: docker/metadata-action@c299e40c65443455700f0fdfc63efafe5b349051 # v5.10.0
with:
images: ghcr.io/${{ github.repository }}/docs
images: ghcr.io/techarohq/anubis/docs
tags: |
type=sha,enable=true,priority=100,prefix=,suffix=,format=long
main
- name: Build and push
id: build

.github/workflows/go-mod-tidy-check.yml vendored Normal file

@@ -0,0 +1,76 @@
name: Go Mod Tidy Check
on:
push:
branches: ["main"]
pull_request:
branches: ["main"]
permissions:
contents: read
jobs:
go_mod_tidy_check:
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: '1.25.4'
- name: Check go.mod and go.sum in main directory
run: |
# Store original file state
cp go.mod go.mod.orig
cp go.sum go.sum.orig
# Run go mod tidy
go mod tidy
# Check if files changed
if ! diff -q go.mod.orig go.mod > /dev/null 2>&1; then
echo "ERROR: go.mod in main directory has changed after running 'go mod tidy'"
echo "Please run 'go mod tidy' locally and commit the changes"
diff go.mod.orig go.mod
exit 1
fi
if ! diff -q go.sum.orig go.sum > /dev/null 2>&1; then
echo "ERROR: go.sum in main directory has changed after running 'go mod tidy'"
echo "Please run 'go mod tidy' locally and commit the changes"
diff go.sum.orig go.sum
exit 1
fi
echo "SUCCESS: go.mod and go.sum in main directory are tidy"
- name: Check go.mod and go.sum in test directory
run: |
cd test
# Store original file state
cp go.mod go.mod.orig
cp go.sum go.sum.orig
# Run go mod tidy
go mod tidy
# Check if files changed
if ! diff -q go.mod.orig go.mod > /dev/null 2>&1; then
echo "ERROR: go.mod in test directory has changed after running 'go mod tidy'"
echo "Please run 'go mod tidy' locally and commit the changes"
diff go.mod.orig go.mod
exit 1
fi
if ! diff -q go.sum.orig go.sum > /dev/null 2>&1; then
echo "ERROR: go.sum in test directory has changed after running 'go mod tidy'"
echo "Please run 'go mod tidy' locally and commit the changes"
diff go.sum.orig go.sum
exit 1
fi
echo "SUCCESS: go.mod and go.sum in test directory are tidy"


@@ -2,9 +2,9 @@ name: Go
on:
push:
branches: [ "main" ]
branches: ["main"]
pull_request:
branches: [ "main" ]
branches: ["main"]
permissions:
contents: read
@@ -15,77 +15,50 @@ jobs:
#runs-on: alrest-techarohq
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: build essential
run: |
sudo apt-get update
sudo apt-get install -y build-essential
- name: build essential
run: |
sudo apt-get update
sudo apt-get install -y build-essential
- name: Set up Homebrew
uses: Homebrew/actions/setup-homebrew@master
- uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
with:
node-version: '24.11.0'
- uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: '1.25.4'
- name: Setup Homebrew cellar cache
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: |
/home/linuxbrew/.linuxbrew/Cellar
/home/linuxbrew/.linuxbrew/bin
/home/linuxbrew/.linuxbrew/etc
/home/linuxbrew/.linuxbrew/include
/home/linuxbrew/.linuxbrew/lib
/home/linuxbrew/.linuxbrew/opt
/home/linuxbrew/.linuxbrew/sbin
/home/linuxbrew/.linuxbrew/share
/home/linuxbrew/.linuxbrew/var
key: ${{ runner.os }}-go-homebrew-cellar-${{ hashFiles('go.sum') }}
restore-keys: |
${{ runner.os }}-go-homebrew-cellar-
- name: Cache playwright binaries
uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
id: playwright-cache
with:
path: |
~/.cache/ms-playwright
key: ${{ runner.os }}-playwright-${{ hashFiles('**/go.sum') }}
- name: Install Brew dependencies
run: |
brew bundle
- name: install node deps
run: |
npm ci
- name: Setup Golang caches
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-golang-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-golang-
- name: install playwright browsers
run: |
npx --no-install playwright@1.52.0 install --with-deps
npx --no-install playwright@1.52.0 run-server --port 9001 &
- name: Cache playwright binaries
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
id: playwright-cache
with:
path: |
~/.cache/ms-playwright
key: ${{ runner.os }}-playwright-${{ hashFiles('**/go.sum') }}
- name: Build
run: npm run build
- name: install node deps
run: |
npm ci
- name: Test
run: npm run test
- name: install playwright browsers
run: |
npx --no-install playwright@1.52.0 install --with-deps
npx --no-install playwright@1.52.0 run-server --port 9001 &
- name: Lint with staticcheck
uses: dominikh/staticcheck-action@024238d2898c874f26d723e7d0ff4308c35589a2 # v1.4.0
with:
version: "latest"
- name: Build
run: npm run build
- name: Test
run: npm run test
- name: Lint with staticcheck
uses: dominikh/staticcheck-action@fe1dd0c3658873b46f8c9bb3291096a617310ca6 # v1.3.1
with:
version: "latest"
- name: Govulncheck
run: |
go tool govulncheck ./...
- name: Govulncheck
run: |
go tool govulncheck ./...


@@ -1,8 +1,9 @@
name: Package builds (stable)
on:
release:
types: [published]
workflow_dispatch:
# release:
# types: [published]
permissions:
contents: write
@@ -13,67 +14,40 @@ jobs:
#runs-on: alrest-techarohq
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
fetch-tags: true
fetch-depth: 0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
fetch-tags: true
fetch-depth: 0
- name: build essential
run: |
sudo apt-get update
sudo apt-get install -y build-essential
- name: build essential
run: |
sudo apt-get update
sudo apt-get install -y build-essential
- name: Set up Homebrew
uses: Homebrew/actions/setup-homebrew@master
- uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
with:
node-version: '24.11.0'
- uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: '1.25.4'
- name: Setup Homebrew cellar cache
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: |
/home/linuxbrew/.linuxbrew/Cellar
/home/linuxbrew/.linuxbrew/bin
/home/linuxbrew/.linuxbrew/etc
/home/linuxbrew/.linuxbrew/include
/home/linuxbrew/.linuxbrew/lib
/home/linuxbrew/.linuxbrew/opt
/home/linuxbrew/.linuxbrew/sbin
/home/linuxbrew/.linuxbrew/share
/home/linuxbrew/.linuxbrew/var
key: ${{ runner.os }}-go-homebrew-cellar-${{ hashFiles('go.sum') }}
restore-keys: |
${{ runner.os }}-go-homebrew-cellar-
- name: install node deps
run: |
npm ci
- name: Install Brew dependencies
run: |
brew bundle
- name: Build Packages
run: |
go tool yeet
- name: Setup Golang caches
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-golang-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-golang-
- name: install node deps
run: |
npm ci
- name: Build Packages
run: |
go tool yeet
- name: Upload released artifacts
env:
GITHUB_TOKEN: ${{ github.TOKEN }}
RELEASE_VERSION: ${{github.event.release.tag_name}}
shell: bash
run: |
RELEASE="${RELEASE_VERSION}"
cd var
for file in *; do
gh release upload $RELEASE $file
done
- name: Upload released artifacts
env:
GITHUB_TOKEN: ${{ github.TOKEN }}
RELEASE_VERSION: ${{github.event.release.tag_name}}
shell: bash
run: |
RELEASE="${RELEASE_VERSION}"
cd var
for file in *; do
gh release upload $RELEASE $file
done


@@ -2,9 +2,9 @@ name: Package builds (unstable)
on:
push:
branches: [ "main" ]
branches: ["main"]
pull_request:
branches: [ "main" ]
branches: ["main"]
permissions:
contents: read
@@ -15,60 +15,33 @@ jobs:
#runs-on: alrest-techarohq
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
fetch-tags: true
fetch-depth: 0
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
fetch-tags: true
fetch-depth: 0
- name: build essential
run: |
sudo apt-get update
sudo apt-get install -y build-essential
- name: build essential
run: |
sudo apt-get update
sudo apt-get install -y build-essential
- name: Set up Homebrew
uses: Homebrew/actions/setup-homebrew@master
- uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
with:
node-version: '24.11.0'
- uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: '1.25.4'
- name: Setup Homebrew cellar cache
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: |
/home/linuxbrew/.linuxbrew/Cellar
/home/linuxbrew/.linuxbrew/bin
/home/linuxbrew/.linuxbrew/etc
/home/linuxbrew/.linuxbrew/include
/home/linuxbrew/.linuxbrew/lib
/home/linuxbrew/.linuxbrew/opt
/home/linuxbrew/.linuxbrew/sbin
/home/linuxbrew/.linuxbrew/share
/home/linuxbrew/.linuxbrew/var
key: ${{ runner.os }}-go-homebrew-cellar-${{ hashFiles('go.sum') }}
restore-keys: |
${{ runner.os }}-go-homebrew-cellar-
- name: install node deps
run: |
npm ci
- name: Install Brew dependencies
run: |
brew bundle
- name: Build Packages
run: |
go tool yeet
- name: Setup Golang caches
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-golang-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-golang-
- name: install node deps
run: |
npm ci
- name: Build Packages
run: |
go tool yeet
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: packages
path: var/*
- uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: packages
path: var/*

.github/workflows/smoke-tests.yml vendored Normal file

@@ -0,0 +1,64 @@
name: Smoke tests
on:
push:
branches: ["main"]
pull_request:
branches: ["main"]
permissions:
contents: read
jobs:
smoke-test:
strategy:
matrix:
test:
- default-config-macro
- docker-registry
- double_slash
- forced-language
- git-clone
- git-push
- healthcheck
- i18n
- log-file
- nginx
- palemoon/amd64
#- palemoon/i386
- robots_txt
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
with:
node-version: "24.11.0"
- uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: "1.25.4"
- uses: ko-build/setup-ko@d006021bd0c28d1ce33a07e7943d48b079944c8d # v0.9
- name: Install utils
run: |
go install ./utils/cmd/...
- name: Run test
run: |
cd test/${{ matrix.test }}
backoff-retry --try-count 10 ./test.sh
- name: Sanitize artifact name
if: always()
run: echo "ARTIFACT_NAME=${{ matrix.test }}" | sed 's|/|-|g' >> $GITHUB_ENV
- name: Upload artifact
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f
if: always()
with:
name: ${{ env.ARTIFACT_NAME }}
path: test/${{ matrix.test }}/var


@@ -18,19 +18,19 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-tags: true
fetch-depth: 0
persist-credentials: false
- name: Log into registry
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
- name: Build and push
run: |
cd ./test/ssh-ci


@@ -12,26 +12,34 @@ permissions:
jobs:
ssh:
if: github.repository == 'TecharoHQ/anubis'
runs-on: ubuntu-24.04
runs-on: alrest-techarohq
strategy:
matrix:
host:
- ubuntu@riscv64.techaro.lol
- ci@ppc64le.techaro.lol
- riscv64
- ppc64le
- aarch64-4k
- aarch64-16k
steps:
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-tags: true
fetch-depth: 0
persist-credentials: false
- name: Install CI target SSH key
uses: shimataro/ssh-key-action@d4fffb50872869abe2d9a9098a6d9c5aa7d16be4 # v2.7.0
with:
key: ${{ secrets.CI_SSH_KEY }}
name: id_rsa
known_hosts: ${{ secrets.CI_SSH_KNOWN_HOSTS }}
- uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: '1.25.4'
- name: Run CI
run: bash test/ssh-ci/rigging.sh ${{ matrix.host }}
run: go run ./utils/cmd/backoff-retry bash test/ssh-ci/rigging.sh ${{ matrix.host }}
env:
GITHUB_RUN_ID: ${{ github.run_id }}


@@ -16,12 +16,12 @@ jobs:
security-events: write
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Install the latest version of uv
uses: astral-sh/setup-uv@445689ea25e0de0a23313031f5fe577c74ae45a1 # v6.3.0
uses: astral-sh/setup-uv@681c641aba71e4a1c380be3ab5e12ad51f415867 # v7.1.6
- name: Run zizmor 🌈
run: uvx zizmor --format sarif . > results.sarif
@@ -29,7 +29,7 @@ jobs:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Upload SARIF file
uses: github/codeql-action/upload-sarif@ce28f5bb42b7a9f2c824e633a3f6ee835bab6858 # v3.29.0
uses: github/codeql-action/upload-sarif@5d4e8d1aca955e8d8589aabd499c5cae939e33c7 # v4.31.9
with:
sarif_file: results.sarif
category: zizmor

.gitignore vendored

@@ -20,3 +20,5 @@ node_modules
# how does this get here
doc/VERSION
web/static/locales/*.json

.vscode/extensions.json vendored Normal file

@@ -0,0 +1,11 @@
{
"recommendations": [
"esbenp.prettier-vscode",
"ms-azuretools.vscode-containers",
"golang.go",
"unifiedjs.vscode-mdx",
"a-h.templ",
"redhat.vscode-yaml",
"streetsidesoftware.code-spell-checker"
]
}

.vscode/launch.json vendored Normal file

@@ -0,0 +1,27 @@
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Launch Package",
"type": "go",
"request": "launch",
"mode": "auto",
"program": "${fileDirname}"
},
{
"name": "Anubis [dev]",
"command": "npm run dev",
"request": "launch",
"type": "node-terminal"
},
{
"name": "Start Docs",
"command": "cd docs && npm ci && npm run start",
"request": "launch",
"type": "node-terminal"
}
]
}


@@ -9,6 +9,7 @@
![GitHub go.mod Go version](https://img.shields.io/github/go-mod/go-version/TecharoHQ/anubis)
![language count](https://img.shields.io/github/languages/count/TecharoHQ/anubis)
![repo size](https://img.shields.io/github/repo-size/TecharoHQ/anubis)
[![GitHub Sponsors](https://img.shields.io/github/sponsors/Xe)](https://github.com/sponsors/Xe)
## Sponsors
@@ -19,6 +20,9 @@ Anubis is brought to you by sponsors and donors like:
<a href="https://www.raptorcs.com/content/base/products.html">
<img src="./docs/static/img/sponsors/raptor-computing-logo.webp" alt="Raptor Computing Systems" height=64 />
</a>
<a href="https://databento.com/?utm_source=anubis&utm_medium=sponsor&utm_campaign=anubis">
<img src="./docs/static/img/sponsors/databento-logo.webp" alt="Databento" />
</a>
### Gold Tier
@@ -40,6 +44,20 @@ Anubis is brought to you by sponsors and donors like:
<a href="https://wildbase.xyz/">
<img src="./docs/static/img/sponsors/wildbase-logo.webp" alt="Wildbase" height="64">
</a>
<a href="https://emma.pet">
<img
src="./docs/static/img/sponsors/nepeat-logo.webp"
alt="Cat eyes over the word Emma in a serif font"
height="64"
/>
</a>
<a href="https://fabulous.systems/">
<img
src="./docs/static/img/sponsors/fabulous-systems.webp"
alt="fabulous.systems"
height="64"
/>
</a>
## Overview
@@ -51,7 +69,7 @@ Anubis is a bit of a nuclear response. This will result in your website being bl
In most cases, you should not need this and can probably get by using Cloudflare to protect a given origin. However, for circumstances where you can't or won't use Cloudflare, Anubis is there for you.
If you want to try this out, connect to [anubis.techaro.lol](https://anubis.techaro.lol).
If you want to try this out, visit the Anubis documentation site at [anubis.techaro.lol](https://anubis.techaro.lol).
## Support

SECURITY.md Normal file

@@ -0,0 +1,13 @@
# Security Policy
Techaro follows the [Semver 2.0 scheme](https://semver.org/).
## Supported Versions
Techaro strives to support the two most recent minor versions of Anubis. Patches to those versions will be published as patch releases.
## Reporting a Vulnerability
Email security@techaro.lol with details on the vulnerability and reproduction steps. You will get a response as soon as possible.
Please take care to send your email as a mixed plaintext and HTML message. Messages with GPG signatures or that are plaintext only may be blocked by the spam filter.


@@ -1 +1 @@
1.20.0-pre2
1.24.0


@@ -11,12 +11,11 @@ var Version = "devel"
// CookieName is the name of the cookie that Anubis uses in order to validate
// access.
const CookieName = "techaro.lol-anubis-auth"
var CookieName = "techaro.lol-anubis"
// WithDomainCookieName is the name that is prepended to the per-domain cookie used when COOKIE_DOMAIN is set.
const WithDomainCookieName = "techaro.lol-anubis-auth-for-"
const TestCookieName = "techaro.lol-anubis-cookie-test-if-you-block-this-anubis-wont-work"
// TestCookieName is the name of the cookie that Anubis uses in order to check
// if cookies are enabled on the client's browser.
var TestCookieName = "techaro.lol-anubis-cookie-verification"
// CookieDefaultExpirationTime is the amount of time before the cookie/JWT expires.
const CookieDefaultExpirationTime = 7 * 24 * time.Hour
@@ -24,6 +23,9 @@ const CookieDefaultExpirationTime = 7 * 24 * time.Hour
// BasePrefix is a global prefix for all Anubis endpoints. Can be emptied to remove the prefix entirely.
var BasePrefix = ""
// PublicUrl is the externally accessible URL for this Anubis instance.
var PublicUrl = ""
// StaticPath is the location where all static Anubis assets are located.
const StaticPath = "/.within.website/x/cmd/anubis/"
@@ -33,3 +35,10 @@ const APIPrefix = "/.within.website/x/cmd/anubis/api/"
// DefaultDifficulty is the default "difficulty" (number of leading zeroes)
// that must be met by the client in order to pass the challenge.
const DefaultDifficulty = 4
// ForcedLanguage is the language used instead of the one from the request's Accept-Language header,
// if set.
var ForcedLanguage = ""
// UseSimplifiedExplanation can be set to true to use the simplified explanation
var UseSimplifiedExplanation = false


@@ -30,14 +30,15 @@ import (
"github.com/TecharoHQ/anubis"
"github.com/TecharoHQ/anubis/data"
"github.com/TecharoHQ/anubis/internal"
"github.com/TecharoHQ/anubis/internal/thoth"
libanubis "github.com/TecharoHQ/anubis/lib"
"github.com/TecharoHQ/anubis/lib/config"
botPolicy "github.com/TecharoHQ/anubis/lib/policy"
"github.com/TecharoHQ/anubis/lib/policy/config"
"github.com/TecharoHQ/anubis/lib/thoth"
"github.com/TecharoHQ/anubis/web"
"github.com/facebookgo/flagenv"
_ "github.com/joho/godotenv/autoload"
"github.com/prometheus/client_golang/prometheus/promhttp"
healthv1 "google.golang.org/grpc/health/grpc_health_v1"
)
var (
@@ -46,8 +47,16 @@ var (
bindNetwork = flag.String("bind-network", "tcp", "network family to bind HTTP to, e.g. unix, tcp")
challengeDifficulty = flag.Int("difficulty", anubis.DefaultDifficulty, "difficulty of the challenge")
cookieDomain = flag.String("cookie-domain", "", "if set, the top-level domain that the Anubis cookie will be valid for")
cookieDynamicDomain = flag.Bool("cookie-dynamic-domain", false, "if set, automatically set the cookie Domain value based on the request domain")
cookieExpiration = flag.Duration("cookie-expiration-time", anubis.CookieDefaultExpirationTime, "The amount of time the authorization cookie is valid for")
cookiePrefix = flag.String("cookie-prefix", anubis.CookieName, "prefix for browser cookies created by Anubis")
cookiePartitioned = flag.Bool("cookie-partitioned", false, "if true, sets the partitioned flag on Anubis cookies, enabling CHIPS support")
difficultyInJWT = flag.Bool("difficulty-in-jwt", false, "if true, adds a difficulty field in the JWT claims")
useSimplifiedExplanation = flag.Bool("use-simplified-explanation", false, "if true, replaces the text when clicking \"Why am I seeing this?\" with a more simplified text for a non-tech-savvy audience.")
forcedLanguage = flag.String("forced-language", "", "if set, this language is used instead of the one from the request's Accept-Language header")
hs512Secret = flag.String("hs512-secret", "", "secret used to sign JWTs, uses ed25519 if not set")
cookieSecure = flag.Bool("cookie-secure", true, "if true, sets the secure flag on Anubis cookies")
cookieSameSite = flag.String("cookie-same-site", "None", "sets the same site option on Anubis cookies, will auto-downgrade None to Lax if cookie-secure is false. Valid values are None, Lax, Strict, and Default.")
ed25519PrivateKeyHex = flag.String("ed25519-private-key-hex", "", "private key used to sign JWTs, if not set a random one will be assigned")
ed25519PrivateKeyHexFile = flag.String("ed25519-private-key-hex-file", "", "file name containing value for ed25519-private-key-hex")
metricsBind = flag.String("metrics-bind", ":9090", "network address to bind metrics to")
@@ -59,9 +68,10 @@ var (
slogLevel = flag.String("slog-level", "INFO", "logging level (see https://pkg.go.dev/log/slog#hdr-Levels)")
stripBasePrefix = flag.Bool("strip-base-prefix", false, "if true, strips the base prefix from requests forwarded to the target server")
target = flag.String("target", "http://localhost:3923", "target to reverse proxy to, set to an empty string to disable proxying when only using auth request")
targetSNI = flag.String("target-sni", "", "if set, the value of the TLS handshake hostname when forwarding requests to the target")
targetSNI = flag.String("target-sni", "", "if set, TLS handshake hostname when forwarding requests to the target, if set to auto, use Host header")
targetHost = flag.String("target-host", "", "if set, the value of the Host header when forwarding requests to the target")
targetInsecureSkipVerify = flag.Bool("target-insecure-skip-verify", false, "if true, skips TLS validation for the backend")
targetDisableKeepAlive = flag.Bool("target-disable-keepalive", false, "if true, disables HTTP keep-alive for the backend")
healthcheck = flag.Bool("healthcheck", false, "run a health check against Anubis")
useRemoteAddress = flag.Bool("use-remote-address", false, "read the client's IP address from the network request, useful for debugging and running Anubis on bare metal")
debugBenchmarkJS = flag.Bool("debug-benchmark-js", false, "respond to every request with a challenge for benchmarking hashrate")
@@ -71,11 +81,14 @@ var (
extractResources = flag.String("extract-resources", "", "if set, extract the static resources to the specified folder")
webmasterEmail = flag.String("webmaster-email", "", "if set, displays webmaster's email on the reject page for appeals")
versionFlag = flag.Bool("version", false, "print Anubis version")
publicUrl = flag.String("public-url", "", "the externally accessible URL for this Anubis instance, used for constructing redirect URLs (e.g., for forwardAuth).")
xffStripPrivate = flag.Bool("xff-strip-private", true, "if set, strip private addresses from X-Forwarded-For")
customRealIPHeader = flag.String("custom-real-ip-header", "", "if set, read remote IP from header of this name (in case your environment doesn't set X-Real-IP header)")
thothInsecure = flag.Bool("thoth-insecure", false, "if set, connect to Thoth over plain HTTP/2, don't enable this unless support told you to")
thothURL = flag.String("thoth-url", "", "if set, URL for Thoth, the IP reputation database for Anubis")
thothToken = flag.String("thoth-token", "", "if set, API token for Thoth, the IP reputation database for Anubis")
thothInsecure = flag.Bool("thoth-insecure", false, "if set, connect to Thoth over plain HTTP/2, don't enable this unless support told you to")
thothURL = flag.String("thoth-url", "", "if set, URL for Thoth, the IP reputation database for Anubis")
thothToken = flag.String("thoth-token", "", "if set, API token for Thoth, the IP reputation database for Anubis")
jwtRestrictionHeader = flag.String("jwt-restriction-header", "X-Real-IP", "If set, the JWT is only valid if the current value of this header matches the value when the JWT was created")
)
func keyFromHex(value string) (ed25519.PrivateKey, error) {
@@ -92,7 +105,7 @@ func keyFromHex(value string) (ed25519.PrivateKey, error) {
}
func doHealthCheck() error {
resp, err := http.Get("http://localhost" + *metricsBind + anubis.BasePrefix + "/metrics")
resp, err := http.Get("http://localhost" + *metricsBind + "/healthz")
if err != nil {
return fmt.Errorf("failed to fetch metrics: %w", err)
}
@@ -105,8 +118,57 @@ func doHealthCheck() error {
return nil
}
// parseBindNetFromAddr determines the bind network and address based on the given address.
func parseBindNetFromAddr(address string) (string, string) {
defaultScheme := "http://"
if !strings.Contains(address, "://") {
if strings.HasPrefix(address, ":") {
address = defaultScheme + "localhost" + address
} else {
address = defaultScheme + address
}
}
bindUri, err := url.Parse(address)
if err != nil {
log.Fatal(fmt.Errorf("failed to parse bind URL: %w", err))
}
switch bindUri.Scheme {
case "unix":
return "unix", bindUri.Path
case "tcp", "http", "https":
return "tcp", bindUri.Host
default:
log.Fatal(fmt.Errorf("unsupported network scheme %s in address %s", bindUri.Scheme, address))
}
return "", address
}
func parseSameSite(s string) http.SameSite {
switch strings.ToLower(s) {
case "none":
return http.SameSiteNoneMode
case "lax":
return http.SameSiteLaxMode
case "strict":
return http.SameSiteStrictMode
case "default":
return http.SameSiteDefaultMode
default:
log.Fatalf("invalid cookie same-site mode: %s, valid values are None, Lax, Strict, and Default", s)
}
return http.SameSiteDefaultMode
}
func setupListener(network string, address string) (net.Listener, string) {
formattedAddress := ""
if network == "" {
// keep compatibility
network, address = parseBindNetFromAddr(address)
}
switch network {
case "unix":
formattedAddress = "unix:" + address
@@ -146,7 +208,7 @@ func setupListener(network string, address string) (net.Listener, string) {
return listener, formattedAddress
}
func makeReverseProxy(target string, targetSNI string, targetHost string, insecureSkipVerify bool) (http.Handler, error) {
func makeReverseProxy(target string, targetSNI string, targetHost string, insecureSkipVerify bool, targetDisableKeepAlive bool) (http.Handler, error) {
targetUri, err := url.Parse(target)
if err != nil {
return nil, fmt.Errorf("failed to parse target URL: %w", err)
@@ -154,6 +216,10 @@ func makeReverseProxy(target string, targetSNI string, targetHost string, insecu
transport := http.DefaultTransport.(*http.Transport).Clone()
if targetDisableKeepAlive {
transport.DisableKeepAlives = true
}
// https://github.com/oauth2-proxy/oauth2-proxy/blob/4e2100a2879ef06aea1411790327019c1a09217c/pkg/upstream/http.go#L124
if targetUri.Scheme == "unix" {
// clean path up so we don't use the socket path in proxied requests
@@ -170,43 +236,34 @@ func makeReverseProxy(target string, targetSNI string, targetHost string, insecu
if insecureSkipVerify || targetSNI != "" {
transport.TLSClientConfig = &tls.Config{}
if insecureSkipVerify {
slog.Warn("TARGET_INSECURE_SKIP_VERIFY is set to true, TLS certificate validation will not be performed", "target", target)
transport.TLSClientConfig.InsecureSkipVerify = true
}
if targetSNI != "" {
transport.TLSClientConfig.ServerName = targetSNI
}
}
if insecureSkipVerify {
slog.Warn("TARGET_INSECURE_SKIP_VERIFY is set to true, TLS certificate validation will not be performed", "target", target)
transport.TLSClientConfig.InsecureSkipVerify = true
}
if targetSNI != "" && targetSNI != "auto" {
transport.TLSClientConfig.ServerName = targetSNI
}
rp := httputil.NewSingleHostReverseProxy(targetUri)
rp.Transport = transport
if targetHost != "" {
if targetHost != "" || targetSNI == "auto" {
originalDirector := rp.Director
rp.Director = func(req *http.Request) {
originalDirector(req)
req.Host = targetHost
if targetHost != "" {
req.Host = targetHost
}
if targetSNI == "auto" {
transport.TLSClientConfig.ServerName = req.Host
}
}
}
return rp, nil
}
func startDecayMapCleanup(ctx context.Context, s *libanubis.Server) {
ticker := time.NewTicker(1 * time.Hour)
defer ticker.Stop()
for {
select {
case <-ticker.C:
s.CleanupDecayMap()
case <-ctx.Done():
return
}
}
}
func main() {
flagenv.Parse()
flag.Parse()
@@ -216,7 +273,18 @@ func main() {
return
}
internal.InitSlog(*slogLevel)
internal.SetHealth("anubis", healthv1.HealthCheckResponse_NOT_SERVING)
lg := internal.InitSlog(*slogLevel, os.Stderr)
lg.Info("starting up Anubis")
if *healthcheck {
log.Println("running healthcheck")
if err := doHealthCheck(); err != nil {
log.Fatal(err)
}
return
}
if *extractResources != "" {
if err := extractEmbedFS(data.BotPolicies, ".", *extractResources); err != nil {
@@ -229,26 +297,39 @@ func main() {
return
}
// install signal handler
ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
defer stop()
wg := new(sync.WaitGroup)
if *metricsBind != "" {
wg.Add(1)
go metricsServer(ctx, *lg.With("subsystem", "metrics"), wg.Done)
}
var rp http.Handler
// when using Anubis via systemd and environment variables, it is not possible to set target to an empty string, only to a space
if strings.TrimSpace(*target) != "" {
var err error
rp, err = makeReverseProxy(*target, *targetSNI, *targetHost, *targetInsecureSkipVerify)
rp, err = makeReverseProxy(*target, *targetSNI, *targetHost, *targetInsecureSkipVerify, *targetDisableKeepAlive)
if err != nil {
log.Fatalf("can't make reverse proxy: %v", err)
}
}
ctx := context.Background()
if *cookieDomain != "" && *cookieDynamicDomain {
log.Fatalf("you can't set COOKIE_DOMAIN and COOKIE_DYNAMIC_DOMAIN at the same time")
}
// Thoth configuration
switch {
case *thothURL != "" && *thothToken == "":
slog.Warn("THOTH_URL is set but no THOTH_TOKEN is set")
lg.Warn("THOTH_URL is set but no THOTH_TOKEN is set")
case *thothURL == "" && *thothToken != "":
slog.Warn("THOTH_TOKEN is set but no THOTH_URL is set")
lg.Warn("THOTH_TOKEN is set but no THOTH_URL is set")
case *thothURL != "" && *thothToken != "":
slog.Debug("connecting to Thoth")
lg.Debug("connecting to Thoth")
thothClient, err := thoth.New(ctx, *thothURL, *thothToken, *thothInsecure)
if err != nil {
log.Fatalf("can't dial thoth at %s: %v", *thothURL, err)
@@ -257,10 +338,24 @@ func main() {
ctx = thoth.With(ctx, thothClient)
}
policy, err := libanubis.LoadPoliciesOrDefault(ctx, *policyFname, *challengeDifficulty)
lg.Info("loading policy file", "fname", *policyFname)
policy, err := libanubis.LoadPoliciesOrDefault(ctx, *policyFname, *challengeDifficulty, *slogLevel)
if err != nil {
log.Fatalf("can't parse policy file: %v", err)
}
lg = policy.Logger
lg.Debug("swapped to new logger")
slog.SetDefault(lg)
// Warn if persistent storage is used without a configured signing key
if policy.Store.IsPersistent() {
if *hs512Secret == "" && *ed25519PrivateKeyHex == "" && *ed25519PrivateKeyHexFile == "" {
lg.Warn("[misconfiguration] persistent storage backend is configured, but no private key is set. " +
"Challenges will be invalidated when Anubis restarts. " +
"Set HS512_SECRET, ED25519_PRIVATE_KEY_HEX, or ED25519_PRIVATE_KEY_HEX_FILE to ensure challenges survive service restarts. " +
"See: https://anubis.techaro.lol/docs/admin/installation#key-generation")
}
}
ruleErrorIDs := make(map[string]string)
for _, rule := range policy.Bots {
@@ -290,11 +385,15 @@ func main() {
"this may result in unexpected behavior")
}
var priv ed25519.PrivateKey
if *ed25519PrivateKeyHex != "" && *ed25519PrivateKeyHexFile != "" {
var ed25519Priv ed25519.PrivateKey
if *hs512Secret != "" && (*ed25519PrivateKeyHex != "" || *ed25519PrivateKeyHexFile != "") {
log.Fatal("do not specify both HS512 and ED25519 secrets")
} else if *hs512Secret != "" {
ed25519Priv = ed25519.PrivateKey(*hs512Secret)
} else if *ed25519PrivateKeyHex != "" && *ed25519PrivateKeyHexFile != "" {
log.Fatal("do not specify both ED25519_PRIVATE_KEY_HEX and ED25519_PRIVATE_KEY_HEX_FILE")
} else if *ed25519PrivateKeyHex != "" {
priv, err = keyFromHex(*ed25519PrivateKeyHex)
ed25519Priv, err = keyFromHex(*ed25519PrivateKeyHex)
if err != nil {
log.Fatalf("failed to parse and validate ED25519_PRIVATE_KEY_HEX: %v", err)
}
@@ -304,17 +403,17 @@ func main() {
log.Fatalf("failed to read ED25519_PRIVATE_KEY_HEX_FILE %s: %v", *ed25519PrivateKeyHexFile, err)
}
priv, err = keyFromHex(string(bytes.TrimSpace(hexFile)))
ed25519Priv, err = keyFromHex(string(bytes.TrimSpace(hexFile)))
if err != nil {
log.Fatalf("failed to parse and validate content of ED25519_PRIVATE_KEY_HEX_FILE: %v", err)
}
} else {
_, priv, err = ed25519.GenerateKey(rand.Reader)
_, ed25519Priv, err = ed25519.GenerateKey(rand.Reader)
if err != nil {
log.Fatalf("failed to generate ed25519 key: %v", err)
}
slog.Warn("generating random key, Anubis will have strange behavior when multiple instances are behind the same load balancer target, for more information: see https://anubis.techaro.lol/docs/admin/installation#key-generation")
lg.Warn("generating random key, Anubis will have strange behavior when multiple instances are behind the same load balancer target, for more information: see https://anubis.techaro.lol/docs/admin/installation#key-generation")
}
var redirectDomainsList []string
@@ -328,9 +427,14 @@ func main() {
redirectDomainsList = append(redirectDomainsList, strings.TrimSpace(domain))
}
} else {
slog.Warn("REDIRECT_DOMAINS is not set, Anubis will only redirect to the same domain a request is coming from, see https://anubis.techaro.lol/docs/admin/configuration/redirect-domains")
lg.Warn("REDIRECT_DOMAINS is not set, Anubis will only redirect to the same domain a request is coming from, see https://anubis.techaro.lol/docs/admin/configuration/redirect-domains")
}
anubis.CookieName = *cookiePrefix + "-auth"
anubis.TestCookieName = *cookiePrefix + "-cookie-verification"
anubis.ForcedLanguage = *forcedLanguage
anubis.UseSimplifiedExplanation = *useSimplifiedExplanation
// If OpenGraph configuration values are not set in the config file, use the
// values from flags / envvars.
if !policy.OpenGraph.Enabled {
@@ -341,43 +445,46 @@ func main() {
}
s, err := libanubis.New(libanubis.Options{
BasePrefix: *basePrefix,
StripBasePrefix: *stripBasePrefix,
Next: rp,
Policy: policy,
ServeRobotsTXT: *robotsTxt,
PrivateKey: priv,
CookieDomain: *cookieDomain,
CookieExpiration: *cookieExpiration,
CookiePartitioned: *cookiePartitioned,
RedirectDomains: redirectDomainsList,
Target: *target,
WebmasterEmail: *webmasterEmail,
BasePrefix: *basePrefix,
StripBasePrefix: *stripBasePrefix,
Next: rp,
Policy: policy,
TargetHost: *targetHost,
TargetSNI: *targetSNI,
TargetInsecureSkipVerify: *targetInsecureSkipVerify,
ServeRobotsTXT: *robotsTxt,
ED25519PrivateKey: ed25519Priv,
HS512Secret: []byte(*hs512Secret),
CookieDomain: *cookieDomain,
CookieDynamicDomain: *cookieDynamicDomain,
CookieExpiration: *cookieExpiration,
CookiePartitioned: *cookiePartitioned,
RedirectDomains: redirectDomainsList,
Target: *target,
WebmasterEmail: *webmasterEmail,
OpenGraph: policy.OpenGraph,
CookieSecure: *cookieSecure,
CookieSameSite: parseSameSite(*cookieSameSite),
PublicUrl: *publicUrl,
JWTRestrictionHeader: *jwtRestrictionHeader,
Logger: policy.Logger.With("subsystem", "anubis"),
DifficultyInJWT: *difficultyInJWT,
})
if err != nil {
log.Fatalf("can't construct libanubis.Server: %v", err)
}
wg := new(sync.WaitGroup)
// install signal handler
ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
defer stop()
if *metricsBind != "" {
wg.Add(1)
go metricsServer(ctx, wg.Done)
}
go startDecayMapCleanup(ctx, s)
var h http.Handler
h = s
h = internal.CustomRealIPHeader(*customRealIPHeader, h)
h = internal.RemoteXRealIP(*useRemoteAddress, *bindNetwork, h)
h = internal.XForwardedForToXRealIP(h)
h = internal.XForwardedForUpdate(*xffStripPrivate, h)
h = internal.JA4H(h)
srv := http.Server{Handler: h, ErrorLog: internal.GetFilteredHTTPLogger()}
listener, listenerUrl := setupListener(*bindNetwork, *bind)
slog.Info(
lg.Info(
"listening",
"url", listenerUrl,
"difficulty", *challengeDifficulty,
@@ -391,6 +498,7 @@ func main() {
"base-prefix", *basePrefix,
"cookie-expiration-time", *cookieExpiration,
"rule-error-ids", ruleErrorIDs,
"public-url", *publicUrl,
)
go func() {
@@ -402,29 +510,41 @@ func main() {
}
}()
internal.SetHealth("anubis", healthv1.HealthCheckResponse_SERVING)
if err := srv.Serve(listener); !errors.Is(err, http.ErrServerClosed) {
log.Fatal(err)
}
wg.Wait()
}
func metricsServer(ctx context.Context, done func()) {
func metricsServer(ctx context.Context, lg slog.Logger, done func()) {
defer done()
mux := http.NewServeMux()
mux.Handle(anubis.BasePrefix+"/metrics", promhttp.Handler())
mux.Handle("/metrics", promhttp.Handler())
mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
st, ok := internal.GetHealth("anubis")
if !ok {
slog.Error("health service anubis does not exist, file a bug")
}
switch st {
case healthv1.HealthCheckResponse_NOT_SERVING:
http.Error(w, "NOT OK", http.StatusInternalServerError)
return
case healthv1.HealthCheckResponse_SERVING:
fmt.Fprintln(w, "OK")
return
default:
http.Error(w, "UNKNOWN", http.StatusFailedDependency)
return
}
})
srv := http.Server{Handler: mux, ErrorLog: internal.GetFilteredHTTPLogger()}
listener, metricsUrl := setupListener(*metricsBindNetwork, *metricsBind)
slog.Debug("listening for metrics", "url", metricsUrl)
if *healthcheck {
log.Println("running healthcheck")
if err := doHealthCheck(); err != nil {
log.Fatal(err)
}
return
}
lg.Debug("listening for metrics", "url", metricsUrl)
go func() {
<-ctx.Done()
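The `keyFromHex` helper called in the key-loading logic above is not shown in this diff. A plausible standalone sketch (an assumption about its behavior, not the project's actual implementation) would accept either a 32-byte ed25519 seed or a full 64-byte private key in hex:

```go
package main

import (
	"crypto/ed25519"
	"encoding/hex"
	"fmt"
)

// keyFromHex decodes a hex-encoded ed25519 key. Hypothetical sketch:
// a 32-byte value is treated as a seed, a 64-byte value as a full
// private key; anything else is rejected.
func keyFromHex(s string) (ed25519.PrivateKey, error) {
	b, err := hex.DecodeString(s)
	if err != nil {
		return nil, fmt.Errorf("invalid hex: %w", err)
	}
	switch len(b) {
	case ed25519.SeedSize: // 32 bytes: derive the full key from the seed
		return ed25519.NewKeyFromSeed(b), nil
	case ed25519.PrivateKeySize: // 64 bytes: already a full private key
		return ed25519.PrivateKey(b), nil
	default:
		return nil, fmt.Errorf("expected %d or %d bytes, got %d",
			ed25519.SeedSize, ed25519.PrivateKeySize, len(b))
	}
}

func main() {
	priv, err := keyFromHex(hex.EncodeToString(make([]byte, ed25519.SeedSize)))
	if err != nil {
		panic(err)
	}
	fmt.Println(len(priv)) // 64
}
```

Accepting both lengths keeps the flag forgiving about whether operators store the seed or the expanded key.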

@@ -28,7 +28,7 @@ func main() {
flagenv.Parse()
flag.Parse()
internal.InitSlog(*slogLevel)
slog.SetDefault(internal.InitSlog(*slogLevel, os.Stderr))
koDockerRepo := strings.TrimSuffix(*dockerRepo, "/"+filepath.Base(*dockerRepo))
@@ -46,6 +46,11 @@ func main() {
)
}
if strings.Contains(*dockerTags, ",") {
newTags := strings.Join(strings.Split(*dockerTags, ","), "\n")
dockerTags = &newTags
}
setOutput("docker_image", strings.SplitN(*dockerTags, "\n", 2)[0])
version, err := run("git describe --tags --always --dirty")

@@ -12,7 +12,7 @@ import (
"regexp"
"strings"
"github.com/TecharoHQ/anubis/lib/policy/config"
"github.com/TecharoHQ/anubis/lib/config"
"sigs.k8s.io/yaml"
)
@@ -29,7 +29,7 @@ var (
)
type RobotsRule struct {
UserAgent string
UserAgents []string
Disallows []string
Allows []string
CrawlDelay int
@@ -130,10 +130,26 @@ func main() {
}
}
func createRuleFromAccumulated(userAgents, disallows, allows []string, crawlDelay int) RobotsRule {
rule := RobotsRule{
UserAgents: make([]string, len(userAgents)),
Disallows: make([]string, len(disallows)),
Allows: make([]string, len(allows)),
CrawlDelay: crawlDelay,
}
copy(rule.UserAgents, userAgents)
copy(rule.Disallows, disallows)
copy(rule.Allows, allows)
return rule
}
func parseRobotsTxt(input io.Reader) ([]RobotsRule, error) {
scanner := bufio.NewScanner(input)
var rules []RobotsRule
var currentRule *RobotsRule
var currentUserAgents []string
var currentDisallows []string
var currentAllows []string
var currentCrawlDelay int
for scanner.Scan() {
line := strings.TrimSpace(scanner.Text())
@@ -154,38 +170,42 @@ func parseRobotsTxt(input io.Reader) ([]RobotsRule, error) {
switch directive {
case "user-agent":
// Start a new rule section
if currentRule != nil {
rules = append(rules, *currentRule)
}
currentRule = &RobotsRule{
UserAgent: value,
Disallows: make([]string, 0),
Allows: make([]string, 0),
// If we have accumulated rules with directives and encounter a new user-agent,
// flush the current rules
if len(currentUserAgents) > 0 && (len(currentDisallows) > 0 || len(currentAllows) > 0 || currentCrawlDelay > 0) {
rule := createRuleFromAccumulated(currentUserAgents, currentDisallows, currentAllows, currentCrawlDelay)
rules = append(rules, rule)
// Reset for next group
currentUserAgents = nil
currentDisallows = nil
currentAllows = nil
currentCrawlDelay = 0
}
currentUserAgents = append(currentUserAgents, value)
case "disallow":
if currentRule != nil && value != "" {
currentRule.Disallows = append(currentRule.Disallows, value)
if len(currentUserAgents) > 0 && value != "" {
currentDisallows = append(currentDisallows, value)
}
case "allow":
if currentRule != nil && value != "" {
currentRule.Allows = append(currentRule.Allows, value)
if len(currentUserAgents) > 0 && value != "" {
currentAllows = append(currentAllows, value)
}
case "crawl-delay":
if currentRule != nil {
if len(currentUserAgents) > 0 {
if delay, err := parseIntSafe(value); err == nil {
currentRule.CrawlDelay = delay
currentCrawlDelay = delay
}
}
}
}
// Don't forget the last rule
if currentRule != nil {
rules = append(rules, *currentRule)
// Don't forget the last group of rules
if len(currentUserAgents) > 0 {
rule := createRuleFromAccumulated(currentUserAgents, currentDisallows, currentAllows, currentCrawlDelay)
rules = append(rules, rule)
}
// Mark blacklisted user agents (those with "Disallow: /")
@@ -211,10 +231,11 @@ func convertToAnubisRules(robotsRules []RobotsRule) []AnubisRule {
var anubisRules []AnubisRule
ruleCounter := 0
// Process each robots rule individually
for _, robotsRule := range robotsRules {
userAgent := robotsRule.UserAgent
userAgents := robotsRule.UserAgents
// Handle crawl delay as weight adjustment (do this first before any continues)
// Handle crawl delay
if robotsRule.CrawlDelay > 0 && *crawlDelay > 0 {
ruleCounter++
rule := AnubisRule{
@@ -223,20 +244,32 @@ func convertToAnubisRules(robotsRules []RobotsRule) []AnubisRule {
Weight: &config.Weight{Adjust: *crawlDelay},
}
if userAgent == "*" {
if len(userAgents) == 1 && userAgents[0] == "*" {
rule.Expression = &config.ExpressionOrList{
All: []string{"true"}, // Always applies
}
} else {
} else if len(userAgents) == 1 {
rule.Expression = &config.ExpressionOrList{
All: []string{fmt.Sprintf("userAgent.contains(%q)", userAgent)},
All: []string{fmt.Sprintf("userAgent.contains(%q)", userAgents[0])},
}
} else {
// Multiple user agents - use any block
var expressions []string
for _, ua := range userAgents {
if ua == "*" {
expressions = append(expressions, "true")
} else {
expressions = append(expressions, fmt.Sprintf("userAgent.contains(%q)", ua))
}
}
rule.Expression = &config.ExpressionOrList{
Any: expressions,
}
}
anubisRules = append(anubisRules, rule)
}
// Handle blacklisted user agents (complete deny/challenge)
// Handle blacklisted user agents
if robotsRule.IsBlacklist {
ruleCounter++
rule := AnubisRule{
@@ -244,21 +277,36 @@ func convertToAnubisRules(robotsRules []RobotsRule) []AnubisRule {
Action: *userAgentDeny,
}
if userAgent == "*" {
// This would block everything - convert to a weight adjustment instead
rule.Name = fmt.Sprintf("%s-global-restriction-%d", *policyName, ruleCounter)
rule.Action = "WEIGH"
rule.Weight = &config.Weight{Adjust: 20} // Increase difficulty significantly
rule.Expression = &config.ExpressionOrList{
All: []string{"true"}, // Always applies
if len(userAgents) == 1 {
userAgent := userAgents[0]
if userAgent == "*" {
// This would block everything - convert to a weight adjustment instead
rule.Name = fmt.Sprintf("%s-global-restriction-%d", *policyName, ruleCounter)
rule.Action = "WEIGH"
rule.Weight = &config.Weight{Adjust: 20} // Increase difficulty significantly
rule.Expression = &config.ExpressionOrList{
All: []string{"true"}, // Always applies
}
} else {
rule.Expression = &config.ExpressionOrList{
All: []string{fmt.Sprintf("userAgent.contains(%q)", userAgent)},
}
}
} else {
// Multiple user agents - use any block
var expressions []string
for _, ua := range userAgents {
if ua == "*" {
expressions = append(expressions, "true")
} else {
expressions = append(expressions, fmt.Sprintf("userAgent.contains(%q)", ua))
}
}
rule.Expression = &config.ExpressionOrList{
All: []string{fmt.Sprintf("userAgent.contains(%q)", userAgent)},
Any: expressions,
}
}
anubisRules = append(anubisRules, rule)
continue
}
// Handle specific disallow rules
@@ -276,9 +324,33 @@ func convertToAnubisRules(robotsRules []RobotsRule) []AnubisRule {
// Build CEL expression
var conditions []string
// Add user agent condition if not wildcard
if userAgent != "*" {
conditions = append(conditions, fmt.Sprintf("userAgent.contains(%q)", userAgent))
// Add user agent conditions
if len(userAgents) == 1 && userAgents[0] == "*" {
// Wildcard user agent - no user agent condition needed
} else if len(userAgents) == 1 {
conditions = append(conditions, fmt.Sprintf("userAgent.contains(%q)", userAgents[0]))
} else {
// For multiple user agents we would need a more complex expression.
// This is a limitation: we can't easily combine an any-of over user
// agents with an all-of over the path, so we create a separate rule
// for each user agent instead.
for _, ua := range userAgents {
if ua == "*" {
continue // Skip wildcard as it's handled separately
}
ruleCounter++
subRule := AnubisRule{
Name: fmt.Sprintf("%s-disallow-%d", *policyName, ruleCounter),
Action: *baseAction,
Expression: &config.ExpressionOrList{
All: []string{
fmt.Sprintf("userAgent.contains(%q)", ua),
buildPathCondition(disallow),
},
},
}
anubisRules = append(anubisRules, subRule)
}
continue
}
// Add path condition
@@ -291,7 +363,6 @@ func convertToAnubisRules(robotsRules []RobotsRule) []AnubisRule {
anubisRules = append(anubisRules, rule)
}
}
return anubisRules
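The accumulation logic above (consecutive `User-agent` lines share one rule group, flushed when a new `User-agent` follows a directive) can be sketched in isolation. This simplified version handles only `User-agent` and `Disallow` and is not the tool's actual code:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// group is one robots.txt rule block: one or more consecutive
// User-agent lines followed by their directives.
type group struct {
	agents    []string
	disallows []string
}

// groupAgents accumulates consecutive User-agent lines into one group;
// a User-agent that appears after any directive flushes the current group.
func groupAgents(input string) []group {
	var groups []group
	var cur group
	seenDirective := false
	sc := bufio.NewScanner(strings.NewReader(input))
	for sc.Scan() {
		k, v, ok := strings.Cut(strings.TrimSpace(sc.Text()), ":")
		if !ok {
			continue
		}
		v = strings.TrimSpace(v)
		switch strings.ToLower(strings.TrimSpace(k)) {
		case "user-agent":
			if seenDirective && len(cur.agents) > 0 {
				groups = append(groups, cur)
				cur = group{}
				seenDirective = false
			}
			cur.agents = append(cur.agents, v)
		case "disallow":
			if len(cur.agents) > 0 && v != "" {
				cur.disallows = append(cur.disallows, v)
				seenDirective = true
			}
		}
	}
	// Don't forget the last group.
	if len(cur.agents) > 0 {
		groups = append(groups, cur)
	}
	return groups
}

func main() {
	gs := groupAgents("User-agent: BadBot\nUser-agent: SpamBot\nDisallow: /\nUser-agent: GoodBot\nDisallow: /private\n")
	fmt.Println(len(gs), gs[0].agents) // 2 [BadBot SpamBot]
}
```

Grouping this way is what lets the converter emit a single `any:` block for `BadBot`/`SpamBot`/`EvilBot` instead of three duplicate rules.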

@@ -22,9 +22,9 @@ type TestCase struct {
type TestOptions struct {
format string
action string
crawlDelayWeight int
policyName string
deniedAction string
crawlDelayWeight int
}
func TestDataFileConversion(t *testing.T) {
@@ -78,6 +78,12 @@ func TestDataFileConversion(t *testing.T) {
expectedFile: "complex.yaml",
options: TestOptions{format: "yaml", crawlDelayWeight: 5},
},
{
name: "consecutive_user_agents",
robotsFile: "consecutive.robots.txt",
expectedFile: "consecutive.yaml",
options: TestOptions{format: "yaml", crawlDelayWeight: 3},
},
}
for _, tc := range testCases {

@@ -25,6 +25,6 @@
- action: CHALLENGE
expression:
all:
- userAgent.contains("Googlebot")
- path.startsWith("/search")
name: robots-txt-policy-disallow-7
- userAgent.contains("Googlebot")
- path.startsWith("/search")
name: robots-txt-policy-disallow-7

@@ -20,8 +20,8 @@
- action: CHALLENGE
expression:
all:
- userAgent.contains("Googlebot")
- path.startsWith("/search/")
- userAgent.contains("Googlebot")
- path.startsWith("/search/")
name: robots-txt-policy-disallow-6
- action: WEIGH
expression: userAgent.contains("Bingbot")
@@ -31,14 +31,14 @@
- action: CHALLENGE
expression:
all:
- userAgent.contains("Bingbot")
- path.startsWith("/search/")
- userAgent.contains("Bingbot")
- path.startsWith("/search/")
name: robots-txt-policy-disallow-8
- action: CHALLENGE
expression:
all:
- userAgent.contains("Bingbot")
- path.startsWith("/admin/")
- userAgent.contains("Bingbot")
- path.startsWith("/admin/")
name: robots-txt-policy-disallow-9
- action: DENY
expression: userAgent.contains("BadBot")
@@ -54,18 +54,18 @@
- action: CHALLENGE
expression:
all:
- userAgent.contains("TestBot")
- path.matches("^/.*/admin")
- userAgent.contains("TestBot")
- path.matches("^/.*/admin")
name: robots-txt-policy-disallow-13
- action: CHALLENGE
expression:
all:
- userAgent.contains("TestBot")
- path.matches("^/temp.*\\.html")
- userAgent.contains("TestBot")
- path.matches("^/temp.*\\.html")
name: robots-txt-policy-disallow-14
- action: CHALLENGE
expression:
all:
- userAgent.contains("TestBot")
- path.matches("^/file.\\.log")
- userAgent.contains("TestBot")
- path.matches("^/file.\\.log")
name: robots-txt-policy-disallow-15

@@ -0,0 +1,25 @@
# Test consecutive user agents that should be grouped into any: blocks
User-agent: *
Disallow: /admin
Crawl-delay: 10
# Multiple consecutive user agents - should be grouped
User-agent: BadBot
User-agent: SpamBot
User-agent: EvilBot
Disallow: /
# Single user agent - should be separate
User-agent: GoodBot
Disallow: /private
# Multiple consecutive user agents with crawl delay
User-agent: SlowBot1
User-agent: SlowBot2
Crawl-delay: 5
# Multiple consecutive user agents with specific path
User-agent: SearchBot1
User-agent: SearchBot2
User-agent: SearchBot3
Disallow: /search

@@ -0,0 +1,47 @@
- action: WEIGH
expression: "true"
name: robots-txt-policy-crawl-delay-1
weight:
adjust: 3
- action: CHALLENGE
expression: path.startsWith("/admin")
name: robots-txt-policy-disallow-2
- action: DENY
expression:
any:
- userAgent.contains("BadBot")
- userAgent.contains("SpamBot")
- userAgent.contains("EvilBot")
name: robots-txt-policy-blacklist-3
- action: CHALLENGE
expression:
all:
- userAgent.contains("GoodBot")
- path.startsWith("/private")
name: robots-txt-policy-disallow-4
- action: WEIGH
expression:
any:
- userAgent.contains("SlowBot1")
- userAgent.contains("SlowBot2")
name: robots-txt-policy-crawl-delay-5
weight:
adjust: 3
- action: CHALLENGE
expression:
all:
- userAgent.contains("SearchBot1")
- path.startsWith("/search")
name: robots-txt-policy-disallow-7
- action: CHALLENGE
expression:
all:
- userAgent.contains("SearchBot2")
- path.startsWith("/search")
name: robots-txt-policy-disallow-8
- action: CHALLENGE
expression:
all:
- userAgent.contains("SearchBot3")
- path.startsWith("/search")
name: robots-txt-policy-disallow-9

@@ -1,12 +1,12 @@
[
{
"action": "CHALLENGE",
"expression": "path.startsWith(\"/admin/\")",
"name": "robots-txt-policy-disallow-1"
"name": "robots-txt-policy-disallow-1",
"action": "CHALLENGE"
},
{
"action": "CHALLENGE",
"expression": "path.startsWith(\"/private\")",
"name": "robots-txt-policy-disallow-2"
"name": "robots-txt-policy-disallow-2",
"action": "CHALLENGE"
}
]

@@ -3,5 +3,6 @@
- name: qualys-ssl-labs
action: ALLOW
remote_addresses:
- 64.41.200.0/24
- 2600:C02:1020:4202::/64
- 69.67.183.0/24
- 2600:C02:1020:4202::/64
- 2602:fdaa:c6:2::/64

@@ -1,29 +0,0 @@
{
"bots": [
{
"import": "(data)/bots/_deny-pathological.yaml"
},
{
"import": "(data)/meta/ai-block-aggressive.yaml"
},
{
"import": "(data)/crawlers/_allow-good.yaml"
},
{
"import": "(data)/bots/aggressive-brazilian-scrapers.yaml"
},
{
"import": "(data)/common/keep-internet-working.yaml"
},
{
"name": "generic-browser",
"user_agent_regex": "Mozilla|Opera",
"action": "CHALLENGE"
}
],
"dnsbl": false,
"status_codes": {
"CHALLENGE": 200,
"DENY": 200
}
}

@@ -11,9 +11,12 @@
## /usr/share/docs/anubis/data or in the tarball you extracted Anubis from.
bots:
# You can import the entire default config with this macro:
# - import: (data)/meta/default-config.yaml
# Pathological bots to deny
- # This correlates to data/bots/deny-pathological.yaml in the source tree
# https://github.com/TecharoHQ/anubis/blob/main/data/bots/deny-pathological.yaml
- # This correlates to data/bots/_deny-pathological.yaml in the source tree
# https://github.com/TecharoHQ/anubis/blob/main/data/bots/_deny-pathological.yaml
import: (data)/bots/_deny-pathological.yaml
- import: (data)/bots/aggressive-brazilian-scrapers.yaml
@@ -47,8 +50,7 @@ bots:
# user_agent_regex: (?i:bot|crawler)
# action: CHALLENGE
# challenge:
# difficulty: 16 # impossible
# report_as: 4 # lie to the operator
# difficulty: 16 # impossible
# algorithm: slow # intentionally waste CPU cycles and time
# Requires a subscription to Thoth to use, see
@@ -74,6 +76,25 @@ bots:
weight:
adjust: 10
# ## System load based checks.
# # If the system is under high load, add weight.
# - name: high-load-average
# action: WEIGH
# expression: load_1m >= 10.0 # make sure to end the load comparison in a .0
# weight:
# adjust: 20
## If your backend service is running on the same operating system as Anubis,
## you can uncomment this rule to make the challenge easier when the system is
## under low load.
##
## If it is not, remove weight.
# - name: low-load-average
# action: WEIGH
# expression: load_15m <= 4.0 # make sure to end the load comparison in a .0
# weight:
# adjust: -10
# Generic catchall rule
- name: generic-browser
user_agent_regex: >-
@@ -88,7 +109,7 @@ dnsbl: false
# impressum:
# # Displayed at the bottom of every page rendered by Anubis.
# footer: >-
# This website is hosted by Zonbocom. If you have any complaints or notes
# This website is hosted by Zombocom. If you have any complaints or notes
# about the service, please contact
# <a href="mailto:contact@domainhere.example">contact@domainhere.example</a>
# and we will assist you as soon as possible.
@@ -145,6 +166,14 @@ status_codes:
CHALLENGE: 200
DENY: 200
# Anubis can store temporary data in one of a few backends. See the storage
# backends section of the docs for more information:
#
# https://anubis.techaro.lol/docs/admin/policies#storage-backends
store:
backend: memory
parameters: {}
# The weight thresholds for when to trigger individual challenges. Any
# CHALLENGE will take precedence over this.
#
@@ -175,7 +204,6 @@ thresholds:
# https://anubis.techaro.lol/docs/admin/configuration/challenges/metarefresh
algorithm: metarefresh
difficulty: 1
report_as: 1
# For clients that are browser-like but have either gained points from custom rules or
# report as a standard browser.
- name: moderate-suspicion
@@ -188,13 +216,21 @@ thresholds:
# https://anubis.techaro.lol/docs/admin/configuration/challenges/proof-of-work
algorithm: fast
difficulty: 2 # two leading zeros, very fast for most clients
report_as: 2
# For clients that are browser like and have gained many points from custom rules
- name: extreme-suspicion
expression: weight >= 20
- name: mild-proof-of-work
expression:
all:
- weight >= 20
- weight < 30
action: CHALLENGE
challenge:
# https://anubis.techaro.lol/docs/admin/configuration/challenges/proof-of-work
algorithm: fast
difficulty: 4
report_as: 4
# For clients that are browser like and have gained many points from custom rules
- name: extreme-suspicion
expression: weight >= 30
action: CHALLENGE
challenge:
# https://anubis.techaro.lol/docs/admin/configuration/challenges/proof-of-work
algorithm: fast
difficulty: 6
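The weight bands in these thresholds behave like a first-match ladder over the accumulated weight. A toy sketch of that semantics (the boundaries for the tiers not fully shown here are assumptions, and this is not Anubis's evaluation code):

```go
package main

import "fmt"

// pickDifficulty mirrors the threshold ladder in the config above:
// the matching weight band decides the challenge difficulty.
// Band edges below 20 are assumed for illustration.
func pickDifficulty(weight int) (name string, difficulty int) {
	switch {
	case weight >= 30:
		return "extreme-suspicion", 6
	case weight >= 20:
		return "mild-proof-of-work", 4
	case weight >= 10: // assumed lower edge for this tier
		return "moderate-suspicion", 2
	default:
		return "minimal-suspicion", 1
	}
}

func main() {
	name, d := pickDifficulty(25)
	fmt.Println(name, d) // mild-proof-of-work 4
}
```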

@@ -1,3 +1,6 @@
- import: (data)/bots/cloudflare-workers.yaml
- import: (data)/bots/headless-browsers.yaml
- import: (data)/bots/us-ai-scraper.yaml
- import: (data)/bots/us-ai-scraper.yaml
- import: (data)/bots/custom-async-http-client.yaml
- import: (data)/crawlers/alibaba-cloud.yaml
- import: (data)/crawlers/huawei-cloud.yaml

@@ -7,5 +7,5 @@
# Warning: May contain user agents that _must_ be blocked in robots.txt, or the opt-out will have no effect.
- name: "ai-catchall"
user_agent_regex: >-
AI2Bot|Ai2Bot-Dolma|aiHitBot|Amazonbot|anthropic-ai|Brightbot 1.0|Bytespider|CCBot|Claude-Web|cohere-ai|cohere-training-data-crawler|Cotoyogi|Crawlspace|Diffbot|DuckAssistBot|FacebookBot|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Google-CloudVertexBot|GoogleOther|GoogleOther-Image|GoogleOther-Video|iaskspider/2.0|ICC-Crawler|ImagesiftBot|img2dataset|imgproxy|ISSCyberRiskCrawler|Kangaroo Bot|meta-externalagent|Meta-ExternalAgent|meta-externalfetcher|Meta-ExternalFetcher|NovaAct|omgili|omgilibot|Operator|PanguBot|Perplexity-User|PerplexityBot|PetalBot|QualifiedBot|Scrapy|SemrushBot-OCOB|SemrushBot-SWA|Sidetrade indexer bot|TikTokSpider|Timpibot|VelenPublicWebCrawler|Webzio-Extended|wpbot|YouBot
AI2Bot|Ai2Bot-Dolma|aiHitBot|Amazonbot|anthropic-ai|Brightbot 1.0|Bytespider|Claude-Web|cohere-ai|cohere-training-data-crawler|Cotoyogi|Crawlspace|Diffbot|DuckAssistBot|FacebookBot|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Google-CloudVertexBot|GoogleOther|GoogleOther-Image|GoogleOther-Video|iaskspider/2.0|ICC-Crawler|ImagesiftBot|img2dataset|imgproxy|ISSCyberRiskCrawler|Kangaroo Bot|meta-externalagent|Meta-ExternalAgent|meta-externalfetcher|Meta-ExternalFetcher|NovaAct|omgili|omgilibot|Operator|PanguBot|Perplexity-User|PerplexityBot|PetalBot|QualifiedBot|Scrapy|SemrushBot-OCOB|SemrushBot-SWA|Sidetrade indexer bot|TikTokSpider|Timpibot|VelenPublicWebCrawler|Webzio-Extended|wpbot|YouBot
action: DENY

@@ -1,6 +1,8 @@
# Warning: Contains user agents that _must_ be blocked in robots.txt, or the opt-out will have no effect.
# Note: Blocks human-directed/non-training user agents
#
# CCBot is allowed because if Common Crawl is allowed, then scrapers don't need to scrape to get the data.
- name: "ai-robots-txt"
user_agent_regex: >-
AI2Bot|Ai2Bot-Dolma|aiHitBot|Amazonbot|Andibot|anthropic-ai|Applebot|Applebot-Extended|bedrockbot|Brightbot 1.0|Bytespider|CCBot|ChatGPT-User|Claude-SearchBot|Claude-User|Claude-Web|ClaudeBot|cohere-ai|cohere-training-data-crawler|Cotoyogi|Crawlspace|Diffbot|DuckAssistBot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Google-CloudVertexBot|Google-Extended|GoogleOther|GoogleOther-Image|GoogleOther-Video|GPTBot|iaskspider/2.0|ICC-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo Bot|meta-externalagent|Meta-ExternalAgent|meta-externalfetcher|Meta-ExternalFetcher|MistralAI-User/1.0|MyCentralAIScraperBot|NovaAct|OAI-SearchBot|omgili|omgilibot|Operator|PanguBot|Panscient|panscient.com|Perplexity-User|PerplexityBot|PetalBot|PhindBot|Poseidon Research Crawler|QualifiedBot|QuillBot|quillbot.com|SBIntuitionsBot|Scrapy|SemrushBot|SemrushBot-BA|SemrushBot-CT|SemrushBot-OCOB|SemrushBot-SI|SemrushBot-SWA|Sidetrade indexer bot|TikTokSpider|Timpibot|VelenPublicWebCrawler|Webzio-Extended|wpbot|YandexAdditional|YandexAdditionalBot|YouBot
AddSearchBot|AI2Bot|Ai2Bot-Dolma|aiHitBot|Amazonbot|Andibot|anthropic-ai|Applebot|Applebot-Extended|Awario|bedrockbot|bigsur.ai|Brightbot 1.0|Bytespider|CCBot|ChatGPT Agent|ChatGPT-User|Claude-SearchBot|Claude-User|Claude-Web|ClaudeBot|CloudVertexBot|cohere-ai|cohere-training-data-crawler|Cotoyogi|Crawlspace|Datenbank Crawler|Devin|Diffbot|DuckAssistBot|Echobot Bot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Gemini-Deep-Research|Google-CloudVertexBot|Google-Extended|GoogleAgent-Mariner|GoogleOther|GoogleOther-Image|GoogleOther-Video|GPTBot|iaskspider/2.0|ICC-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo Bot|LinerBot|meta-externalagent|Meta-ExternalAgent|meta-externalfetcher|Meta-ExternalFetcher|MistralAI-User|MistralAI-User/1.0|MyCentralAIScraperBot|netEstate Imprint Crawler|NovaAct|OAI-SearchBot|omgili|omgilibot|OpenAI|Operator|PanguBot|Panscient|panscient.com|Perplexity-User|PerplexityBot|PetalBot|PhindBot|Poseidon Research Crawler|QualifiedBot|QuillBot|quillbot.com|SBIntuitionsBot|Scrapy|SemrushBot-OCOB|SemrushBot-SWA|Sidetrade indexer bot|Thinkbot|TikTokSpider|Timpibot|VelenPublicWebCrawler|WARDBot|Webzio-Extended|wpbot|YaK|YandexAdditional|YandexAdditionalBot|YouBot
action: DENY

@@ -0,0 +1,5 @@
- name: "custom-async-http-client"
user_agent_regex: "Custom-AsyncHttpClient"
action: WEIGH
weight:
adjust: 10

@@ -0,0 +1,60 @@
- name: allow-docker-client
action: ALLOW
expression:
all:
- path.startsWith("/v2/")
- userAgent.contains("docker/")
- userAgent.contains("git-commit/")
- '"Accept" in headers'
- headers["Accept"].contains("vnd.docker.distribution")
- '"Baggage" in headers'
- headers["Baggage"].contains("trigger")
- name: allow-crane-client
action: ALLOW
expression:
all:
- userAgent.contains("crane/")
- userAgent.contains("go-containerregistry/")
- name: allow-docker-distribution-api-client
action: ALLOW
expression:
all:
- '"Docker-Distribution-Api-Version" in headers'
- '!(userAgent.contains("Mozilla"))'
- name: allow-go-containerregistry-client
action: ALLOW
expression:
all:
- path.startsWith("/v2/")
- userAgent.contains("go-containerregistry/")
- name: allow-buildah
action: ALLOW
expression:
all:
- path.startsWith("/v2/")
- userAgent.contains("Buildah/")
- name: allow-podman
action: ALLOW
expression:
all:
- path.startsWith("/v2/")
- userAgent.contains("containers/")
- name: allow-containerd
action: ALLOW
expression:
all:
- path.startsWith("/v2/")
- userAgent.contains("containerd/")
- name: allow-renovate
action: ALLOW
expression:
all:
- path.startsWith("/v2/")
- userAgent.contains("Renovate/")

@@ -2,13 +2,19 @@
action: ALLOW
expression:
all:
- >
(
userAgent.startsWith("git/") ||
userAgent.contains("libgit") ||
userAgent.startsWith("go-git") ||
userAgent.startsWith("JGit/") ||
userAgent.startsWith("JGit-")
)
- '"Git-Protocol" in headers'
- headers["Git-Protocol"] == "version=2"
- >
(
userAgent.startsWith("git/") ||
userAgent.contains("libgit") ||
userAgent.startsWith("go-git") ||
userAgent.startsWith("JGit/") ||
userAgent.startsWith("JGit-")
)
- '"Accept" in headers'
- headers["Accept"] == "*/*"
- '"Cache-Control" in headers'
- headers["Cache-Control"] == "no-cache"
- '"Pragma" in headers'
- headers["Pragma"] == "no-cache"
- '"Accept-Encoding" in headers'
- headers["Accept-Encoding"].contains("gzip")

@@ -0,0 +1,6 @@
- name: telegrambot
action: ALLOW
expression:
all:
- userAgent.matches("TelegramBot")
- verifyFCrDNS(remoteAddress, "ptr\\.telegram\\.org$")

@@ -0,0 +1,6 @@
- name: vkbot
action: ALLOW
expression:
all:
- userAgent.matches("vkShare[^+]+\\+http\\://vk\\.com/dev/Share")
- verifyFCrDNS(remoteAddress, "^snipster\\d+\\.go\\.mail\\.ru$")

@@ -1,13 +1,13 @@
# Common "keeping the internet working" routes
- name: well-known
path_regex: ^/.well-known/.*$
path_regex: ^/\.well-known/.*$
action: ALLOW
- name: favicon
path_regex: ^/favicon.ico$
path_regex: ^/favicon\.(?:ico|png|gif|jpg|jpeg|svg)$
action: ALLOW
- name: robots-txt
path_regex: ^/robots.txt$
path_regex: ^/robots\.txt$
action: ALLOW
- name: sitemap
path_regex: ^/sitemap.xml$
action: ALLOW
path_regex: ^/sitemap\.xml$
action: ALLOW

@@ -6,4 +6,6 @@
- import: (data)/crawlers/internet-archive.yaml
- import: (data)/crawlers/kagibot.yaml
- import: (data)/crawlers/marginalia.yaml
- import: (data)/crawlers/mojeekbot.yaml
- import: (data)/crawlers/mojeekbot.yaml
- import: (data)/crawlers/commoncrawl.yaml
- import: (data)/crawlers/yandexbot.yaml

@@ -0,0 +1,881 @@
- name: alibaba-cloud
action: DENY
# Updated 2025-08-20 from IP addresses for AS45102
remote_addresses:
- 103.81.186.0/23
- 110.76.21.0/24
- 110.76.23.0/24
- 116.251.64.0/18
- 139.95.0.0/23
- 139.95.10.0/23
- 139.95.12.0/23
- 139.95.14.0/23
- 139.95.16.0/23
- 139.95.18.0/23
- 139.95.2.0/23
- 139.95.4.0/23
- 139.95.6.0/23
- 139.95.64.0/24
- 139.95.8.0/23
- 14.1.112.0/22
- 14.1.115.0/24
- 140.205.1.0/24
- 140.205.122.0/24
- 147.139.0.0/17
- 147.139.0.0/18
- 147.139.128.0/17
- 147.139.128.0/18
- 147.139.155.0/24
- 147.139.192.0/18
- 147.139.64.0/18
- 149.129.0.0/20
- 149.129.0.0/21
- 149.129.16.0/23
- 149.129.192.0/18
- 149.129.192.0/19
- 149.129.224.0/19
- 149.129.32.0/19
- 149.129.64.0/18
- 149.129.64.0/19
- 149.129.8.0/21
- 149.129.96.0/19
- 156.227.20.0/24
- 156.236.12.0/24
- 156.236.17.0/24
- 156.240.76.0/23
- 156.245.1.0/24
- 161.117.0.0/16
- 161.117.0.0/17
- 161.117.126.0/24
- 161.117.127.0/24
- 161.117.128.0/17
- 161.117.128.0/24
- 161.117.129.0/24
- 161.117.138.0/24
- 161.117.143.0/24
- 170.33.104.0/24
- 170.33.105.0/24
- 170.33.106.0/24
- 170.33.107.0/24
- 170.33.136.0/24
- 170.33.137.0/24
- 170.33.138.0/24
- 170.33.20.0/24
- 170.33.21.0/24
- 170.33.22.0/24
- 170.33.23.0/24
- 170.33.24.0/24
- 170.33.29.0/24
- 170.33.30.0/24
- 170.33.31.0/24
- 170.33.32.0/24
- 170.33.33.0/24
- 170.33.34.0/24
- 170.33.35.0/24
- 170.33.64.0/24
- 170.33.65.0/24
- 170.33.66.0/24
- 170.33.68.0/24
- 170.33.69.0/24
- 170.33.72.0/24
- 170.33.73.0/24
- 170.33.76.0/24
- 170.33.77.0/24
- 170.33.78.0/24
- 170.33.79.0/24
- 170.33.80.0/24
- 170.33.81.0/24
- 170.33.82.0/24
- 170.33.83.0/24
- 170.33.84.0/24
- 170.33.85.0/24
- 170.33.86.0/24
- 170.33.88.0/24
- 170.33.90.0/24
- 170.33.92.0/24
- 170.33.93.0/24
- 185.78.106.0/23
- 198.11.128.0/18
- 198.11.137.0/24
- 198.11.184.0/21
- 202.144.199.0/24
- 203.107.64.0/24
- 203.107.65.0/24
- 203.107.66.0/24
- 203.107.67.0/24
- 203.107.68.0/24
- 205.204.102.0/23
- 205.204.111.0/24
- 205.204.117.0/24
- 205.204.125.0/24
- 205.204.96.0/19
- 223.5.5.0/24
- 223.6.6.0/24
- 2400:3200::/48
- 2400:3200:baba::/48
- 2400:b200:4100::/48
- 2400:b200:4101::/48
- 2400:b200:4102::/48
- 2400:b200:4103::/48
- 2401:8680:4100::/48
- 2401:b180:4100::/48
- 2404:2280:1000::/36
- 2404:2280:1000::/37
- 2404:2280:1800::/37
- 2404:2280:2000::/36
- 2404:2280:2000::/37
- 2404:2280:2800::/37
- 2404:2280:3000::/36
- 2404:2280:3000::/37
- 2404:2280:3800::/37
- 2404:2280:4000::/36
- 2404:2280:4000::/37
- 2404:2280:4800::/37
- 2408:4000:1000::/48
- 2408:4009:500::/48
- 240b:4000::/32
- 240b:4000::/33
- 240b:4000:8000::/33
- 240b:4000:fffe::/48
- 240b:4001::/32
- 240b:4001::/33
- 240b:4001:8000::/33
- 240b:4002::/32
- 240b:4002::/33
- 240b:4002:8000::/33
- 240b:4004::/32
- 240b:4004::/33
- 240b:4004:8000::/33
- 240b:4005::/32
- 240b:4005::/33
- 240b:4005:8000::/33
- 240b:4006::/48
- 240b:4006:1000::/44
- 240b:4006:1000::/45
- 240b:4006:1000::/47
- 240b:4006:1002::/47
- 240b:4006:1008::/45
- 240b:4006:1010::/44
- 240b:4006:1010::/45
- 240b:4006:1018::/45
- 240b:4006:1020::/44
- 240b:4006:1020::/45
- 240b:4006:1028::/45
- 240b:4007::/32
- 240b:4007::/33
- 240b:4007:8000::/33
- 240b:4009::/32
- 240b:4009::/33
- 240b:4009:8000::/33
- 240b:400b::/32
- 240b:400b::/33
- 240b:400b:8000::/33
- 240b:400c::/32
- 240b:400c::/33
- 240b:400c::/40
- 240b:400c::/41
- 240b:400c:100::/40
- 240b:400c:100::/41
- 240b:400c:180::/41
- 240b:400c:80::/41
- 240b:400c:8000::/33
- 240b:400c:f00::/48
- 240b:400c:f01::/48
- 240b:400c:ffff::/48
- 240b:400d::/32
- 240b:400d::/33
- 240b:400d:8000::/33
- 240b:400e::/32
- 240b:400e::/33
- 240b:400e:8000::/33
- 240b:400f::/32
- 240b:400f::/33
- 240b:400f:8000::/33
- 240b:4011::/32
- 240b:4011::/33
- 240b:4011:8000::/33
- 240b:4012::/48
- 240b:4013::/32
- 240b:4013::/33
- 240b:4013:8000::/33
- 240b:4014::/32
- 240b:4014::/33
- 240b:4014:8000::/33
- 43.100.0.0/15
- 43.100.0.0/16
- 43.101.0.0/16
- 43.102.0.0/20
- 43.102.112.0/20
- 43.102.16.0/20
- 43.102.32.0/20
- 43.102.48.0/20
- 43.102.64.0/20
- 43.102.80.0/20
- 43.102.96.0/20
- 43.103.0.0/17
- 43.103.0.0/18
- 43.103.64.0/18
- 43.104.0.0/15
- 43.104.0.0/16
- 43.105.0.0/16
- 43.108.0.0/17
- 43.108.0.0/18
- 43.108.64.0/18
- 43.91.0.0/16
- 43.91.0.0/17
- 43.91.128.0/17
- 43.96.10.0/24
- 43.96.100.0/24
- 43.96.101.0/24
- 43.96.102.0/24
- 43.96.104.0/24
- 43.96.11.0/24
- 43.96.20.0/24
- 43.96.21.0/24
- 43.96.23.0/24
- 43.96.24.0/24
- 43.96.25.0/24
- 43.96.3.0/24
- 43.96.32.0/24
- 43.96.33.0/24
- 43.96.34.0/24
- 43.96.35.0/24
- 43.96.4.0/24
- 43.96.40.0/24
- 43.96.5.0/24
- 43.96.52.0/24
- 43.96.6.0/24
- 43.96.66.0/24
- 43.96.67.0/24
- 43.96.68.0/24
- 43.96.69.0/24
- 43.96.7.0/24
- 43.96.70.0/24
- 43.96.71.0/24
- 43.96.72.0/24
- 43.96.73.0/24
- 43.96.74.0/24
- 43.96.75.0/24
- 43.96.8.0/24
- 43.96.80.0/24
- 43.96.81.0/24
- 43.96.84.0/24
- 43.96.85.0/24
- 43.96.86.0/24
- 43.96.88.0/24
- 43.96.9.0/24
- 43.96.96.0/24
- 43.98.0.0/16
- 43.98.0.0/17
- 43.98.128.0/17
- 43.99.0.0/16
- 43.99.0.0/17
- 43.99.128.0/17
- 45.199.179.0/24
- 47.235.0.0/22
- 47.235.0.0/23
- 47.235.1.0/24
- 47.235.10.0/23
- 47.235.10.0/24
- 47.235.11.0/24
- 47.235.12.0/23
- 47.235.12.0/24
- 47.235.13.0/24
- 47.235.16.0/23
- 47.235.16.0/24
- 47.235.18.0/23
- 47.235.18.0/24
- 47.235.19.0/24
- 47.235.2.0/23
- 47.235.20.0/24
- 47.235.21.0/24
- 47.235.22.0/24
- 47.235.23.0/24
- 47.235.24.0/22
- 47.235.24.0/23
- 47.235.26.0/23
- 47.235.28.0/23
- 47.235.28.0/24
- 47.235.29.0/24
- 47.235.30.0/24
- 47.235.31.0/24
- 47.235.4.0/24
- 47.235.5.0/24
- 47.235.6.0/23
- 47.235.6.0/24
- 47.235.7.0/24
- 47.235.8.0/24
- 47.235.9.0/24
- 47.236.0.0/15
- 47.236.0.0/16
- 47.237.0.0/16
- 47.237.32.0/20
- 47.237.34.0/24
- 47.238.0.0/15
- 47.238.0.0/16
- 47.239.0.0/16
- 47.240.0.0/16
- 47.240.0.0/17
- 47.240.128.0/17
- 47.241.0.0/16
- 47.241.0.0/17
- 47.241.128.0/17
- 47.242.0.0/15
- 47.242.0.0/16
- 47.243.0.0/16
- 47.244.0.0/16
- 47.244.0.0/17
- 47.244.128.0/17
- 47.244.73.0/24
- 47.245.0.0/18
- 47.245.0.0/19
- 47.245.128.0/17
- 47.245.128.0/18
- 47.245.192.0/18
- 47.245.32.0/19
- 47.245.64.0/18
- 47.245.64.0/19
- 47.245.96.0/19
- 47.246.100.0/22
- 47.246.104.0/21
- 47.246.104.0/22
- 47.246.108.0/22
- 47.246.120.0/24
- 47.246.122.0/24
- 47.246.123.0/24
- 47.246.124.0/24
- 47.246.125.0/24
- 47.246.128.0/22
- 47.246.128.0/23
- 47.246.130.0/23
- 47.246.132.0/22
- 47.246.132.0/23
- 47.246.134.0/23
- 47.246.136.0/21
- 47.246.136.0/22
- 47.246.140.0/22
- 47.246.144.0/23
- 47.246.144.0/24
- 47.246.145.0/24
- 47.246.146.0/23
- 47.246.146.0/24
- 47.246.147.0/24
- 47.246.150.0/23
- 47.246.150.0/24
- 47.246.151.0/24
- 47.246.152.0/23
- 47.246.152.0/24
- 47.246.153.0/24
- 47.246.154.0/24
- 47.246.155.0/24
- 47.246.156.0/22
- 47.246.156.0/23
- 47.246.158.0/23
- 47.246.160.0/20
- 47.246.160.0/21
- 47.246.168.0/21
- 47.246.176.0/20
- 47.246.176.0/21
- 47.246.184.0/21
- 47.246.192.0/22
- 47.246.192.0/23
- 47.246.194.0/23
- 47.246.196.0/22
- 47.246.196.0/23
- 47.246.198.0/23
- 47.246.32.0/22
- 47.246.66.0/24
- 47.246.67.0/24
- 47.246.68.0/23
- 47.246.68.0/24
- 47.246.69.0/24
- 47.246.72.0/21
- 47.246.72.0/22
- 47.246.76.0/22
- 47.246.80.0/24
- 47.246.82.0/23
- 47.246.82.0/24
- 47.246.83.0/24
- 47.246.84.0/22
- 47.246.84.0/23
- 47.246.86.0/23
- 47.246.88.0/22
- 47.246.88.0/23
- 47.246.90.0/23
- 47.246.92.0/23
- 47.246.92.0/24
- 47.246.93.0/24
- 47.246.96.0/21
- 47.246.96.0/22
- 47.250.0.0/17
- 47.250.0.0/18
- 47.250.128.0/17
- 47.250.128.0/18
- 47.250.192.0/18
- 47.250.64.0/18
- 47.250.99.0/24
- 47.251.0.0/16
- 47.251.0.0/17
- 47.251.128.0/17
- 47.251.224.0/22
- 47.252.0.0/17
- 47.252.0.0/18
- 47.252.128.0/17
- 47.252.128.0/18
- 47.252.192.0/18
- 47.252.64.0/18
- 47.252.67.0/24
- 47.253.0.0/16
- 47.253.0.0/17
- 47.253.128.0/17
- 47.254.0.0/17
- 47.254.0.0/18
- 47.254.113.0/24
- 47.254.128.0/18
- 47.254.128.0/19
- 47.254.160.0/19
- 47.254.192.0/18
- 47.254.192.0/19
- 47.254.224.0/19
- 47.254.64.0/18
- 47.52.0.0/16
- 47.52.0.0/17
- 47.52.128.0/17
- 47.56.0.0/15
- 47.56.0.0/16
- 47.57.0.0/16
- 47.74.0.0/18
- 47.74.0.0/19
- 47.74.0.0/21
- 47.74.128.0/17
- 47.74.128.0/18
- 47.74.192.0/18
- 47.74.32.0/19
- 47.74.64.0/18
- 47.74.64.0/19
- 47.74.96.0/19
- 47.75.0.0/16
- 47.75.0.0/17
- 47.75.128.0/17
- 47.76.0.0/16
- 47.76.0.0/17
- 47.76.128.0/17
- 47.77.0.0/22
- 47.77.0.0/23
- 47.77.104.0/21
- 47.77.12.0/22
- 47.77.128.0/17
- 47.77.128.0/18
- 47.77.128.0/21
- 47.77.136.0/21
- 47.77.144.0/21
- 47.77.152.0/21
- 47.77.16.0/21
- 47.77.16.0/22
- 47.77.192.0/18
- 47.77.2.0/23
- 47.77.20.0/22
- 47.77.24.0/22
- 47.77.24.0/23
- 47.77.26.0/23
- 47.77.32.0/19
- 47.77.32.0/20
- 47.77.4.0/22
- 47.77.4.0/23
- 47.77.48.0/20
- 47.77.6.0/23
- 47.77.64.0/19
- 47.77.64.0/20
- 47.77.8.0/21
- 47.77.8.0/22
- 47.77.80.0/20
- 47.77.96.0/20
- 47.77.96.0/21
- 47.78.0.0/17
- 47.78.128.0/17
- 47.79.0.0/20
- 47.79.0.0/21
- 47.79.104.0/21
- 47.79.112.0/20
- 47.79.128.0/19
- 47.79.128.0/20
- 47.79.144.0/20
- 47.79.16.0/20
- 47.79.16.0/21
- 47.79.192.0/18
- 47.79.192.0/19
- 47.79.224.0/19
- 47.79.24.0/21
- 47.79.32.0/20
- 47.79.32.0/21
- 47.79.40.0/21
- 47.79.48.0/20
- 47.79.48.0/21
- 47.79.52.0/23
- 47.79.54.0/23
- 47.79.56.0/21
- 47.79.56.0/23
- 47.79.58.0/23
- 47.79.60.0/23
- 47.79.62.0/23
- 47.79.64.0/20
- 47.79.64.0/21
- 47.79.72.0/21
- 47.79.8.0/21
- 47.79.80.0/20
- 47.79.80.0/21
- 47.79.83.0/24
- 47.79.88.0/21
- 47.79.96.0/19
- 47.79.96.0/20
- 47.80.0.0/18
- 47.80.0.0/19
- 47.80.128.0/17
- 47.80.128.0/18
- 47.80.192.0/18
- 47.80.32.0/19
- 47.80.64.0/18
- 47.80.64.0/19
- 47.80.96.0/19
- 47.81.0.0/18
- 47.81.0.0/19
- 47.81.128.0/17
- 47.81.128.0/18
- 47.81.192.0/18
- 47.81.32.0/19
- 47.81.64.0/18
- 47.81.64.0/19
- 47.81.96.0/19
- 47.82.0.0/18
- 47.82.0.0/19
- 47.82.10.0/23
- 47.82.12.0/23
- 47.82.128.0/17
- 47.82.128.0/18
- 47.82.14.0/23
- 47.82.192.0/18
- 47.82.32.0/19
- 47.82.32.0/21
- 47.82.40.0/21
- 47.82.48.0/21
- 47.82.56.0/21
- 47.82.64.0/18
- 47.82.64.0/19
- 47.82.8.0/23
- 47.82.96.0/19
- 47.83.0.0/16
- 47.83.0.0/17
- 47.83.128.0/17
- 47.83.32.0/21
- 47.83.40.0/21
- 47.83.48.0/21
- 47.83.56.0/21
- 47.84.0.0/16
- 47.84.0.0/17
- 47.84.128.0/17
- 47.84.144.0/21
- 47.84.152.0/21
- 47.84.160.0/21
- 47.84.168.0/21
- 47.85.0.0/16
- 47.85.0.0/17
- 47.85.112.0/22
- 47.85.112.0/23
- 47.85.114.0/23
- 47.85.128.0/17
- 47.86.0.0/16
- 47.86.0.0/17
- 47.86.128.0/17
- 47.87.0.0/18
- 47.87.0.0/19
- 47.87.128.0/18
- 47.87.128.0/19
- 47.87.160.0/19
- 47.87.192.0/22
- 47.87.192.0/23
- 47.87.194.0/23
- 47.87.196.0/22
- 47.87.196.0/23
- 47.87.198.0/23
- 47.87.200.0/22
- 47.87.200.0/23
- 47.87.202.0/23
- 47.87.204.0/22
- 47.87.204.0/23
- 47.87.206.0/23
- 47.87.208.0/22
- 47.87.208.0/23
- 47.87.210.0/23
- 47.87.212.0/22
- 47.87.212.0/23
- 47.87.214.0/23
- 47.87.216.0/22
- 47.87.216.0/23
- 47.87.218.0/23
- 47.87.220.0/22
- 47.87.220.0/23
- 47.87.222.0/23
- 47.87.224.0/22
- 47.87.224.0/23
- 47.87.226.0/23
- 47.87.228.0/22
- 47.87.228.0/23
- 47.87.230.0/23
- 47.87.232.0/22
- 47.87.232.0/23
- 47.87.234.0/23
- 47.87.236.0/22
- 47.87.236.0/23
- 47.87.238.0/23
- 47.87.240.0/22
- 47.87.240.0/23
- 47.87.242.0/23
- 47.87.32.0/19
- 47.87.64.0/18
- 47.87.64.0/19
- 47.87.96.0/19
- 47.88.0.0/17
- 47.88.0.0/18
- 47.88.109.0/24
- 47.88.128.0/17
- 47.88.128.0/18
- 47.88.135.0/24
- 47.88.192.0/18
- 47.88.41.0/24
- 47.88.42.0/24
- 47.88.43.0/24
- 47.88.64.0/18
- 47.89.0.0/18
- 47.89.0.0/19
- 47.89.100.0/24
- 47.89.101.0/24
- 47.89.102.0/24
- 47.89.103.0/24
- 47.89.104.0/21
- 47.89.104.0/22
- 47.89.108.0/22
- 47.89.122.0/24
- 47.89.123.0/24
- 47.89.124.0/23
- 47.89.124.0/24
- 47.89.125.0/24
- 47.89.128.0/18
- 47.89.128.0/19
- 47.89.160.0/19
- 47.89.192.0/18
- 47.89.192.0/19
- 47.89.221.0/24
- 47.89.224.0/19
- 47.89.32.0/19
- 47.89.72.0/22
- 47.89.72.0/23
- 47.89.74.0/23
- 47.89.76.0/22
- 47.89.76.0/23
- 47.89.78.0/23
- 47.89.80.0/23
- 47.89.82.0/23
- 47.89.84.0/24
- 47.89.88.0/22
- 47.89.88.0/23
- 47.89.90.0/23
- 47.89.92.0/22
- 47.89.92.0/23
- 47.89.94.0/23
- 47.89.96.0/24
- 47.89.97.0/24
- 47.89.98.0/23
- 47.89.99.0/24
- 47.90.0.0/17
- 47.90.0.0/18
- 47.90.128.0/17
- 47.90.128.0/18
- 47.90.172.0/24
- 47.90.173.0/24
- 47.90.174.0/24
- 47.90.175.0/24
- 47.90.192.0/18
- 47.90.64.0/18
- 47.91.0.0/19
- 47.91.0.0/20
- 47.91.112.0/20
- 47.91.128.0/17
- 47.91.128.0/18
- 47.91.16.0/20
- 47.91.192.0/18
- 47.91.32.0/19
- 47.91.32.0/20
- 47.91.48.0/20
- 47.91.64.0/19
- 47.91.64.0/20
- 47.91.80.0/20
- 47.91.96.0/19
- 47.91.96.0/20
- 5.181.224.0/23
- 59.82.136.0/23
- 8.208.0.0/16
- 8.208.0.0/17
- 8.208.0.0/18
- 8.208.0.0/19
- 8.208.128.0/17
- 8.208.141.0/24
- 8.208.32.0/19
- 8.209.0.0/19
- 8.209.0.0/20
- 8.209.128.0/18
- 8.209.128.0/19
- 8.209.16.0/20
- 8.209.160.0/19
- 8.209.192.0/18
- 8.209.192.0/19
- 8.209.224.0/19
- 8.209.36.0/23
- 8.209.36.0/24
- 8.209.37.0/24
- 8.209.38.0/23
- 8.209.38.0/24
- 8.209.39.0/24
- 8.209.40.0/22
- 8.209.40.0/23
- 8.209.42.0/23
- 8.209.44.0/22
- 8.209.44.0/23
- 8.209.46.0/23
- 8.209.48.0/20
- 8.209.48.0/21
- 8.209.56.0/21
- 8.209.64.0/18
- 8.209.64.0/19
- 8.209.96.0/19
- 8.210.0.0/16
- 8.210.0.0/17
- 8.210.128.0/17
- 8.210.240.0/24
- 8.211.0.0/17
- 8.211.0.0/18
- 8.211.104.0/21
- 8.211.128.0/18
- 8.211.128.0/19
- 8.211.160.0/19
- 8.211.192.0/18
- 8.211.192.0/19
- 8.211.224.0/19
- 8.211.226.0/24
- 8.211.64.0/18
- 8.211.80.0/21
- 8.211.88.0/21
- 8.211.96.0/21
- 8.212.0.0/17
- 8.212.0.0/18
- 8.212.128.0/18
- 8.212.128.0/19
- 8.212.160.0/19
- 8.212.192.0/18
- 8.212.192.0/19
- 8.212.224.0/19
- 8.212.64.0/18
- 8.213.0.0/17
- 8.213.0.0/18
- 8.213.128.0/19
- 8.213.128.0/20
- 8.213.144.0/20
- 8.213.160.0/21
- 8.213.160.0/22
- 8.213.164.0/22
- 8.213.176.0/20
- 8.213.176.0/21
- 8.213.184.0/21
- 8.213.192.0/18
- 8.213.192.0/19
- 8.213.224.0/19
- 8.213.251.0/24
- 8.213.252.0/24
- 8.213.253.0/24
- 8.213.64.0/18
- 8.214.0.0/16
- 8.214.0.0/17
- 8.214.128.0/17
- 8.215.0.0/16
- 8.215.0.0/17
- 8.215.128.0/17
- 8.215.160.0/24
- 8.215.162.0/23
- 8.215.168.0/24
- 8.215.169.0/24
- 8.216.0.0/17
- 8.216.0.0/18
- 8.216.128.0/17
- 8.216.128.0/18
- 8.216.148.0/24
- 8.216.192.0/18
- 8.216.64.0/18
- 8.216.69.0/24
- 8.216.74.0/24
- 8.217.0.0/16
- 8.217.0.0/17
- 8.217.128.0/17
- 8.218.0.0/16
- 8.218.0.0/17
- 8.218.128.0/17
- 8.219.0.0/16
- 8.219.0.0/17
- 8.219.128.0/17
- 8.219.40.0/21
- 8.220.116.0/24
- 8.220.128.0/18
- 8.220.128.0/19
- 8.220.147.0/24
- 8.220.160.0/19
- 8.220.192.0/18
- 8.220.192.0/19
- 8.220.224.0/19
- 8.220.229.0/24
- 8.220.64.0/18
- 8.220.64.0/19
- 8.220.96.0/19
- 8.221.0.0/17
- 8.221.0.0/18
- 8.221.0.0/21
- 8.221.128.0/17
- 8.221.128.0/18
- 8.221.184.0/22
- 8.221.188.0/22
- 8.221.192.0/18
- 8.221.192.0/21
- 8.221.200.0/21
- 8.221.208.0/21
- 8.221.216.0/21
- 8.221.48.0/21
- 8.221.56.0/21
- 8.221.64.0/18
- 8.221.8.0/21
- 8.222.0.0/20
- 8.222.0.0/21
- 8.222.112.0/20
- 8.222.128.0/17
- 8.222.128.0/18
- 8.222.16.0/20
- 8.222.16.0/21
- 8.222.192.0/18
- 8.222.24.0/21
- 8.222.32.0/20
- 8.222.32.0/21
- 8.222.40.0/21
- 8.222.48.0/20
- 8.222.48.0/21
- 8.222.56.0/21
- 8.222.64.0/20
- 8.222.64.0/21
- 8.222.72.0/21
- 8.222.8.0/21
- 8.222.80.0/20
- 8.222.80.0/21
- 8.222.88.0/21
- 8.222.96.0/19
- 8.222.96.0/20
- 8.223.0.0/17
- 8.223.0.0/18
- 8.223.128.0/17
- 8.223.128.0/18
- 8.223.192.0/18
- 8.223.64.0/18


@@ -0,0 +1,12 @@
- name: common-crawl
user_agent_regex: CCBot
action: ALLOW
# https://index.commoncrawl.org/ccbot.json
remote_addresses:
[
"2600:1f28:365:80b0::/60",
"18.97.9.168/29",
"18.97.14.80/29",
"18.97.14.88/30",
"98.85.178.216/32",
]


@@ -0,0 +1,617 @@
- name: huawei-cloud
action: DENY
# Updated 2025-08-20 from IP addresses for AS136907
remote_addresses:
- 1.178.32.0/20
- 1.178.48.0/20
- 101.44.0.0/20
- 101.44.144.0/20
- 101.44.16.0/20
- 101.44.160.0/20
- 101.44.173.0/24
- 101.44.176.0/20
- 101.44.192.0/20
- 101.44.208.0/22
- 101.44.212.0/22
- 101.44.216.0/22
- 101.44.220.0/22
- 101.44.224.0/22
- 101.44.228.0/22
- 101.44.232.0/22
- 101.44.236.0/22
- 101.44.240.0/22
- 101.44.244.0/22
- 101.44.248.0/22
- 101.44.252.0/24
- 101.44.253.0/24
- 101.44.254.0/24
- 101.44.255.0/24
- 101.44.32.0/20
- 101.44.48.0/20
- 101.44.64.0/20
- 101.44.80.0/20
- 101.44.96.0/20
- 101.46.0.0/20
- 101.46.128.0/21
- 101.46.136.0/21
- 101.46.144.0/21
- 101.46.152.0/21
- 101.46.160.0/21
- 101.46.168.0/21
- 101.46.176.0/21
- 101.46.184.0/21
- 101.46.192.0/21
- 101.46.200.0/21
- 101.46.208.0/21
- 101.46.216.0/21
- 101.46.224.0/22
- 101.46.232.0/22
- 101.46.236.0/22
- 101.46.240.0/22
- 101.46.244.0/22
- 101.46.248.0/22
- 101.46.252.0/24
- 101.46.253.0/24
- 101.46.254.0/24
- 101.46.255.0/24
- 101.46.32.0/20
- 101.46.48.0/20
- 101.46.64.0/20
- 101.46.80.0/20
- 103.198.203.0/24
- 103.215.0.0/24
- 103.215.1.0/24
- 103.215.3.0/24
- 103.240.156.0/22
- 103.240.157.0/24
- 103.255.60.0/22
- 103.255.60.0/24
- 103.255.61.0/24
- 103.255.62.0/24
- 103.255.63.0/24
- 103.40.100.0/23
- 103.84.110.0/24
- 110.238.100.0/22
- 110.238.104.0/21
- 110.238.112.0/21
- 110.238.120.0/22
- 110.238.124.0/22
- 110.238.64.0/21
- 110.238.72.0/21
- 110.238.80.0/20
- 110.238.96.0/24
- 110.238.98.0/24
- 110.238.99.0/24
- 110.239.127.0/24
- 110.239.184.0/22
- 110.239.188.0/23
- 110.239.190.0/23
- 110.239.64.0/19
- 110.239.96.0/19
- 110.41.208.0/24
- 110.41.209.0/24
- 110.41.210.0/24
- 111.119.192.0/20
- 111.119.208.0/20
- 111.119.224.0/20
- 111.119.240.0/20
- 111.91.0.0/20
- 111.91.112.0/20
- 111.91.16.0/20
- 111.91.32.0/20
- 111.91.48.0/20
- 111.91.64.0/20
- 111.91.80.0/20
- 111.91.96.0/20
- 114.119.128.0/19
- 114.119.160.0/21
- 114.119.168.0/24
- 114.119.169.0/24
- 114.119.170.0/24
- 114.119.171.0/24
- 114.119.172.0/22
- 114.119.176.0/20
- 115.30.32.0/20
- 115.30.48.0/20
- 119.12.160.0/20
- 119.13.112.0/20
- 119.13.160.0/24
- 119.13.161.0/24
- 119.13.162.0/23
- 119.13.163.0/24
- 119.13.164.0/22
- 119.13.168.0/21
- 119.13.168.0/24
- 119.13.169.0/24
- 119.13.170.0/24
- 119.13.172.0/24
- 119.13.173.0/24
- 119.13.32.0/22
- 119.13.36.0/22
- 119.13.64.0/24
- 119.13.65.0/24
- 119.13.66.0/23
- 119.13.68.0/22
- 119.13.72.0/22
- 119.13.76.0/22
- 119.13.80.0/21
- 119.13.88.0/22
- 119.13.92.0/22
- 119.13.96.0/20
- 119.8.0.0/21
- 119.8.128.0/24
- 119.8.129.0/24
- 119.8.130.0/23
- 119.8.132.0/22
- 119.8.136.0/21
- 119.8.144.0/20
- 119.8.160.0/19
- 119.8.18.0/24
- 119.8.192.0/20
- 119.8.192.0/21
- 119.8.200.0/21
- 119.8.208.0/20
- 119.8.21.0/24
- 119.8.22.0/24
- 119.8.224.0/24
- 119.8.227.0/24
- 119.8.228.0/22
- 119.8.23.0/24
- 119.8.232.0/21
- 119.8.24.0/21
- 119.8.240.0/23
- 119.8.242.0/23
- 119.8.244.0/24
- 119.8.245.0/24
- 119.8.246.0/24
- 119.8.247.0/24
- 119.8.248.0/24
- 119.8.249.0/24
- 119.8.250.0/24
- 119.8.253.0/24
- 119.8.254.0/23
- 119.8.32.0/19
- 119.8.4.0/24
- 119.8.64.0/22
- 119.8.68.0/24
- 119.8.69.0/24
- 119.8.70.0/24
- 119.8.71.0/24
- 119.8.72.0/21
- 119.8.8.0/21
- 119.8.80.0/20
- 119.8.96.0/19
- 121.91.152.0/21
- 121.91.168.0/21
- 121.91.200.0/21
- 121.91.200.0/24
- 121.91.201.0/24
- 121.91.204.0/24
- 121.91.205.0/24
- 122.8.128.0/20
- 122.8.144.0/20
- 122.8.160.0/20
- 122.8.176.0/21
- 122.8.184.0/22
- 122.8.188.0/22
- 124.243.128.0/18
- 124.243.156.0/24
- 124.243.157.0/24
- 124.243.158.0/24
- 124.243.159.0/24
- 124.71.248.0/24
- 124.71.249.0/24
- 124.71.250.0/24
- 124.71.252.0/24
- 124.71.253.0/24
- 124.81.0.0/20
- 124.81.112.0/20
- 124.81.128.0/20
- 124.81.144.0/20
- 124.81.16.0/20
- 124.81.160.0/20
- 124.81.176.0/20
- 124.81.192.0/20
- 124.81.208.0/20
- 124.81.224.0/20
- 124.81.240.0/20
- 124.81.32.0/20
- 124.81.48.0/20
- 124.81.64.0/20
- 124.81.80.0/20
- 124.81.96.0/20
- 139.9.98.0/24
- 139.9.99.0/24
- 14.137.132.0/22
- 14.137.136.0/22
- 14.137.140.0/22
- 14.137.152.0/24
- 14.137.153.0/24
- 14.137.154.0/24
- 14.137.155.0/24
- 14.137.156.0/24
- 14.137.157.0/24
- 14.137.161.0/24
- 14.137.163.0/24
- 14.137.169.0/24
- 14.137.170.0/23
- 14.137.172.0/22
- 146.174.128.0/20
- 146.174.144.0/20
- 146.174.160.0/20
- 146.174.176.0/20
- 148.145.160.0/20
- 148.145.192.0/20
- 148.145.208.0/20
- 148.145.224.0/23
- 148.145.234.0/23
- 148.145.236.0/23
- 148.145.238.0/23
- 149.232.128.0/20
- 149.232.144.0/20
- 150.40.128.0/20
- 150.40.144.0/20
- 150.40.160.0/20
- 150.40.176.0/20
- 150.40.182.0/24
- 150.40.192.0/20
- 150.40.208.0/20
- 150.40.224.0/20
- 150.40.240.0/20
- 154.220.192.0/19
- 154.81.16.0/20
- 154.83.0.0/23
- 154.86.32.0/20
- 154.86.48.0/20
- 154.93.100.0/23
- 154.93.104.0/23
- 156.227.22.0/23
- 156.230.32.0/21
- 156.230.40.0/21
- 156.230.64.0/18
- 156.232.16.0/20
- 156.240.128.0/18
- 156.249.32.0/20
- 156.253.16.0/20
- 157.254.211.0/24
- 157.254.212.0/24
- 159.138.0.0/20
- 159.138.112.0/21
- 159.138.114.0/24
- 159.138.120.0/22
- 159.138.124.0/24
- 159.138.125.0/24
- 159.138.126.0/23
- 159.138.128.0/20
- 159.138.144.0/20
- 159.138.152.0/21
- 159.138.16.0/22
- 159.138.160.0/20
- 159.138.176.0/23
- 159.138.178.0/24
- 159.138.179.0/24
- 159.138.180.0/24
- 159.138.181.0/24
- 159.138.182.0/23
- 159.138.188.0/23
- 159.138.190.0/23
- 159.138.192.0/20
- 159.138.20.0/22
- 159.138.208.0/21
- 159.138.216.0/22
- 159.138.220.0/23
- 159.138.224.0/20
- 159.138.24.0/21
- 159.138.240.0/20
- 159.138.32.0/20
- 159.138.48.0/20
- 159.138.64.0/21
- 159.138.67.0/24
- 159.138.76.0/24
- 159.138.77.0/24
- 159.138.78.0/24
- 159.138.79.0/24
- 159.138.80.0/20
- 159.138.96.0/20
- 166.108.192.0/20
- 166.108.208.0/20
- 166.108.224.0/20
- 166.108.240.0/20
- 176.52.128.0/20
- 176.52.144.0/20
- 180.87.192.0/20
- 180.87.208.0/20
- 180.87.224.0/20
- 180.87.240.0/20
- 182.160.0.0/20
- 182.160.16.0/24
- 182.160.17.0/24
- 182.160.18.0/23
- 182.160.20.0/22
- 182.160.20.0/24
- 182.160.24.0/21
- 182.160.36.0/22
- 182.160.49.0/24
- 182.160.52.0/22
- 182.160.56.0/21
- 182.160.56.0/24
- 182.160.57.0/24
- 182.160.58.0/24
- 182.160.59.0/24
- 182.160.60.0/24
- 182.160.61.0/24
- 182.160.62.0/24
- 183.87.112.0/20
- 183.87.128.0/20
- 183.87.144.0/20
- 183.87.32.0/20
- 183.87.48.0/20
- 183.87.64.0/20
- 183.87.80.0/20
- 183.87.96.0/20
- 188.119.192.0/20
- 188.119.208.0/20
- 188.119.224.0/20
- 188.119.240.0/20
- 188.239.0.0/20
- 188.239.16.0/20
- 188.239.32.0/20
- 188.239.48.0/20
- 189.1.192.0/20
- 189.1.208.0/20
- 189.1.224.0/20
- 189.1.240.0/20
- 189.28.112.0/20
- 189.28.96.0/20
- 190.92.192.0/19
- 190.92.224.0/19
- 190.92.248.0/24
- 190.92.252.0/24
- 190.92.253.0/24
- 190.92.254.0/24
- 201.77.32.0/20
- 202.170.88.0/21
- 202.76.128.0/20
- 202.76.144.0/20
- 202.76.160.0/20
- 202.76.176.0/20
- 203.123.80.0/20
- 203.167.20.0/23
- 203.167.22.0/24
- 212.34.192.0/20
- 212.34.208.0/20
- 213.250.128.0/20
- 213.250.144.0/20
- 213.250.160.0/20
- 213.250.176.0/21
- 213.250.184.0/21
- 219.83.0.0/20
- 219.83.112.0/22
- 219.83.116.0/23
- 219.83.118.0/23
- 219.83.121.0/24
- 219.83.122.0/24
- 219.83.123.0/24
- 219.83.124.0/24
- 219.83.16.0/20
- 219.83.32.0/20
- 219.83.76.0/23
- 2404:a140:43::/48
- 2405:f080::/39
- 2405:f080:1::/48
- 2405:f080:1000::/39
- 2405:f080:1200::/39
- 2405:f080:1400::/48
- 2405:f080:1401::/48
- 2405:f080:1402::/48
- 2405:f080:1403::/48
- 2405:f080:1500::/40
- 2405:f080:1600::/48
- 2405:f080:1602::/48
- 2405:f080:1603::/48
- 2405:f080:1800::/39
- 2405:f080:1800::/44
- 2405:f080:1810::/48
- 2405:f080:1811::/48
- 2405:f080:1812::/48
- 2405:f080:1813::/48
- 2405:f080:1814::/48
- 2405:f080:1815::/48
- 2405:f080:1900::/40
- 2405:f080:1e02::/47
- 2405:f080:1e04::/47
- 2405:f080:1e06::/47
- 2405:f080:1e1e::/47
- 2405:f080:1e20::/47
- 2405:f080:200::/48
- 2405:f080:2000::/39
- 2405:f080:201::/48
- 2405:f080:202::/48
- 2405:f080:2040::/48
- 2405:f080:2200::/39
- 2405:f080:2280::/48
- 2405:f080:2281::/48
- 2405:f080:2282::/48
- 2405:f080:2283::/48
- 2405:f080:2284::/48
- 2405:f080:2285::/48
- 2405:f080:2286::/48
- 2405:f080:2287::/48
- 2405:f080:2288::/48
- 2405:f080:2289::/48
- 2405:f080:228a::/48
- 2405:f080:228b::/48
- 2405:f080:228c::/48
- 2405:f080:228d::/48
- 2405:f080:228e::/48
- 2405:f080:228f::/48
- 2405:f080:2400::/39
- 2405:f080:2600::/39
- 2405:f080:2800::/48
- 2405:f080:2a00::/48
- 2405:f080:2e00::/47
- 2405:f080:3000::/38
- 2405:f080:3000::/40
- 2405:f080:3100::/40
- 2405:f080:3200::/48
- 2405:f080:3201::/48
- 2405:f080:3202::/48
- 2405:f080:3203::/48
- 2405:f080:3204::/48
- 2405:f080:3205::/48
- 2405:f080:3400::/38
- 2405:f080:3400::/40
- 2405:f080:3500::/40
- 2405:f080:3600::/48
- 2405:f080:3601::/48
- 2405:f080:3602::/48
- 2405:f080:3603::/48
- 2405:f080:3604::/48
- 2405:f080:3605::/48
- 2405:f080:400::/39
- 2405:f080:4000::/40
- 2405:f080:4100::/48
- 2405:f080:4102::/48
- 2405:f080:4103::/48
- 2405:f080:4104::/48
- 2405:f080:4200::/40
- 2405:f080:4300::/40
- 2405:f080:600::/48
- 2405:f080:800::/40
- 2405:f080:810::/44
- 2405:f080:a00::/39
- 2405:f080:a11::/48
- 2405:f080:e02::/48
- 2405:f080:e03::/48
- 2405:f080:e04::/47
- 2405:f080:e05::/48
- 2405:f080:e06::/48
- 2405:f080:e07::/48
- 2405:f080:e0e::/47
- 2405:f080:e10::/47
- 2405:f080:edff::/48
- 27.106.0.0/20
- 27.106.112.0/20
- 27.106.16.0/20
- 27.106.32.0/20
- 27.106.48.0/20
- 27.106.64.0/20
- 27.106.80.0/20
- 27.106.96.0/20
- 27.255.0.0/23
- 27.255.10.0/23
- 27.255.12.0/23
- 27.255.14.0/23
- 27.255.16.0/23
- 27.255.18.0/23
- 27.255.2.0/23
- 27.255.20.0/23
- 27.255.22.0/23
- 27.255.26.0/23
- 27.255.28.0/23
- 27.255.30.0/23
- 27.255.32.0/23
- 27.255.34.0/23
- 27.255.36.0/23
- 27.255.38.0/23
- 27.255.4.0/23
- 27.255.40.0/23
- 27.255.42.0/23
- 27.255.44.0/23
- 27.255.46.0/23
- 27.255.48.0/23
- 27.255.50.0/23
- 27.255.52.0/23
- 27.255.54.0/23
- 27.255.58.0/23
- 27.255.6.0/23
- 27.255.60.0/23
- 27.255.62.0/23
- 27.255.8.0/23
- 42.201.128.0/20
- 42.201.144.0/20
- 42.201.160.0/20
- 42.201.176.0/20
- 42.201.192.0/20
- 42.201.208.0/20
- 42.201.224.0/20
- 42.201.240.0/20
- 43.225.140.0/22
- 43.255.104.0/22
- 45.194.104.0/21
- 45.199.144.0/22
- 45.202.128.0/19
- 45.202.160.0/20
- 45.202.176.0/21
- 45.202.184.0/21
- 45.203.40.0/21
- 46.250.160.0/20
- 46.250.176.0/20
- 49.0.192.0/21
- 49.0.200.0/21
- 49.0.224.0/22
- 49.0.228.0/22
- 49.0.232.0/21
- 49.0.240.0/20
- 62.245.0.0/20
- 62.245.16.0/20
- 80.238.128.0/22
- 80.238.132.0/22
- 80.238.136.0/22
- 80.238.140.0/22
- 80.238.144.0/22
- 80.238.148.0/22
- 80.238.152.0/22
- 80.238.156.0/22
- 80.238.164.0/22
- 80.238.164.0/24
- 80.238.165.0/24
- 80.238.168.0/22
- 80.238.168.0/24
- 80.238.169.0/24
- 80.238.170.0/24
- 80.238.171.0/24
- 80.238.172.0/22
- 80.238.176.0/22
- 80.238.180.0/24
- 80.238.181.0/24
- 80.238.183.0/24
- 80.238.184.0/24
- 80.238.185.0/24
- 80.238.186.0/24
- 80.238.190.0/24
- 80.238.192.0/20
- 80.238.208.0/20
- 80.238.224.0/20
- 80.238.240.0/20
- 83.101.0.0/21
- 83.101.104.0/21
- 83.101.16.0/21
- 83.101.24.0/21
- 83.101.32.0/21
- 83.101.48.0/21
- 83.101.56.0/23
- 83.101.58.0/23
- 83.101.64.0/21
- 83.101.72.0/21
- 83.101.8.0/23
- 83.101.80.0/21
- 83.101.88.0/24
- 83.101.89.0/24
- 83.101.96.0/21
- 87.119.12.0/24
- 89.150.192.0/20
- 89.150.208.0/20
- 94.244.128.0/20
- 94.244.144.0/20
- 94.244.160.0/20
- 94.244.176.0/20
- 94.45.160.0/19
- 94.45.160.0/24
- 94.45.161.0/24
- 94.45.163.0/24
- 94.74.112.0/21
- 94.74.120.0/21
- 94.74.64.0/20
- 94.74.80.0/20
- 94.74.96.0/20


@@ -0,0 +1,165 @@
# Tencent Cloud crawler IP ranges
- name: tencent-cloud
action: DENY
remote_addresses:
- 101.32.0.0/17
- 101.32.176.0/20
- 101.32.192.0/18
- 101.33.116.0/22
- 101.33.120.0/21
- 101.33.16.0/20
- 101.33.2.0/23
- 101.33.32.0/19
- 101.33.4.0/22
- 101.33.64.0/19
- 101.33.8.0/21
- 101.33.96.0/20
- 119.28.28.0/24
- 119.29.29.0/24
- 124.156.0.0/16
- 129.226.0.0/18
- 129.226.128.0/18
- 129.226.224.0/19
- 129.226.96.0/19
- 150.109.0.0/18
- 150.109.128.0/20
- 150.109.160.0/19
- 150.109.192.0/18
- 150.109.64.0/20
- 150.109.80.0/21
- 150.109.88.0/22
- 150.109.96.0/19
- 162.14.60.0/22
- 162.62.0.0/18
- 162.62.128.0/20
- 162.62.144.0/21
- 162.62.152.0/22
- 162.62.172.0/22
- 162.62.176.0/20
- 162.62.192.0/19
- 162.62.255.0/24
- 162.62.80.0/20
- 162.62.96.0/19
- 170.106.0.0/16
- 43.128.0.0/14
- 43.132.0.0/22
- 43.132.12.0/22
- 43.132.128.0/17
- 43.132.16.0/22
- 43.132.28.0/22
- 43.132.32.0/22
- 43.132.40.0/22
- 43.132.52.0/22
- 43.132.60.0/24
- 43.132.64.0/22
- 43.132.69.0/24
- 43.132.70.0/23
- 43.132.72.0/21
- 43.132.80.0/21
- 43.132.88.0/22
- 43.132.92.0/23
- 43.132.96.0/19
- 43.133.0.0/16
- 43.134.0.0/16
- 43.135.0.0/17
- 43.135.128.0/18
- 43.135.192.0/19
- 43.152.0.0/21
- 43.152.11.0/24
- 43.152.12.0/22
- 43.152.128.0/22
- 43.152.133.0/24
- 43.152.134.0/23
- 43.152.136.0/21
- 43.152.144.0/20
- 43.152.160.0/22
- 43.152.16.0/21
- 43.152.164.0/23
- 43.152.166.0/24
- 43.152.168.0/21
- 43.152.178.0/23
- 43.152.180.0/22
- 43.152.184.0/21
- 43.152.192.0/18
- 43.152.24.0/22
- 43.152.31.0/24
- 43.152.32.0/23
- 43.152.35.0/24
- 43.152.36.0/22
- 43.152.40.0/21
- 43.152.48.0/20
- 43.152.74.0/23
- 43.152.76.0/22
- 43.152.80.0/22
- 43.152.8.0/23
- 43.152.92.0/23
- 43.153.0.0/16
- 43.154.0.0/15
- 43.156.0.0/15
- 43.158.0.0/16
- 43.159.0.0/20
- 43.159.128.0/17
- 43.159.64.0/23
- 43.159.70.0/23
- 43.159.72.0/21
- 43.159.81.0/24
- 43.159.82.0/23
- 43.159.85.0/24
- 43.159.86.0/23
- 43.159.88.0/21
- 43.159.96.0/19
- 43.160.0.0/15
- 43.162.0.0/16
- 43.163.0.0/17
- 43.163.128.0/18
- 43.163.192.255/32
- 43.163.193.0/24
- 43.163.194.0/23
- 43.163.196.0/22
- 43.163.200.0/21
- 43.163.208.0/20
- 43.163.224.0/19
- 43.164.0.0/18
- 43.164.128.0/17
- 43.165.0.0/16
- 43.166.128.0/18
- 43.166.224.0/19
- 43.168.0.0/20
- 43.168.16.0/21
- 43.168.24.0/22
- 43.168.255.0/24
- 43.168.32.0/19
- 43.168.64.0/20
- 43.168.80.0/22
- 43.169.0.0/16
- 43.170.0.0/16
- 43.174.0.0/18
- 43.174.128.0/17
- 43.174.64.0/22
- 43.174.68.0/23
- 43.174.71.0/24
- 43.174.74.0/23
- 43.174.76.0/22
- 43.174.80.0/20
- 43.174.96.0/19
- 43.175.0.0/20
- 43.175.113.0/24
- 43.175.114.0/23
- 43.175.116.0/22
- 43.175.120.0/21
- 43.175.128.0/18
- 43.175.16.0/22
- 43.175.192.0/20
- 43.175.20.0/23
- 43.175.208.0/21
- 43.175.216.0/22
- 43.175.220.0/23
- 43.175.22.0/24
- 43.175.222.0/24
- 43.175.224.0/20
- 43.175.25.0/24
- 43.175.26.0/23
- 43.175.28.0/22
- 43.175.32.0/19
- 43.175.64.0/19
- 43.175.96.0/20


@@ -0,0 +1,6 @@
- name: yandexbot
action: ALLOW
expression:
all:
- userAgent.matches("\\+http\\://yandex\\.com/bots")
- verifyFCrDNS(remoteAddress, "^.*\\.yandex\\.(ru|com|net)$")


@@ -3,6 +3,6 @@ package data
import "embed"
var (
//go:embed botPolicies.yaml botPolicies.json all:apps all:bots all:clients all:common all:crawlers all:meta
//go:embed botPolicies.yaml all:apps all:bots all:clients all:common all:crawlers all:meta all:services
BotPolicies embed.FS
)

data/embed_test.go Normal file

@@ -0,0 +1,38 @@
package data
import (
"path/filepath"
"strings"
"testing"
)
// TestBotPoliciesEmbed ensures all YAML files matched on disk are also
// accessible in the embedded BotPolicies filesystem. Note that
// filepath.Glob treats "**" like "*", so the pattern below only matches
// YAML files one directory level deep.
func TestBotPoliciesEmbed(t *testing.T) {
yamlFiles, err := filepath.Glob("./**/*.yaml")
if err != nil {
t.Fatalf("Failed to glob YAML files: %v", err)
}
if len(yamlFiles) == 0 {
t.Fatal("No YAML files found in directory tree")
}
t.Logf("Found %d YAML files to verify", len(yamlFiles))
for _, filePath := range yamlFiles {
embeddedPath := strings.TrimPrefix(filePath, "./")
t.Run(embeddedPath, func(t *testing.T) {
content, err := BotPolicies.ReadFile(embeddedPath)
if err != nil {
t.Errorf("Failed to read %s from embedded filesystem: %v", embeddedPath, err)
return
}
if len(content) == 0 {
t.Errorf("File %s exists in embedded filesystem but is empty", embeddedPath)
}
})
}
}


@@ -0,0 +1,88 @@
# Pathological bots to deny
# This correlates to data/bots/_deny-pathological.yaml in the source tree
# https://github.com/TecharoHQ/anubis/blob/main/data/bots/_deny-pathological.yaml
- import: (data)/bots/_deny-pathological.yaml
- import: (data)/bots/aggressive-brazilian-scrapers.yaml
# Aggressively block AI/LLM related bots/agents by default
- import: (data)/meta/ai-block-aggressive.yaml
# Consider replacing the aggressive AI policy with more selective policies:
# - import: (data)/meta/ai-block-moderate.yaml
# - import: (data)/meta/ai-block-permissive.yaml
# Search engine crawlers to allow, defaults to:
# - Google (so they don't try to bypass Anubis)
# - Apple
# - Bing
# - DuckDuckGo
# - Qwant
# - The Internet Archive
# - Kagi
# - Marginalia
# - Mojeek
- import: (data)/crawlers/_allow-good.yaml
# Challenge Firefox AI previews
- import: (data)/clients/x-firefox-ai.yaml
# Allow common "keeping the internet working" routes (well-known, favicon, robots.txt)
- import: (data)/common/keep-internet-working.yaml
# # Punish any bot with "bot" in the user-agent string
# # This is known to have a high false-positive rate, use at your own risk
# - name: generic-bot-catchall
# user_agent_regex: (?i:bot|crawler)
# action: CHALLENGE
# challenge:
# difficulty: 16 # impossible
# algorithm: slow # intentionally waste CPU cycles and time
# Requires a subscription to Thoth to use, see
# https://anubis.techaro.lol/docs/admin/thoth#geoip-based-filtering
- name: countries-with-aggressive-scrapers
action: WEIGH
geoip:
countries:
- BR
- CN
weight:
adjust: 10
# Requires a subscription to Thoth to use, see
# https://anubis.techaro.lol/docs/admin/thoth#asn-based-filtering
- name: aggressive-asns-without-functional-abuse-contact
action: WEIGH
asns:
match:
- 13335 # Cloudflare
- 136907 # Huawei Cloud
- 45102 # Alibaba Cloud
weight:
adjust: 10
# ## System load based checks.
# # If the system is under high load, add weight.
# - name: high-load-average
# action: WEIGH
# expression: load_1m >= 10.0 # make sure to end the load comparison in a .0
# weight:
# adjust: 20
## If your backend service is running on the same operating system as Anubis,
## you can uncomment this rule to make the challenge easier when the system is
## under low load.
##
## If it is not, remove weight.
# - name: low-load-average
# action: WEIGH
# expression: load_15m <= 4.0 # make sure to end the load comparison in a .0
# weight:
# adjust: -10
# Generic catchall rule
- name: generic-browser
user_agent_regex: >-
Mozilla|Opera
action: WEIGH
weight:
adjust: 10
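The WEIGH rules above only adjust a running score per request; separate threshold configuration decides what each final score means. A hypothetical sketch of the accumulation (threshold values and resulting actions here are invented for illustration, not Anubis's defaults):

```go
package main

import "fmt"

// rule is a simplified stand-in for a WEIGH policy entry: whether it
// matched this request, and how much it adjusts the score.
type rule struct {
	name   string
	match  bool
	adjust int
}

// decide sums the adjustments of all matching rules and maps the total
// to an action. The cutoffs are made up for this sketch.
func decide(rules []rule) string {
	weight := 0
	for _, r := range rules {
		if r.match {
			weight += r.adjust
		}
	}
	switch {
	case weight >= 20:
		return "DENY"
	case weight >= 10:
		return "CHALLENGE"
	default:
		return "ALLOW"
	}
}

func main() {
	rules := []rule{
		{"countries-with-aggressive-scrapers", true, 10},
		{"generic-browser", true, 10},
	}
	fmt.Println(decide(rules)) // both rules matched: 10+10 = 20
}
```

This is why the commented-out `low-load-average` rule uses a negative `adjust`: it pulls the total back down when the host is idle, making a challenge less likely.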


@@ -0,0 +1,2 @@
- import: (data)/clients/telegram-preview.yaml
- import: (data)/clients/vk-preview.yaml


@@ -0,0 +1,223 @@
- name: uptime-robot
user_agent_regex: UptimeRobot
action: ALLOW
# https://api.uptimerobot.com/meta/ips
remote_addresses: [
"3.12.251.153/32",
"3.20.63.178/32",
"3.77.67.4/32",
"3.79.134.69/32",
"3.105.133.239/32",
"3.105.190.221/32",
"3.133.226.214/32",
"3.149.57.90/32",
"3.212.128.62/32",
"5.161.61.238/32",
"5.161.73.160/32",
"5.161.75.7/32",
"5.161.113.195/32",
"5.161.117.52/32",
"5.161.177.47/32",
"5.161.194.92/32",
"5.161.215.244/32",
"5.223.43.32/32",
"5.223.53.147/32",
"5.223.57.22/32",
"18.116.205.62/32",
"18.180.208.214/32",
"18.192.166.72/32",
"18.193.252.127/32",
"24.144.78.39/32",
"24.144.78.185/32",
"34.198.201.66/32",
"45.55.123.175/32",
"45.55.127.146/32",
"49.13.24.81/32",
"49.13.130.29/32",
"49.13.134.145/32",
"49.13.164.148/32",
"49.13.167.123/32",
"52.15.147.27/32",
"52.22.236.30/32",
"52.28.162.93/32",
"52.59.43.236/32",
"52.87.72.16/32",
"54.64.67.106/32",
"54.79.28.129/32",
"54.87.112.51/32",
"54.167.223.174/32",
"54.249.170.27/32",
"63.178.84.147/32",
"64.225.81.248/32",
"64.225.82.147/32",
"69.162.124.227/32",
"69.162.124.235/32",
"69.162.124.238/32",
"78.46.190.63/32",
"78.46.215.1/32",
"78.47.98.55/32",
"78.47.173.76/32",
"88.99.80.227/32",
"91.99.101.207/32",
"128.140.41.193/32",
"128.140.106.114/32",
"129.212.132.140/32",
"134.199.240.137/32",
"138.197.53.117/32",
"138.197.53.138/32",
"138.197.54.143/32",
"138.197.54.247/32",
"138.197.63.92/32",
"139.59.50.44/32",
"142.132.180.39/32",
"143.198.249.237/32",
"143.198.250.89/32",
"143.244.196.21/32",
"143.244.196.211/32",
"143.244.221.177/32",
"144.126.251.21/32",
"146.190.9.187/32",
"152.42.149.135/32",
"157.90.155.240/32",
"157.90.156.63/32",
"159.69.158.189/32",
"159.223.243.219/32",
"161.35.247.201/32",
"167.99.18.52/32",
"167.235.143.113/32",
"168.119.53.160/32",
"168.119.96.239/32",
"168.119.123.75/32",
"170.64.250.64/32",
"170.64.250.132/32",
"170.64.250.235/32",
"178.156.181.172/32",
"178.156.184.20/32",
"178.156.185.127/32",
"178.156.185.231/32",
"178.156.187.238/32",
"178.156.189.113/32",
"178.156.189.249/32",
"188.166.201.79/32",
"206.189.241.133/32",
"209.38.49.1/32",
"209.38.49.206/32",
"209.38.49.226/32",
"209.38.51.43/32",
"209.38.53.7/32",
"209.38.124.252/32",
"216.144.248.18/31",
"216.144.248.21/32",
"216.144.248.22/31",
"216.144.248.24/30",
"216.144.248.28/31",
"216.144.248.30/32",
"216.245.221.83/32",
"2400:6180:10:200::56a0:b000/128",
"2400:6180:10:200::56a0:c000/128",
"2400:6180:10:200::56a0:e000/128",
"2400:6180:100:d0::94b6:4001/128",
"2400:6180:100:d0::94b6:5001/128",
"2400:6180:100:d0::94b6:7001/128",
"2406:da14:94d:8601:9d0d:7754:bedf:e4f5/128",
"2406:da14:94d:8601:b325:ff58:2bba:7934/128",
"2406:da14:94d:8601:db4b:c5ac:2cbe:9a79/128",
"2406:da1c:9c8:dc02:7ae1:f2ea:ab91:2fde/128",
"2406:da1c:9c8:dc02:7db9:f38b:7b9f:402e/128",
"2406:da1c:9c8:dc02:82b2:f0fd:ee96:579/128",
"2600:1f16:775:3a00:ac3:c5eb:7081:942e/128",
"2600:1f16:775:3a00:37bf:6026:e54a:f03a/128",
"2600:1f16:775:3a00:3f24:5bb0:95d7:5a6b/128",
"2600:1f16:775:3a00:8c2c:2ba6:778f:5be5/128",
"2600:1f16:775:3a00:91ac:3120:ff38:92b5/128",
"2600:1f16:775:3a00:dbbe:36b0:3c45:da32/128",
"2600:1f18:179:f900:71:af9a:ade7:d772/128",
"2600:1f18:179:f900:2406:9399:4ae6:c5d3/128",
"2600:1f18:179:f900:4696:7729:7bb3:f52f/128",
"2600:1f18:179:f900:4b7d:d1cc:2d10:211/128",
"2600:1f18:179:f900:5c68:91b6:5d75:5d7/128",
"2600:1f18:179:f900:e8dd:eed1:a6c:183b/128",
"2604:a880:800:14:0:1:68ba:d000/128",
"2604:a880:800:14:0:1:68ba:e000/128",
"2604:a880:800:14:0:1:68bb:0/128",
"2604:a880:800:14:0:1:68bb:1000/128",
"2604:a880:800:14:0:1:68bb:3000/128",
"2604:a880:800:14:0:1:68bb:4000/128",
"2604:a880:800:14:0:1:68bb:5000/128",
"2604:a880:800:14:0:1:68bb:6000/128",
"2604:a880:800:14:0:1:68bb:7000/128",
"2604:a880:800:14:0:1:68bb:a000/128",
"2604:a880:800:14:0:1:68bb:b000/128",
"2604:a880:800:14:0:1:68bb:c000/128",
"2604:a880:800:14:0:1:68bb:d000/128",
"2604:a880:800:14:0:1:68bb:e000/128",
"2604:a880:800:14:0:1:68bb:f000/128",
"2607:ff68:107::4/128",
"2607:ff68:107::14/128",
"2607:ff68:107::33/128",
"2607:ff68:107::48/127",
"2607:ff68:107::50/125",
"2607:ff68:107::58/127",
"2607:ff68:107::60/128",
"2a01:4f8:c0c:83fa::1/128",
"2a01:4f8:c17:42e4::1/128",
"2a01:4f8:c2c:9fc6::1/128",
"2a01:4f8:c2c:beae::1/128",
"2a01:4f8:1c1a:3d53::1/128",
"2a01:4f8:1c1b:4ef4::1/128",
"2a01:4f8:1c1b:5b5a::1/128",
"2a01:4f8:1c1b:7ecc::1/128",
"2a01:4f8:1c1c:11aa::1/128",
"2a01:4f8:1c1c:5353::1/128",
"2a01:4f8:1c1c:7240::1/128",
"2a01:4f8:1c1c:a98a::1/128",
"2a01:4f8:c012:c60e::1/128",
"2a01:4f8:c013:c18::1/128",
"2a01:4f8:c013:34c0::1/128",
"2a01:4f8:c013:3b0f::1/128",
"2a01:4f8:c013:3c52::1/128",
"2a01:4f8:c013:3c53::1/128",
"2a01:4f8:c013:3c54::1/128",
"2a01:4f8:c013:3c55::1/128",
"2a01:4f8:c013:3c56::1/128",
"2a01:4ff:f0:bfd::1/128",
"2a01:4ff:f0:2219::1/128",
"2a01:4ff:f0:3e03::1/128",
"2a01:4ff:f0:5f80::1/128",
"2a01:4ff:f0:7fad::1/128",
"2a01:4ff:f0:9c5f::1/128",
"2a01:4ff:f0:b2f2::1/128",
"2a01:4ff:f0:b6f1::1/128",
"2a01:4ff:f0:d283::1/128",
"2a01:4ff:f0:d3cd::1/128",
"2a01:4ff:f0:e516::1/128",
"2a01:4ff:f0:e9cf::1/128",
"2a01:4ff:f0:eccb::1/128",
"2a01:4ff:f0:efd1::1/128",
"2a01:4ff:f0:fdc7::1/128",
"2a01:4ff:2f0:193c::1/128",
"2a01:4ff:2f0:27de::1/128",
"2a01:4ff:2f0:3b3a::1/128",
"2a03:b0c0:2:f0::bd91:f001/128",
"2a03:b0c0:2:f0::bd92:1/128",
"2a03:b0c0:2:f0::bd92:1001/128",
"2a03:b0c0:2:f0::bd92:2001/128",
"2a03:b0c0:2:f0::bd92:4001/128",
"2a03:b0c0:2:f0::bd92:5001/128",
"2a03:b0c0:2:f0::bd92:6001/128",
"2a03:b0c0:2:f0::bd92:7001/128",
"2a03:b0c0:2:f0::bd92:8001/128",
"2a03:b0c0:2:f0::bd92:9001/128",
"2a03:b0c0:2:f0::bd92:a001/128",
"2a03:b0c0:2:f0::bd92:b001/128",
"2a03:b0c0:2:f0::bd92:c001/128",
"2a03:b0c0:2:f0::bd92:e001/128",
"2a03:b0c0:2:f0::bd92:f001/128",
"2a05:d014:1815:3400:6d:9235:c1c0:96ad/128",
"2a05:d014:1815:3400:654f:bd37:724c:212b/128",
"2a05:d014:1815:3400:90b4:4ef9:5631:b170/128",
"2a05:d014:1815:3400:9779:d8e9:100a:9642/128",
"2a05:d014:1815:3400:af29:e95e:64ff:df81/128",
"2a05:d014:1815:3400:c7d6:f7f3:6cc1:30d1/128",
"2a05:d014:1815:3400:d784:e5dd:8e0:67cb/128",
]

View File

@@ -13,7 +13,13 @@ func Zilch[T any]() T {
// Impl is a lazy key->value map. It's a wrapper around a map and a mutex. If values exceed their time-to-live, they are pruned at Get time.
type Impl[K comparable, V any] struct {
data map[K]decayMapEntry[V]
// deleteCh receives decay-deletion requests from readers.
deleteCh chan deleteReq[K]
// stopCh stops the background cleanup worker.
stopCh chan struct{}
wg sync.WaitGroup
lock sync.RWMutex
}
type decayMapEntry[V any] struct {
@@ -21,33 +27,56 @@ type decayMapEntry[V any] struct {
expiry time.Time
}
// deleteReq is a request to remove a key if its expiry timestamp still matches
// the observed one. This prevents racing with concurrent Set updates.
type deleteReq[K comparable] struct {
key K
expiry time.Time
}
// New creates a new DecayMap of key type K and value type V.
//
// Key types must be comparable to work with maps.
func New[K comparable, V any]() *Impl[K, V] {
m := &Impl[K, V]{
data: make(map[K]decayMapEntry[V]),
deleteCh: make(chan deleteReq[K], 1024),
stopCh: make(chan struct{}),
}
m.wg.Add(1)
go m.cleanupWorker()
return m
}
// expire forcibly expires a key by setting its time-to-live one second in the past.
func (m *Impl[K, V]) expire(key K) bool {
// Use a single write lock to avoid RUnlock->Lock convoy.
m.lock.Lock()
defer m.lock.Unlock()
val, ok := m.data[key]
if !ok {
return false
}
val.expiry = time.Now().Add(-1 * time.Second)
m.data[key] = val
return true
}
// Delete a value from the DecayMap by key.
//
// If the value does not exist, return false. Return true after
// deletion.
func (m *Impl[K, V]) Delete(key K) bool {
// Use a single write lock to avoid RUnlock->Lock convoy.
m.lock.Lock()
defer m.lock.Unlock()
_, ok := m.data[key]
if ok {
delete(m.data, key)
}
return ok
}
// Get gets a value from the DecayMap by key.
//
// If a value has expired, forcibly delete it if it was not updated.
@@ -61,13 +90,12 @@ func (m *Impl[K, V]) Get(key K) (V, bool) {
}
if time.Now().After(value.expiry) {
m.lock.Lock()
// Defer decay deletion to the background worker to avoid convoy.
select {
case m.deleteCh <- deleteReq[K]{key: key, expiry: value.expiry}:
default:
// Channel full: drop request; a future Cleanup() or Get will retry.
}
m.lock.Unlock()
return Zilch[V](), false
}
@@ -105,3 +133,64 @@ func (m *Impl[K, V]) Len() int {
defer m.lock.RUnlock()
return len(m.data)
}
// Close stops the background cleanup worker. It's optional to call; maps live
// for the process lifetime in many cases. Call in tests or when you know you no
// longer need the map to avoid goroutine leaks.
func (m *Impl[K, V]) Close() {
close(m.stopCh)
m.wg.Wait()
}
// cleanupWorker batches decay deletions to minimize lock contention.
func (m *Impl[K, V]) cleanupWorker() {
defer m.wg.Done()
batch := make([]deleteReq[K], 0, 64)
ticker := time.NewTicker(10 * time.Millisecond)
defer ticker.Stop()
flush := func() {
if len(batch) == 0 {
return
}
m.applyDeletes(batch)
// reset batch without reallocating
batch = batch[:0]
}
for {
select {
case req := <-m.deleteCh:
batch = append(batch, req)
case <-ticker.C:
flush()
case <-m.stopCh:
// Drain any remaining requests then exit
for {
select {
case req := <-m.deleteCh:
batch = append(batch, req)
default:
flush()
return
}
}
}
}
}
func (m *Impl[K, V]) applyDeletes(batch []deleteReq[K]) {
now := time.Now()
m.lock.Lock()
for _, req := range batch {
entry, ok := m.data[req.key]
if !ok {
continue
}
// Only delete if the expiry is unchanged and already past.
if entry.expiry.Equal(req.expiry) && now.After(entry.expiry) {
delete(m.data, req.key)
}
}
m.lock.Unlock()
}

View File

@@ -7,6 +7,7 @@ import (
func TestImpl(t *testing.T) {
dm := New[string, string]()
t.Cleanup(dm.Close)
dm.Set("test", "hi", 5*time.Minute)
@@ -28,10 +29,24 @@ func TestImpl(t *testing.T) {
if ok {
t.Error("got value even though it was supposed to be expired")
}
// Deletion of expired entries after Get is deferred to a background worker.
// Assert it eventually disappears from the map.
deadline := time.Now().Add(200 * time.Millisecond)
for time.Now().Before(deadline) {
if dm.Len() == 0 {
break
}
time.Sleep(5 * time.Millisecond)
}
if dm.Len() != 0 {
t.Fatalf("expected background cleanup to remove expired key; len=%d", dm.Len())
}
}
func TestCleanup(t *testing.T) {
dm := New[string, string]()
t.Cleanup(dm.Close)
dm.Set("test1", "hi1", 1*time.Second)
dm.Set("test2", "hi2", 2*time.Second)

View File

@@ -19,5 +19,3 @@ npm-debug.log*
yarn-debug.log*
yarn-error.log*

View File

@@ -1,10 +1,11 @@
FROM docker.io/library/node:lts AS build
WORKDIR /app
COPY . .
RUN npm ci && npm run build
FROM ghcr.io/xe/nginx-micro
COPY --from=build /app/build /www
COPY ./manifest/cfg/nginx/nginx.conf /conf
LABEL org.opencontainers.image.source="https://github.com/TecharoHQ/anubis"

View File

@@ -0,0 +1,248 @@
---
slug: release/v1.20.0
title: Anubis v1.20.0 is now available!
authors: [xe]
tags: [release]
image: sunburst.webp
---
![](./sunburst.webp)
Hey all!
Today we released [Anubis v1.20.0: Thancred Waters](https://github.com/TecharoHQ/anubis/releases/tag/v1.20.0). This adds a lot of new and exciting features to Anubis, including but not limited to the `WEIGH` action, custom weight thresholds, Imprint/impressum support, and a no-JS challenge. Here's what you need to know so you can protect your websites in new and exciting ways!
{/* truncate */}
## Sponsoring the product
If you rely on Anubis to keep your website safe, please consider sponsoring the project on [GitHub Sponsors](https://github.com/sponsors/Xe) or [Patreon](https://patreon.com/cadey). Funding helps pay hosting bills and offset the time spent on making this project the best it can be. Every little bit helps and when enough money is raised, [I can make Anubis my full-time job](https://github.com/TecharoHQ/anubis/discussions/278).
I am waiting to hear back from NLNet on whether Anubis was selected for funding. Let's hope it is!
## Deprecation warning: `DIFFICULTY`
Anubis v1.20.0 is the last version to support the `DIFFICULTY` flag in the exact way it currently does. In future versions, this will be ineffectual and you should use the [custom threshold system](/docs/admin/configuration/thresholds) instead.
If this becomes an imposition in practice, this will be reverted.
## Chrome won't show "invalid response" after "Success!"
There were a bunch of smaller fixes in Anubis this time around, but the biggest one was finally squashing the ["invalid response" after "Success!" issue](https://github.com/TecharoHQ/anubis/issues/564) that had been plaguing Chrome users. This was a really annoying issue to track down but it was discovered while we were working on better end-to-end / functional testing: [Chrome randomizes the `Accept-Language` header](https://github.com/explainers-by-googlers/reduce-accept-language) so that websites can't do fingerprinting as easily.
When Anubis issues a challenge, it derives a challenge string from [information that the browser sends to the server](/docs/design/how-anubis-works#challenge-format). Anubis doesn't store these challenge strings anywhere; when a solution is being checked, it recalculates the challenge string from the request. With a randomized `Accept-Language`, a client would get a challenge on one request, compute the response for that challenge, and then the server would validate the response against a different challenge. This server-side validation would fail, leading to the user seeing "invalid response" after the client reported success.
I suspect this was why Vanadium and Cromite were having sporadic issues as well.
## New Features
The biggest feature in Anubis is the "weight" subsystem. This allows administrators to make custom rules that change the suspicion level of a request without having to take immediate action. As an example, consider the self-hostable git forge [Gitea](https://about.gitea.com/). When you load a page in Gitea, it creates a session cookie that your browser sends with every request. Weight allows you to mark a request that includes a Gitea session token as _less_ suspicious:
```yaml
- name: gitea-session-token
action: WEIGH
expression:
all:
# Check if the request has a Cookie header
- '"Cookie" in headers'
# Check if the request's Cookie header contains the Gitea session token
- headers["Cookie"].contains("i_love_gitea=")
# Remove 5 weight points
weight:
adjust: -5
```
This is different from the past where you could only allow every request with a Gitea session token, meaning that the invention of lying would allow malicious clients to bypass protection.
Weight is added and removed whenever a `WEIGH` rule is encountered. When all rules are processed and the request doesn't match any `ALLOW`, `CHALLENGE`, or `DENY` rules, Anubis uses [weight thresholds](/docs/admin/configuration/thresholds) to figure out how to handle that request. Thresholds are defined in the [policy file](/docs/admin/policies) alongside your bot rules:
```yaml
thresholds:
- name: minimal-suspicion # This client is likely fine, its soul is lighter than a feather
expression: weight <= 0 # a feather weighs zero units
action: ALLOW # Allow the traffic through
# For clients that had some weight reduced through custom rules, give them a
# lightweight challenge.
- name: mild-suspicion
expression:
all:
- weight > 0
- weight < 10
action: CHALLENGE
challenge:
# https://anubis.techaro.lol/docs/admin/configuration/challenges/metarefresh
algorithm: metarefresh
difficulty: 1
report_as: 1
# For clients that are browser-like but have either gained points from custom rules or
# report as a standard browser.
- name: moderate-suspicion
expression:
all:
- weight >= 10
- weight < 20
action: CHALLENGE
challenge:
# https://anubis.techaro.lol/docs/admin/configuration/challenges/proof-of-work
algorithm: fast
difficulty: 2 # two leading zeros, very fast for most clients
report_as: 2
# For clients that are browser like and have gained many points from custom rules
- name: extreme-suspicion
expression: weight >= 20
action: CHALLENGE
challenge:
# https://anubis.techaro.lol/docs/admin/configuration/challenges/proof-of-work
algorithm: fast
difficulty: 4
report_as: 4
```
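The threshold selection above boils down to a weight-to-action mapping: after all `WEIGH` rules run, the accumulated weight selects exactly one threshold. This Go sketch mirrors the names and boundaries in the YAML example; it is an illustration of the semantics, not Anubis's implementation.

```go
package main

// pickThreshold returns the action for an accumulated request weight,
// following the example thresholds above: minimal, mild, moderate, extreme.
func pickThreshold(weight int) string {
	switch {
	case weight <= 0:
		return "ALLOW" // minimal-suspicion
	case weight < 10:
		return "CHALLENGE metarefresh/1" // mild-suspicion
	case weight < 20:
		return "CHALLENGE fast/2" // moderate-suspicion
	default:
		return "CHALLENGE fast/4" // extreme-suspicion
	}
}
```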
:::note
If you don't have thresholds defined in your Anubis policy file, Anubis will default to the "legacy" behaviour where browser-like clients get a challenge at the default difficulty.
:::
This lets most clients through if they pass a simple [proof of work challenge](/docs/admin/configuration/challenges/proof-of-work), but any clients that are less suspicious (like ones with a Gitea session token) are given the lightweight [Meta Refresh](/docs/admin/configuration/challenges/metarefresh) challenge instead.
Threshold expressions are like [Bot rule expressions](/docs/admin/configuration/expressions), but there's only one input: the request's weight. If no thresholds match, the request is allowed through.
### Imprint/Impressum Support
European countries like Germany [require an imprint/impressum](https://www.ionos.com/digitalguide/websites/digital-law/a-case-for-thinking-global-germanys-impressum-laws/) to be present in the footer of their website. This lets users contact someone on the team behind a website in case they run into issues. Websites must also generally have a separate page where users can view an extended imprint with other information like a privacy policy or a copyright notice.
Anubis v1.20.0 and later [has support for showing imprints](/docs/admin/configuration/impressum). You can configure two kinds of imprints:
1. An imprint that is shown in the footer of every Anubis page.
2. An extended imprint / privacy policy that is shown when users click on the "Imprint" link. For example, [here's the imprint for the website you're looking at right now](https://anubis.techaro.lol/.within.website/x/cmd/anubis/api/imprint).
Imprints are configured in [the policy file](/docs/admin/policies/):
```yaml
impressum:
# Displayed at the bottom of every page rendered by Anubis.
footer: >-
This website is hosted by Zombocom. If you have any complaints or notes
about the service, please contact
<a href="mailto:contact@zombocom.example">contact@zombocom.example</a> and
we will assist you as soon as possible.
# The imprint page that will be linked to at the footer of every Anubis page.
page:
# The HTML <title> of the page
title: Imprint and Privacy Policy
# The HTML contents of the page. The exact contents of this page can
# and will vary by locale. Please consult with a lawyer if you are not
# sure what to put here.
body: >-
<p>Last updated: June 2025</p>
<h2>Information that is gathered from visitors</h2>
<p>In common with other websites, log files are stored on the web server
saving details such as the visitor's IP address, browser type, referring
page and time of visit.</p>
<p>Cookies may be used to remember visitor preferences when interacting
with the website.</p>
<p>Where registration is required, the visitor's email and a username
will be stored on the server.</p>
<!-- ... -->
```
If this is insufficient, please [file an issue](https://github.com/TecharoHQ/anubis/issues/new) with a link to the relevant legislation for your country so that this feature can be amended and improved.
### No-JS Challenge
One of the first issues in Anubis before it was moved to the [TecharoHQ org](https://github.com/TecharoHQ) was a request [to support challenging browsers without using JavaScript](https://github.com/Xe/x/issues/651). This is a pretty challenging thing to do without rethinking how Anubis works from a fundamentally low level, and with v1.20.0, [Anubis finally has support for running without client-side JavaScript](https://github.com/TecharoHQ/anubis/issues/95) thanks to the [Meta Refresh](/docs/admin/configuration/challenges/metarefresh) challenge.
When Anubis decides it needs to send a challenge to your browser, it sends a challenge page. Historically, this challenge page is [an HTML template](https://github.com/TecharoHQ/anubis/blob/main/web/index.templ) that kicks off some JavaScript, reads the challenge information out of the page body, and then solves it as fast as possible in order to let users see the website they want to visit.
In v1.20.0, Anubis has a challenge registry to hold [different client challenge implementations](/docs/admin/configuration/challenges/). This allows us to implement anything we want as long as it can render a page to show a challenge and then check if the result is correct. This is going to be used to implement a WebAssembly-based proof of work option (one that will be way more efficient than the existing browser JS version), but as a proof of concept I implemented a simple challenge using [HTML `<meta refresh>`](https://en.wikipedia.org/wiki/Meta_refresh).
In my testing, this has worked with every browser I have thrown it at (including CLI browsers, the browser embedded in emacs, etc.). The default configuration of Anubis does use the [meta refresh challenge](/docs/admin/configuration/challenges/metarefresh) for [clients with a very low suspicion](/docs/admin/configuration/thresholds), but by default clients will be sent an [easy proof of work challenge](/docs/admin/configuration/challenges/proof-of-work).
If the false positive rate of this challenge turns out to be low in practice, the meta refresh challenge will be enabled by default for browsers in future versions of Anubis.
### `robots2policy`
Anubis was created because crawler bots don't respect [`robots.txt` files](https://www.robotstxt.org/). Administrators have been working on refining and crafting their `robots.txt` files for years, and one common comment is that people don't know where to start crafting their own rules.
Anubis now ships with a [`robots2policy` tool](/docs/admin/robots2policy) that lets you convert your `robots.txt` file to an Anubis policy.
```text
robots2policy -input https://github.com/robots.txt
```
:::note
If you installed Anubis from [an OS package](/docs/admin/native-install), you may need to run `anubis-robots2policy` instead of `robots2policy`.
:::
We hope that this will help you get started with Anubis faster. We are working on a version of this that will run in the documentation via WebAssembly.
### Open Graph configuration is being moved to the policy file
Anubis supports reading [Open Graph tags](/docs/admin/configuration/open-graph) from target services and returning them in challenge pages. This makes the right metadata show up when linking services protected by Anubis in chat applications or on social media.
In order to test the migration of all of the configuration to the policy file, Open Graph configuration has been moved to the policy file. For more information, please read [the Open Graph configuration options](/docs/admin/configuration/open-graph#configuration-options).
You can also set default Open Graph tags:
```yaml
openGraph:
enabled: true
ttl: 24h
# If set, return these opengraph values instead of looking them up with
# the target service.
#
# Correlates to properties in https://ogp.me/
override:
# og:title is required, it is the title of the website
"og:title": "Techaro Anubis"
"og:description": >-
Anubis is a Web AI Firewall Utility that helps you fight the bots
away so that you can maintain uptime at work!
"description": >-
Anubis is a Web AI Firewall Utility that helps you fight the bots
away so that you can maintain uptime at work!
```
## Improvements and optimizations
One of the biggest improvements we've made in v1.20.0 is replacing [SHA-256 with xxhash](https://github.com/TecharoHQ/anubis/pull/676). Anubis uses hashes all over the place to help with identifying clients, matching against rules when allowing traffic through, in error messages sent to users, and more. Historically these have been done with [SHA-256](https://en.wikipedia.org/wiki/SHA-2), however this has had a mild performance impact in real-world use. As a result, we now use [xxhash](https://xxhash.com/) when possible. This makes policy matching 3x faster in some scenarios and reduces memory usage across the board.
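The principle is that a seeded non-cryptographic hash is much cheaper than SHA-256 when you only need identification, not tamper resistance. xxhash lives outside the standard library, so this dependency-free sketch uses `hash/maphash` as a stand-in with the same shape: seeded, fast, and deterministic for a given seed.

```go
package main

import "hash/maphash"

// seed is generated once per process; hashes are stable within a process
// but intentionally differ across processes, which is fine for in-memory
// client identification and rule matching.
var seed = maphash.MakeSeed()

// fastHash is a non-cryptographic stand-in for the xxhash calls Anubis
// uses where SHA-256's guarantees aren't needed.
func fastHash(s string) uint64 {
	return maphash.String(seed, s)
}
```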
Anubis now uses [bart](https://pkg.go.dev/github.com/gaissmai/bart) for doing IP address matching when you specify addresses in a `remote_address` check configuration or when you are matching against [advanced checks](/docs/admin/thoth). This uses the same kind of IP address routing configuration that your OS kernel does, making it very fast to query information about IP addresses. This makes IP address range matches anywhere from 3-14 times faster depending on the number of addresses it needs to match against. For more information and benchmarks, check out [@JasonLovesDoggo](https://github.com/JasonLovesDoggo)'s PR: [perf: replace cidranger with bart for significant performance improvements #675](https://github.com/TecharoHQ/anubis/pull/675).
## What's up next?
v1.21.0 is already shaping up to be a massive improvement as Anubis adds [internationalization](https://en.wikipedia.org/wiki/Internationalization) support, allowing your users to see its messages in the language they're most comfortable with.
So far Anubis supports the following languages:
- Chinese (Simplified and Traditional)
- English
- French
- Portuguese (Brazil)
- Spanish
If you want to contribute translations, please [file an issue](https://github.com/TecharoHQ/anubis/issues/new) with your language of choice or submit a pull request to [the `lib/localization/locales` folder](https://github.com/TecharoHQ/anubis/tree/main/lib/localization/locales). We are about to introduce features to the translation stack, so you may want to hold off a hot minute, but we welcome any and all contributions to making Anubis useful to a global audience.
Other things we plan to do:
- Move configuration to the policy file
- Support reloading the policy file at runtime without having to restart Anubis
- Detecting if a client is "brand new"
- A [Valkey](https://valkey.io/)-backed store for sharing information between instances of Anubis
- Augmenting No-JS support in the paid product
- TLS fingerprinting
- Automated testing improvements in CI (FreeBSD CI support, better automated integration/functional testing, etc.)
## Conclusion
I hope that these features let you get the same Anubis power you've come to know and love and increases the things you can do with it! I've been really excited to ship [thresholds](/docs/admin/configuration/thresholds) and the cloud-based services for Anubis.
If you run into any problems, please [file an issue](https://github.com/TecharoHQ/anubis/issues/new). Otherwise, have a good day and get back to making your communities great.


View File

@@ -0,0 +1,105 @@
---
slug: incident/TI-20250709-0001
title: "TI-20250709-0001: IPv4 traffic failures for Techaro services"
authors: [xe]
tags: [incident]
image: ./window-portal.jpg
---
![](./window-portal.jpg)
Techaro services were down for IPv4 traffic on July 9th, 2025. This blogpost is a report of what happened, what actions were taken to resolve the situation, and what actions are being done in the near future to prevent this problem. Enjoy this incident report!
{/* truncate */}
:::note
In other companies, this kind of documentation would be kept internal. At Techaro, we believe that you deserve radical candor and the truth. As such, we are proving our lofty words with actions by publishing details about how things go wrong publicly.
Everything past this point follows my standard incident root cause meeting template.
:::
This incident report will focus on the services affected, timeline of what happened at which stage of the incident, where we got lucky, the root cause analysis, and what action items are being planned or taken to prevent this from happening in the future.
## Timeline
All events take place on July 9th, 2025.
| Time (UTC) | Description |
| :--------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 12:32 | Uptime Kuma reports that another unrelated website on the same cluster was timing out. |
| 12:33 | Uptime Kuma reports that Thoth's production endpoint is failing gRPC health checks. |
| 12:35 | Investigation begins, [announcement made on Xe's Bluesky](https://bsky.app/profile/xeiaso.net/post/3ltjtdczpwc2x) due to the impact including their personal blog. |
| 12:39 | `nginx-ingress` logs on the production cluster show IPv6 traffic but an abrupt cutoff in IPv4 traffic around 12:32 UTC. Ticket is opened with the hosting provider. |
| 12:41 | IPv4 traffic resumes long enough for Uptime Kuma to report uptime, but then immediately fails again. |
| 12:46 | IPv4 traffic resumes long enough for Uptime Kuma to report uptime, but then immediately fails again. (repeat instances of this have been scrubbed, but it happened about every 5-10 minutes) |
| 12:48 | First reply from the hosting provider. |
| 12:57 | Reply to hosting provider, ask to reboot the load balancer. |
| 13:00      | Incident responder became busy with a meeting, believing the downtime was out of their control and that uptime monitoring software would alert them if service came back up.                   |
| 13:20      | Incident responder ended the meeting and went back to monitoring the downtime and preparing this document.                                                                                     |
| 13:34 | IPv4 traffic starts to show up in the `ingress-nginx` logs. |
| 13:35 | All services start to report healthy. Incident status changes to monitoring. |
| 13:48 | Incident closed. |
| 14:07 | Incident re-opened. Issues seem to be manifesting as BGP issues in the upstream provider. |
| 14:10 | IPv4 traffic resumes and then stops. |
| 14:18 | IPv4 traffic resumes again. Incident status changes to monitoring. |
| 14:40 | Incident closed. |
## Services affected
| Service name | User impact |
| :-------------------------------------------------- | :----------------- |
| [Anubis Docs](https://anubis.techaro.lol) (IPv4) | Connection timeout |
| [Anubis Docs](https://anubis.techaro.lol) (IPv6) | None |
| [Thoth](/docs/admin/thoth/) (IPv4) | Connection timeout |
| [Thoth](/docs/admin/thoth/) (IPv6) | None |
| Other websites colocated on the same cluster (IPv4) | Connection timeout |
| Other websites colocated on the same cluster (IPv6) | None |
## Root cause analysis
To simplify server management, Techaro runs a [Kubernetes](https://kubernetes.io/) cluster on [Vultr VKE](https://www.vultr.com/kubernetes/) (Vultr Kubernetes Engine). When you do this, Vultr needs to provision a [load balancer](https://docs.vultr.com/how-to-use-a-vultr-load-balancer-with-vke) to bridge the gap between the outside world and the Kubernetes world, kinda like this:
```mermaid
---
title: Overall architecture
---
flowchart LR
UT(User Traffic)
subgraph Provider Infrastructure
LB[Load Balancer]
end
subgraph Kubernetes
IN(ingress-nginx)
TH(Thoth)
AN(Anubis Docs)
OS(Other sites)
IN --> TH
IN --> AN
IN --> OS
end
UT --> LB --> IN
```
Techaro controls everything inside the Kubernetes side of that diagram. Anything else is out of our control. That load balancer is routed to the public internet via [Border Gateway Protocol (BGP)](https://en.wikipedia.org/wiki/Border_Gateway_Protocol).
If there is an interruption with the BGP sessions in the upstream provider, this can manifest as things either not working or inconsistently working. This is made more difficult by the fact that the IPv4 and IPv6 internets are technically separate networks. With this in mind, it's very possible to have IPv4 traffic fail but not IPv6 traffic.
The root cause is that the hosting provider we use for production services had flapping IPv4 BGP sessions in its Toronto region. When this happens all we can do is open a ticket and wait for it to come back up.
## Where we got lucky
The Uptime Kuma instance that caught this incident runs on an IPv4-only network. If it were dual-stack, this incident would not have been caught as quickly.
The `ingress-nginx` logs print IP addresses of remote clients to the log feed. If this was not the case, it would be much more difficult to find this error.
## Action items
- A single instance of downtime like this is not enough reason to move providers. Moving providers because of this is thus out of scope.
- Techaro needs a status page hosted on a different cloud provider than is used for the production cluster (`TecharoHQ/TODO#6`).
- Health checks for IPv4 and IPv6 traffic need to be created (`TecharoHQ/TODO#7`).
- Remove the requirement for [Anubis to pass Thoth health checks before it can start if Thoth is enabled](https://github.com/TecharoHQ/anubis/pull/794).

---
slug: release/v1.21.1
title: Anubis v1.21.1 is now available!
authors: [xe]
tags: [release]
image: anubis-i18n.webp
---
![](./anubis-i18n.webp)
Hey all!
Recently we released [Anubis v1.21.1: Minfilia Warde (Echo 1)](https://github.com/TecharoHQ/anubis/releases/tag/v1.21.1). This is a fairly meaty release and like [last time](../2025-06-27-release-1.20.0/index.mdx) this blogpost will tell you what you need to know before you update. Kick back, get some popcorn and let's dig into this!
{/* truncate */}
In this release, Anubis becomes internationalized, gains the ability to use system load as input to issuing challenges, finally fixes the "invalid response" after "success" bug, and more! Please read these notes before upgrading as the changes are big enough that administrators should take action to ensure that the upgrade goes smoothly.
This release is brought to you by [FreeCAD](https://www.freecad.org/), an open-source computer aided design tool that lets you design things for the real world.
## What's in this release?
The biggest change is that the ["invalid response" after "success" bug](https://github.com/TecharoHQ/anubis/issues/564) is now finally fixed for good by totally rewriting how [Anubis' challenge issuance flow works](#challenge-flow-v2).
This release gives Anubis the following features:
- [Internationalization support](#internationalization), allowing Anubis to render its messages in the human language you speak.
- Anubis now supports the [`missingHeader`](#missingHeader-function) function to assert the absence of headers in requests.
- Anubis now has the ability to [store data persistently on the server](#persistent-data-storage).
- Anubis can use [the system load average](#load-average-checks) as a factor to determine if it needs to filter traffic or not.
- Add `COOKIE_SECURE` option to set the cookie [Secure flag](https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/Cookies#block_access_to_your_cookies)
- Sets cookie defaults to use [SameSite: None](https://web.dev/articles/samesite-cookies-explained)
- Allow [Common Crawl](https://commoncrawl.org/) by default so scrapers have less incentive to scrape
- Add `/healthz` metrics route for use in platform-based health checks.
- Start exposing JA4H fingerprints for later use in CEL expressions.
And this release also fixes the following bugs:
- [Challenge issuance has been totally rewritten](#challenge-flow-v2) to finally squash the infamous ["invalid response" after "success" bug](https://github.com/TecharoHQ/anubis/issues/564) for good.
- In order to reduce confusion, the "Success" interstitial that shows up when you pass a proof of work challenge has been removed.
- Don't block Anubis starting up if [Thoth](/docs/admin/thoth/) health checks fail.
- The "Try again" button on the error page has been fixed. Previously it meant "try the solution again" instead of "try the challenge again".
- In certain cases, a user could be stuck with a test cookie that is invalid, locking them out of the service for up to half an hour. This has been fixed with better validation of this case and clearing the cookie.
- "Proof of work" has been removed from the branding due to some users having extremely negative connotations with it.
We try to avoid introducing breaking changes as much as possible, but these are the changes that may be relevant for you as an administrator:
- The [challenge format](#challenge-format-change) has been changed in order to account for [the new challenge issuance flow](#challenge-flow-v2).
- The [systemd service `RuntimeDirectory` has been changed](#breaking-change-systemd-runtimedirectory-change).
### Sponsoring the project
If you rely on Anubis to keep your website safe, please consider sponsoring the project on [GitHub Sponsors](https://github.com/sponsors/Xe) or [Patreon](https://patreon.com/cadey). Funding helps pay hosting bills and offset the time spent on making this project the best it can be. Every little bit helps and when enough money is raised, [I can make Anubis my full-time job](https://github.com/TecharoHQ/anubis/discussions/278).
Once this pie chart is at 100%, I can start to reduce my hours at my day job as most of my needs will be met (pre-tax):
```mermaid
pie title Funding update
"GitHub Sponsors" : 29
"Patreon" : 14
"Remaining" : 56
```
I am waiting to hear back from NLNet on if Anubis was selected for funding or not. Let's hope it is!
## New features
### Internationalization
Anubis now supports localized responses. Locales can be added in [lib/localization/locales/](https://github.com/TecharoHQ/anubis/tree/main/lib/localization/locales). This release includes support for the following languages:
- [Brazilian Portuguese](https://github.com/TecharoHQ/anubis/pull/726)
- [Chinese (Simplified)](https://github.com/TecharoHQ/anubis/pull/774)
- [Chinese (Traditional)](https://github.com/TecharoHQ/anubis/pull/759)
- [Czech](https://github.com/TecharoHQ/anubis/pull/849)
- English
- [Estonian](https://github.com/TecharoHQ/anubis/pull/783)
- [Filipino](https://github.com/TecharoHQ/anubis/pull/775)
- [Finnish](https://github.com/TecharoHQ/anubis/pull/863)
- [French](https://github.com/TecharoHQ/anubis/pull/716)
- [German](https://github.com/TecharoHQ/anubis/pull/741)
- [Icelandic](https://github.com/TecharoHQ/anubis/pull/780)
- [Italian](https://github.com/TecharoHQ/anubis/pull/778)
- [Japanese](https://github.com/TecharoHQ/anubis/pull/772)
- [Norwegian](https://github.com/TecharoHQ/anubis/pull/855)
- [Russian](https://github.com/TecharoHQ/anubis/pull/882)
- [Spanish](https://github.com/TecharoHQ/anubis/pull/716)
- [Turkish](https://github.com/TecharoHQ/anubis/pull/751)
If circumstances or local regulations demand it, you can set Anubis' default language with the `FORCED_LANGUAGE` environment variable or the `--forced-language` command line argument:
```sh
FORCED_LANGUAGE=de
```
## Big ticket bug fixes
These issues affect every user of Anubis. Administrators should upgrade Anubis as soon as possible to mitigate them.
### Fix event loop thrashing when solving a proof of work challenge
Anubis has a progress bar so that users can have something moving while it works. This gives users more confidence that something is happening and that the website is not being malicious with CPU usage. However, the way it was implemented way back in [#87](https://github.com/TecharoHQ/anubis/pull/87) had a subtle bug:
```js
if (
(nonce > oldNonce) | 1023 && // we've wrapped past 1024
(nonce >> 10) % threads === threadId // and it's our turn
) {
postMessage(nonce);
}
```
The logic here looks fine but is subtly wrong as was reported in [#877](https://github.com/TecharoHQ/anubis/issues/877) by the main Pale Moon developer.
For context, `nonce` is a counter that increments by the worker count every loop. This is intended to spread the load between CPU cores as such:
| Iteration | Worker ID | Nonce |
| :-------- | :-------- | :---- |
| 1 | 0 | 0 |
| 1 | 1 | 1 |
| 2 | 0 | 2 |
| 2 | 1 | 3 |
And so on. This makes the proof of work challenge as fast as it can possibly be so that Anubis quickly goes away and you can enjoy the service it is protecting.
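The striding in that table can be sketched as a tiny helper (`noncesForWorker` is a hypothetical name for illustration; the real worker inlines this logic in its solve loop):

```javascript
// Each of `threads` workers starts at its own threadId and strides by the
// total worker count, so every nonce gets checked exactly once overall.
function noncesForWorker(threadId, threads, count) {
  const nonces = [];
  for (let n = threadId; nonces.length < count; n += threads) {
    nonces.push(n);
  }
  return nonces;
}
```

For example, worker 0 of 2 checks nonces 0, 2, 4, … while worker 1 checks 1, 3, 5, …, with no overlap and no gaps.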
The incorrect part of this is the boolean logic, specifically the part with the bitwise or `|`. I think the intent was to use a logical or (`||`), but this had the effect of making the `postMessage` handler fire on every iteration. The intent of this snippet (as the comment clearly indicates) is to make sure that the main event loop is only updated with the worker status every 1024 iterations per worker. This had the opposite effect, causing a lot of messages to be sent from workers to the parent JavaScript context.
This is bad for the event loop.
Instead, I have ripped out that statement and replaced it with a much simpler increment-only counter that fires every 1024 iterations. Additionally, only the first thread communicates back to the parent process. This does mean that in theory the other workers could be ahead of the first thread (posting a message out of a worker has a nonzero cost), but in practice I don't think this will be as much of an issue as the current behaviour is.
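A minimal sketch of that reporting strategy (an assumption for illustration, not the exact patch; `makeProgressReporter` is a hypothetical name):

```javascript
// Each worker keeps a local iteration counter and only posts progress back
// to the page every 1024 iterations, and only from the first worker
// (threadId === 0), instead of on every iteration.
function makeProgressReporter(threadId, postMessage) {
  let iterations = 0;
  return function tick(nonce) {
    iterations++;
    if (iterations % 1024 === 0 && threadId === 0) {
      postMessage(nonce); // at most one message per 1024 hashes
    }
  };
}
```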
The root cause of the stack exhaustion is likely the pressure from all of the postMessage callbacks piling up. The larger stack size in 64-bit environments may mask the problem there, or newer 64-bit hardware may simply be able to handle events fast enough to keep up with the pressure.
Either way, thanks much to [@wolfbeast](https://github.com/wolfbeast) and the Pale Moon community for finding this. This will make Anubis faster for everyone!
### Fix potential memory leak when discovering a solution
In some cases, the parallel solution finder in Anubis could leak all of the worker promises because the workers were being improperly terminated. A recursion bomb happens in the following scenario:
1. A worker sends a message indicating it found a solution to the proof of work challenge.
2. The `onmessage` handler for that worker calls `terminate()`
3. Inside `terminate()`, the parent process loops through all other workers and calls `w.terminate()` on them.
4. It's possible that terminating a worker could trigger that worker's `onerror` event handler.
5. This would create a recursive loop of `onmessage` -> `terminate` -> `onerror` -> `terminate` -> `onerror` and so on.
This infinite recursion quickly consumes all available stack space, but it was never noticed in development because all of my computers have at least 64Gi of RAM provisioned to them, under the axiom that paying for more RAM is cheaper than paying in my time spent working around not having enough RAM. Additionally, ia32 has a smaller base stack size, which means that 32-bit users will run into this issue much sooner than users on other CPU architectures will.
The fix adds a boolean `settled` flag to prevent termination from running more than once.
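In miniature, the guard looks something like this (a sketch with a hypothetical `makeTerminator` helper, not the actual patch):

```javascript
// A `settled` flag makes teardown idempotent, so the recursive
// onmessage -> terminate -> onerror -> terminate loop can no longer happen:
// re-entrant calls return immediately instead of terminating workers again.
function makeTerminator(workers) {
  let settled = false;
  return function terminate() {
    if (settled) return; // already torn down; ignore re-entrant calls
    settled = true;
    for (const w of workers) {
      w.terminate();
    }
  };
}
```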
## Expressions features
Anubis v1.21.1 adds additional [expressions](/docs/admin/configuration/expressions) features so that you can make your request matching even more granular.
### `missingHeader` function
Anubis [expressions](/docs/admin/configuration/expressions) have [a few functions exposed](/docs/admin/configuration/expressions/#functions-exposed-to-anubis-expressions). Anubis v1.21.1 adds the `missingHeader` function, allowing you to assert the _absence_ of a header in requests.
Let's say you're getting a lot of requests from clients that are pretending to be Google Chrome. Google Chrome sends a few signals to web servers, the main one being the [`Sec-Ch-Ua` header](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Sec-CH-UA). Sec-CH-UA is part of Google's [User Agent Client Hints](https://wicg.github.io/ua-client-hints/#sec-ch-ua) proposal, but its presence is a sign that the client is more likely to be Google Chrome than not. With the `missingHeader` function, you can write a rule to [add weight](/docs/admin/policies/#request-weight) to requests that claim to be Google Chrome but lack `Sec-Ch-Ua`.
```yaml
# Adds weight to clients that claim to be Google Chrome without setting Sec-Ch-Ua
- name: old-chrome
action: WEIGH
weight:
adjust: 10
expression:
all:
- userAgent.matches("Chrome/[1-9][0-9]?\\.0\\.0\\.0")
- missingHeader(headers, "Sec-Ch-Ua")
```
When combined with [weight thresholds](/docs/admin/configuration/thresholds), this allows you to make requests that don't match the signature of Google Chrome more suspicious, which will make them have a more difficult challenge.
### Load average checks
Anubis can dynamically take action [based on the system load average](/docs/admin/configuration/expressions/#using-the-system-load-average), allowing you to write rules like this:
```yaml
## System load based checks.
# If the system is under high load for the last minute, add weight.
- name: high-load-average
action: WEIGH
expression: load_1m >= 10.0 # make sure to end the load comparison in a .0
weight:
adjust: 20
# If it is not for the last 15 minutes, remove weight.
- name: low-load-average
action: WEIGH
expression: load_15m <= 4.0 # make sure to end the load comparison in a .0
weight:
adjust: -10
```
Something to keep in mind about system load average is that it is not aware of the number of cores the system has. If you have a 16 core system that has 16 processes running but none of them is hogging the CPU, then you will get a load average below 16. If you are in doubt, make your "high load" metric at least two times the number of CPU cores and your "low load" metric at least half of the number of CPU cores. For example:
| Kind | Core count | Load threshold |
| --------: | :--------- | :------------- |
| high load | 4 | `8.0` |
| low load | 4 | `2.0` |
| high load | 16 | `32.0` |
| low load  | 16         | `8.0`          |
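That rule of thumb is simple arithmetic; as a sketch (not part of Anubis, and the helper name is made up):

```javascript
// Rule of thumb from the table above:
// high load threshold ≈ 2× the core count, low load ≈ half the core count.
function loadThresholds(cores) {
  return {
    high: cores * 2.0,
    low: cores / 2.0,
  };
}
```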
Also keep in mind that this does not account for other kinds of latency like I/O latency or downstream API response latency. A system can have its web applications unresponsive due to high latency from a MySQL server but still have that web application server report a load near or at zero.
:::note
This does not work if you are using Kubernetes.
:::
When combined with [weight thresholds](/docs/admin/configuration/thresholds), this allows you to make incoming sessions "back off" while the server is under high load.
## Challenge flow v2
The main goal of Anubis is to weigh the risks of incoming requests in order to protect upstream resources against abusive clients like badly written scrapers. In order to separate "good" clients (like users wanting to learn from a website's content) from "bad" clients, Anubis issues [challenges](/docs/admin/configuration/challenges/).
Previously the Anubis challenge flow looked like this:
```mermaid
---
title: Old Anubis challenge flow
---
flowchart LR
user(User Browser)
subgraph Anubis
mIC{Challenge?}
ic(Issue Challenge)
rp(Proxy to service)
mIC -->|User needs a challenge| ic
mIC -->|User does not need a challenge| rp
end
target(Target Service)
rp --> target
user --> mIC
ic -->|Pass a challenge| user
target -->|Site data| user
```
In order to issue a challenge, Anubis generated a challenge string based on request metadata that we assumed wouldn't drastically change between requests, including but not limited to:
- The client's User-Agent string.
- The client [`Accept-Language` header](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Accept-Language) value.
- The client's IP address.
Anubis also didn't store any information about challenges so that it could remain lightweight and handle the onslaught of requests from scrapers. The assumption was that the challenge string function was idempotent per client across time. What actually ended up happening was something like this:
```mermaid
---
title: Anubis challenge string idempotency
---
sequenceDiagram
User->>+Anubis: GET /wiki/some-page
Anubis->>+Make Challenge: Generate a challenge string
Make Challenge->>-Anubis: Challenge string: taco salad
Anubis->>-User: HTTP 401 solve a challenge
User->>+Anubis: GET internal-api/pass-challenge
Anubis->>+Make Challenge: Generate a challenge string
Make Challenge->>-Anubis: Challenge string: burrito bar
Anubis->>+User: Error: invalid response
```
Various attempts were made to fix this. All of these ended up failing. Many difficulties were discovered including but not limited to:
- Removing `Accept-Language` from consideration because [Chrome randomizes the contents of `Accept-Language` to reduce fingerprinting](https://github.com/explainers-by-googlers/reduce-accept-language), a behaviour which [causes a lot of confusion](https://www.reddit.com/r/chrome/comments/nhpnez/google_chrome_is_randomly_switching_languages_on/) for users with multiple system languages selected.
- [IPv6 privacy extensions](https://www.internetsociety.org/resources/deploy360/2014/privacy-extensions-for-ipv6-slaac/) mean that each request could be coming from a different IP address (at least one legitimate user in the wild has been observed to have a different IP address per TCP session across an entire `/48`).
- Some [US mobile phone carriers make it too easy for your IP address to drastically change](https://news.ycombinator.com/item?id=32038215) without user input.
- [Happy eyeballs](https://en.wikipedia.org/wiki/Happy_Eyeballs) means that some requests can come in over IPv4 and some requests can come in over IPv6.
- To make things worse, you can't even assert that users are from the same [BGP autonomous system](<https://en.wikipedia.org/wiki/Autonomous_system_(Internet)>) because some users could have ISPs that are IPv4 only, forcing them to use a different IP address space to get IPv6 internet access. This sounds like it's rare enough, but I personally have to do this even though I pay for 8 gigabit fiber from my ISP and only get IPv4 service from them.
Amusingly enough, the only part of this that has survived is the assertion that a user hasn't changed their `User-Agent` string. Maybe [that one guy that sets his Chrome version to `150`](https://github.com/TecharoHQ/anubis/issues/239) would have issues, but so far I've not seen any evidence that a client randomly changing their user agent between challenge issuance and solving can possibly be legitimate.
As a result, the entire subsystem that generated challenges before had to be ripped out and rewritten from scratch.
It was replaced with a new flow that stores data on the server side, compares that data against what the client responds with, and then checks pass/fail that way:
```mermaid
---
title: New challenge flow
---
sequenceDiagram
User->>+Anubis: GET /wiki/some-page
Anubis->>+Make Challenge: Generate a challenge string
Make Challenge->>+Store: Store info for challenge 1234
Make Challenge->>-Anubis: Challenge string: taco salad, ID 1234
Anubis->>-User: HTTP 401 solve a challenge
User->>+Anubis: GET internal-api/pass-challenge, challenge 1234
Anubis->>+Validate Challenge: verify challenge 1234
Validate Challenge->>+Store: Get info for challenge 1234
Store->>-Validate Challenge: Here you go!
Validate Challenge->>-Anubis: Valid ✅
Anubis->>+User: Here's a cookie to get past Anubis
```
As a result, the [challenge format](#challenge-format-change) had to change. Old cookies will still be validated, but the next minor version (v1.22.0) will include validation to ensure that all challenges are accounted for on the server side. This data is stored in the active [storage backend](/docs/admin/policies/#storage-backends) for up to 30 minutes. This also fixes [#746](https://github.com/TecharoHQ/anubis/issues/746) and other similar instances of this issue.
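In miniature, the new server-side accounting looks something like this (a minimal sketch assuming an in-memory map and a made-up `ChallengeStore` shape; the real storage backends and record format differ):

```javascript
// Store challenge metadata under an ID with a 30-minute TTL, then compare
// the stored record against what the client sends back when validating.
class ChallengeStore {
  constructor(ttlMs = 30 * 60 * 1000) {
    this.ttlMs = ttlMs;
    this.records = new Map();
  }
  issue(id, meta, now = Date.now()) {
    this.records.set(id, { meta, expires: now + this.ttlMs });
  }
  validate(id, meta, now = Date.now()) {
    const rec = this.records.get(id);
    if (!rec || rec.expires <= now) return false; // unknown or expired
    return rec.meta === meta;
  }
}
```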
### Challenge format change
Previously Anubis did no accounting for the challenges that it issued. This meant that if Anubis restarted while a client was solving a challenge, the client would still be able to proceed once Anubis came back online.
During the upgrade to v1.21.0 and when v1.21.0 (or later) restarts with the [in-memory storage backend](/docs/admin/policies/#memory), you may see a higher rate of failed challenges than normal. If this persists beyond a few minutes, [open an issue](https://github.com/TecharoHQ/anubis/issues/new).
If you are using the in-memory storage backend, please consider using [a different storage backend](/docs/admin/policies/#storage-backends).
### Storage
Anubis offers a few different storage backends depending on your needs:
| Backend | Description |
| :--------------------------------------- | :------------------------------------------------------------------------------------------------------------- |
| [`memory`](/docs/admin/policies/#memory) | An in-memory hashmap that is cleared when Anubis is restarted. |
| [`bbolt`](/docs/admin/policies/#bbolt) | A memory-mapped key/value store that can persist between Anubis restarts. |
| [`valkey`](/docs/admin/policies/#valkey) | A networked key/value store that can persist between Anubis restarts and coordinate across multiple instances. |
Please review the documentation for each storage method to figure out the one best for your needs. If you aren't sure, consult this diagram:
```mermaid
---
title: What storage backend do I need?
---
flowchart TD
OneInstance{Do you only have
one instance of
Anubis?}
Persistence{Do you have
persistent disk
access in your
environment?}
bbolt[(bbolt)]
memory[(memory)]
valkey[(valkey)]
OneInstance -->|Yes| Persistence
OneInstance -->|No| valkey
Persistence -->|Yes| bbolt
Persistence -->|No| memory
```
## Breaking change: systemd `RuntimeDirectory` change
The following potentially breaking change applies to native installs with systemd only:
Each instance of the systemd service template now has a unique `RuntimeDirectory`, as opposed to each instance of the service sharing a `RuntimeDirectory`. This change was made to avoid [the `RuntimeDirectory` getting nuked](https://github.com/TecharoHQ/anubis/issues/748) any time one of the Anubis instances restarts.
If you configured Anubis' unix sockets to listen on `/run/anubis/foo.sock` for instance `anubis@foo`, you will need to configure Anubis to listen on `/run/anubis/foo/foo.sock` and additionally configure your HTTP load balancer as appropriate.
If you need the legacy behaviour, install this [systemd unit dropin](https://www.flatcar.org/docs/latest/setup/systemd/drop-in-units/):
```systemd
# /etc/systemd/system/anubis@.service.d/50-runtimedir.conf
[Service]
RuntimeDirectory=anubis
```
Just keep in mind that this will cause problems when Anubis restarts.
## What's up next?
The biggest things we want to do in the next release (in no particular order):
- A rewrite of bot checking rule configuration syntax to make it less ambiguous.
- [JA4](https://blog.foxio.io/ja4+-network-fingerprinting) (and other forms of) fingerprinting and coordination with [Thoth](/docs/admin/thoth/) to allow clients with high aggregate pass rates through without seeing Anubis at all.
- Advanced heuristics for [users of the unbranded variant of Anubis](/docs/admin/botstopper/).
- Optimize the release flow so that releases can be triggered and executed by continuous integration tools. The ultimate goal is to make it possible to release Anubis in 15 minutes after pressing a single "mint release" button.
- Add "hot reloading" support to Anubis, allowing administrators to update the rules without restarting the service.
- Fix [multiple slash support](https://github.com/TecharoHQ/anubis/issues/754) for web applications that require optional path variables.
- Add weight to "brand new" clients.
- Implement a "maze" feature that tries to get crawlers ensnared in a maze of random links so that clients that are more than 20 links in can be reported to the home base.
- Open [Thoth-based advanced checks](/docs/admin/thoth/) to more users with an easier onboarding flow.
- More smoke tests including for browsers like [Pale Moon](https://www.palemoon.org/).

---
slug: 2025/funding-update
title: Funding update
authors: [xe]
tags: [funding]
image: around-the-bend.webp
---
![](./around-the-bend.webp)
As we finish up work on [all of the features in the next release of Anubis](/docs/CHANGELOG#unreleased), I took a moment to add up the financials and here's an update on the recurring revenue of the project. Once I reach the [$5000 per month](https://github.com/TecharoHQ/anubis/discussions/278) mark, I can start reducing hours at my dayjob and start to make working on Anubis my full time job.
{/* truncate */}
Note that this only counts _recurring_ revenue (subscriptions to [BotStopper](/docs/admin/botstopper) and monthly repeating donations). Every one of the one-time donations I get is a gift and I am grateful for them, but I cannot make critically important financial decisions off of sporadic one-time donations.
:::note
All currency figures in this article are USD (United States Dollars) unless denoted otherwise.
:::
Here's the funding breakdown by income stream:
```mermaid
pie title Funding update August 2025
"GitHub Sponsors" : 3500
"Patreon" : 1500
"Liberapay" : 100
"Remaining" : 4800
```
Assuming that some of my private support contracts and other sales efforts go through, this will slightly change the shape of this chart (a new pie chart segment will emerge for "Manual invoices"), but I am halfway there. This is a huge bar to pass, and as it stands right now this is just enough income to pay my monthly rent (not accounting for tax).
As a reminder, here's the rough plan for the phases I want to hit based on the _recurring_ donation totals:
| Monthly donations | Details |
| :-------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| $0-5,000 per month | Anubis is a nights and weekends project based on how much spare time and energy I have. |
| $5,000-10,000 per month | Anubis gets 1-2 days per week of my time put into it consistently and I go part-time at my dayjob. |
| $10,000-15,000 per month | Anubis becomes my full time job. Features that are currently exclusive to [BotStopper](/docs/admin/botstopper/) start to trickle down to the open source version of Anubis. |
| $15,000 per month and above | I start planning hiring for Techaro. |
If your organization benefits from Anubis, please consider donating to the project in order to make this sustainable. The fewer financial problems I have, the better Anubis can become.
## New funding platform: Liberapay
After many comments about the funding options, I have set up [Liberapay](https://liberapay.com/Xe/) as an option to receive donations. Additional funding targets will be added to Liberapay as soon as I hear back from my accountant with more information. All money received via Liberapay goes directly towards supporting the project.
## Next goals
Here's my short term goals for the immediate future:
1. Finish [Thoth](/docs/admin/thoth/) and run a backfill to mass issue API keys.
2. Document and publish the writeup for the multi-region Google Cloud spot instance setup that Thoth is built upon.
3. Release v1.22.0 of Anubis with Traefik support and other important fixes.
4. Continue growing the project into a sustainable business.
5. Work through the [blog backlog](https://github.com/TecharoHQ/anubis/issues?q=is%3Aissue%20state%3Aopen%20label%3Ablog) to document the thoughts behind Anubis and how parts of it work.
Thank you for supporting Anubis! It's only going to get better from here.

import React, { useState, useEffect, useMemo } from 'react';
import styles from './styles.module.css';
// A helper function to perform SHA-256 hashing.
// It takes a string, encodes it, hashes it, and returns a hex string.
async function sha256(message) {
try {
const msgBuffer = new TextEncoder().encode(message);
const hashBuffer = await crypto.subtle.digest('SHA-256', msgBuffer);
const hashArray = Array.from(new Uint8Array(hashBuffer));
const hashHex = hashArray.map(b => b.toString(16).padStart(2, '0')).join('');
return hashHex;
} catch (error) {
console.error("Hashing failed:", error);
return "Error hashing data";
}
}
// Generates a random hex string of a given byte length
const generateRandomHex = (bytes = 16) => {
const buffer = new Uint8Array(bytes);
crypto.getRandomValues(buffer);
return Array.from(buffer)
.map(byte => byte.toString(16).padStart(2, '0'))
.join('');
};
// Icon components for better visual feedback
const CheckIcon = () => (
<svg xmlns="http://www.w3.org/2000/svg" className={styles.iconGreen} fill="none" viewBox="0 0 24 24" stroke="currentColor">
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M9 12l2 2 4-4m6 2a9 9 0 11-18 0 9 9 0 0118 0z" />
</svg>
);
const XCircleIcon = () => (
<svg xmlns="http://www.w3.org/2000/svg" className={styles.iconRed} fill="none" viewBox="0 0 24 24" stroke="currentColor">
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M10 14l2-2m0 0l2-2m-2 2l-2-2m2 2l2 2m7-2a9 9 0 11-18 0 9 9 0 0118 0z" />
</svg>
);
// Main Application Component
export default function App() {
// State for the challenge, initialized with a random 16-byte hex string.
const [challenge, setChallenge] = useState(() => generateRandomHex(16));
// State for the nonce, which is the variable we can change
const [nonce, setNonce] = useState(0);
// State to store the resulting hash
const [hash, setHash] = useState('');
// A flag to indicate if the current hash is the "winning" one
const [isMining, setIsMining] = useState(false);
const [isFound, setIsFound] = useState(false);
// The mining difficulty, i.e., the required number of leading zeros
const difficulty = "00";
// Memoize the combined data to avoid recalculating on every render
const combinedData = useMemo(() => `${challenge}${nonce}`, [challenge, nonce]);
// This effect hook recalculates the hash whenever the combinedData changes.
useEffect(() => {
let isMounted = true;
const calculateHash = async () => {
const calculatedHash = await sha256(combinedData);
if (isMounted) {
setHash(calculatedHash);
setIsFound(calculatedHash.startsWith(difficulty));
}
};
calculateHash();
return () => { isMounted = false; };
}, [combinedData, difficulty]);
// This effect handles the automatic mining process
useEffect(() => {
if (!isMining) return;
let miningNonce = nonce;
let continueMining = true;
const mine = async () => {
while (continueMining) {
const currentData = `${challenge}${miningNonce}`;
const currentHash = await sha256(currentData);
if (currentHash.startsWith(difficulty)) {
setNonce(miningNonce);
setIsMining(false);
break;
}
miningNonce++;
// Update the UI periodically to avoid freezing the browser
if (miningNonce % 100 === 0) {
setNonce(miningNonce);
await new Promise(resolve => setTimeout(resolve, 0)); // Yield to the browser
}
}
};
mine();
return () => {
continueMining = false;
}
}, [isMining, challenge, nonce, difficulty]);
const handleMineClick = () => {
setIsMining(true);
}
const handleStopClick = () => {
setIsMining(false);
}
const handleResetClick = () => {
setIsMining(false);
setNonce(0);
}
const handleNewChallengeClick = () => {
setIsMining(false);
setChallenge(generateRandomHex(16));
setNonce(0);
}
// Helper to render the hash with colored leading characters
const renderHash = () => {
if (!hash) return <span>...</span>;
const prefix = hash.substring(0, difficulty.length);
const suffix = hash.substring(difficulty.length);
const prefixColor = isFound ? styles.hashPrefixGreen : styles.hashPrefixRed;
return (
<>
<span className={`${prefixColor} ${styles.hashPrefix}`}>{prefix}</span>
<span className={styles.hashSuffix}>{suffix}</span>
</>
);
};
return (
<div className={styles.container}>
<div className={styles.innerContainer}>
<div className={styles.grid}>
{/* Challenge Block */}
<div className={styles.block}>
<h2 className={styles.blockTitle}>1. Challenge</h2>
<p className={styles.challengeText}>{challenge}</p>
</div>
{/* Nonce Control Block */}
<div className={styles.block}>
<h2 className={styles.blockTitle}>2. Nonce</h2>
<div className={styles.nonceControls}>
<button onClick={() => setNonce(n => n - 1)} disabled={isMining} className={styles.nonceButton}>
<svg xmlns="http://www.w3.org/2000/svg" className={styles.iconSmall} fill="none" viewBox="0 0 24 24" stroke="currentColor"><path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M20 12H4" /></svg>
</button>
<span className={styles.nonceValue}>{nonce}</span>
<button onClick={() => setNonce(n => n + 1)} disabled={isMining} className={styles.nonceButton}>
<svg xmlns="http://www.w3.org/2000/svg" className={styles.iconSmall} fill="none" viewBox="0 0 24 24" stroke="currentColor"><path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M12 4v16m8-8H4" /></svg>
</button>
</div>
</div>
{/* Combined Data Block */}
<div className={styles.block}>
<h2 className={styles.blockTitle}>3. Combined Data</h2>
<p className={styles.combinedDataText}>{combinedData}</p>
</div>
</div>
{/* Arrow pointing down */}
<div className={styles.arrowContainer}>
<svg xmlns="http://www.w3.org/2000/svg" className={styles.iconGray} fill="none" viewBox="0 0 24 24" stroke="currentColor">
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M19 14l-7 7m0 0l-7-7m7 7V3" />
</svg>
</div>
{/* Hash Output Block */}
<div className={`${styles.hashContainer} ${isFound ? styles.hashContainerSuccess : styles.hashContainerError}`}>
<div className={styles.hashContent}>
<div className={styles.hashText}>
<h2 className={styles.blockTitle}>4. Resulting Hash (SHA-256)</h2>
<p className={styles.hashValue}>{renderHash()}</p>
</div>
<div className={styles.hashIcon}>
{isFound ? <CheckIcon /> : <XCircleIcon />}
</div>
</div>
</div>
{/* Mining Controls */}
<div className={styles.buttonContainer}>
{!isMining ? (
<button onClick={handleMineClick} className={`${styles.button} ${styles.buttonCyan}`}>
Auto-Mine
</button>
) : (
<button onClick={handleStopClick} className={`${styles.button} ${styles.buttonYellow}`}>
Stop Mining
</button>
)}
<button onClick={handleNewChallengeClick} className={`${styles.button} ${styles.buttonIndigo}`}>
New Challenge
</button>
<button onClick={handleResetClick} className={`${styles.button} ${styles.buttonGray}`}>
Reset Nonce
</button>
</div>
</div>
</div>
);
}


@@ -0,0 +1,366 @@
/* Main container styles */
.container {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
color: white;
font-family: ui-sans-serif, system-ui, sans-serif;
margin-top: 2rem;
margin-bottom: 2rem;
}
.innerContainer {
width: 100%;
max-width: 56rem;
margin: 0 auto;
}
/* Header styles */
.header {
text-align: center;
margin-bottom: 2.5rem;
}
.title {
font-size: 2.25rem;
font-weight: 700;
color: rgb(34 211 238);
}
.subtitle {
font-size: 1.125rem;
color: rgb(156 163 175);
margin-top: 0.5rem;
}
/* Grid layout styles */
.grid {
display: grid;
grid-template-columns: repeat(3, 1fr);
gap: 1rem;
align-items: center;
text-align: center;
}
/* Block styles */
.block {
background-color: rgb(31 41 55);
padding: 1.5rem;
border-radius: 0.5rem;
box-shadow: 0 10px 15px -3px rgb(0 0 0 / 0.1), 0 4px 6px -4px rgb(0 0 0 / 0.1);
height: 100%;
display: flex;
flex-direction: column;
justify-content: center;
}
.blockTitle {
font-size: 1.125rem;
font-weight: 600;
color: rgb(34 211 238);
margin-bottom: 0.5rem;
}
.challengeText {
font-size: 0.875rem;
color: rgb(209 213 219);
word-break: break-all;
font-family: ui-monospace, SFMono-Regular, monospace;
}
.combinedDataText {
font-size: 0.875rem;
color: rgb(156 163 175);
word-break: break-all;
font-family: ui-monospace, SFMono-Regular, monospace;
}
/* Nonce control styles */
.nonceControls {
display: flex;
align-items: center;
justify-content: center;
gap: 1rem;
}
.nonceButton {
background-color: rgb(55 65 81);
border-radius: 9999px;
padding: 0.5rem;
transition: background-color 200ms;
}
.nonceButton:hover:not(:disabled) {
background-color: rgb(34 211 238);
}
.nonceButton:disabled {
opacity: 0.5;
cursor: not-allowed;
}
.nonceValue {
font-size: 1.5rem;
font-family: ui-monospace, SFMono-Regular, monospace;
width: 6rem;
text-align: center;
}
/* Icon styles */
.icon {
height: 2rem;
width: 2rem;
}
.iconGreen {
height: 2rem;
width: 2rem;
color: rgb(74 222 128);
}
.iconRed {
height: 2rem;
width: 2rem;
color: rgb(248 113 113);
}
.iconSmall {
height: 1.5rem;
width: 1.5rem;
}
.iconGray {
height: 2.5rem;
width: 2.5rem;
color: rgb(75 85 99);
animation: pulse 2s cubic-bezier(0.4, 0, 0.6, 1) infinite;
}
/* Arrow animation */
@keyframes pulse {
0%,
100% {
opacity: 1;
}
50% {
opacity: 0.5;
}
}
.arrowContainer {
display: flex;
justify-content: center;
margin: 1.5rem 0;
}
/* Hash output styles */
.hashContainer {
padding: 1.5rem;
border-radius: 0.5rem;
box-shadow: 0 10px 15px -3px rgb(0 0 0 / 0.1), 0 4px 6px -4px rgb(0 0 0 / 0.1);
transition: all 300ms;
border: 2px solid;
}
.hashContainerSuccess {
background-color: rgb(20 83 45 / 0.5);
border-color: rgb(74 222 128);
}
.hashContainerError {
background-color: rgb(127 29 29 / 0.5);
border-color: rgb(248 113 113);
}
.hashContent {
display: flex;
flex-direction: column;
align-items: center;
justify-content: space-between;
}
.hashText {
text-align: center;
}
.hashTextLg {
text-align: left;
}
.hashValue {
font-size: 0.875rem;
word-break: break-all;
}
.hashValueLg {
font-size: 1rem;
word-break: break-all;
}
.hashIcon {
margin-top: 1rem;
}
.hashIconLg {
margin-top: 0;
}
/* Hash highlighting */
.hashPrefix {
font-family: ui-monospace, SFMono-Regular, monospace;
}
.hashPrefixGreen {
color: rgb(74 222 128);
}
.hashPrefixRed {
color: rgb(248 113 113);
}
.hashSuffix {
font-family: ui-monospace, SFMono-Regular, monospace;
color: rgb(156 163 175);
}
/* Button styles */
.buttonContainer {
margin-top: 2rem;
display: flex;
align-items: center;
justify-content: center;
gap: 1rem;
}
.button {
font-weight: 700;
padding: 0.75rem 1.5rem;
border-radius: 0.5rem;
transition: transform 150ms;
}
.button:hover {
transform: scale(1.05);
}
.buttonCyan {
background-color: rgb(8 145 178);
color: white;
}
.buttonCyan:hover {
background-color: rgb(6 182 212);
}
.buttonYellow {
background-color: rgb(202 138 4);
color: white;
}
.buttonYellow:hover {
background-color: rgb(245 158 11);
}
.buttonIndigo {
background-color: rgb(79 70 229);
color: white;
}
.buttonIndigo:hover {
background-color: rgb(99 102 241);
}
.buttonGray {
background-color: rgb(55 65 81);
color: white;
}
.buttonGray:hover {
background-color: rgb(75 85 99);
}
/* Responsive styles */
@media (min-width: 768px) {
.title {
font-size: 3rem;
}
.grid {
grid-template-columns: repeat(3, 1fr);
gap: 1rem;
}
.hashContent {
flex-direction: row;
}
.hashText {
text-align: left;
}
.hashValue {
font-size: 1rem;
}
.hashIcon {
margin-top: 0;
}
}
@media (max-width: 767px) {
.grid {
display: flex;
flex-direction: column;
gap: 1rem;
}
}
@media (prefers-color-scheme: light) {
.block {
background-color: oklch(93% 0.034 272.788);
}
.challengeText {
color: oklch(12.9% 0.042 264.695);
}
.combinedDataText {
color: oklch(12.9% 0.042 264.695);
}
.nonceButton {
background-color: oklch(88.2% 0.059 254.128);
}
.nonceValue {
color: oklch(12.9% 0.042 264.695);
}
.blockTitle {
color: oklch(45% 0.085 224.283);
}
.hashContainerSuccess {
background-color: oklch(95% 0.052 163.051);
border-color: rgb(74 222 128);
}
.hashContainerError {
background-color: oklch(94.1% 0.03 12.58);
border-color: rgb(248 113 113);
}
.hashPrefixGreen {
color: oklch(53.2% 0.157 131.589);
font-weight: 600;
}
.hashPrefixRed {
color: oklch(45.5% 0.188 13.697);
}
.hashSuffix {
color: oklch(27.9% 0.041 260.031);
}
}


@@ -0,0 +1,129 @@
---
slug: 2025/cpu-core-odd
title: Sometimes CPU cores are odd
description: "TL;DR: all the assumptions you have about processor design are wrong and if you are unlucky you will never run into problems that users do through sheer chance."
authors: [xe]
tags:
- bugfix
- implementation
image: parc-dsilence.webp
---
import ProofOfWorkDiagram from "./ProofOfWorkDiagram";
![](./parc-dsilence.webp)
One of the biggest lessons that I've learned in my career is that all software has bugs, and the more complicated your software gets, the more complicated your bugs get. A lot of the time those bugs will be fairly obvious and easy to spot, validate, and replicate. Sometimes, the process of fixing one will upend your core assumptions about how things work in ways that will leave you feeling like you just got trolled.
Today I'm going to talk about a single-line fix that prevents people on a large number of devices from having weird, irreproducible issues with Anubis rejecting them when it frankly shouldn't. Stick around, it's gonna be a wild ride.
{/* truncate */}
## How this happened
Anubis is a web application firewall that tries to make sure that the client is a browser. It uses a few [challenge methods](/docs/admin/configuration/challenges/) to do this determination, but the main method is the [proof of work](/docs/admin/configuration/challenges/proof-of-work/) challenge which makes clients grind away at cryptographic checksums in order to rate limit clients from connecting too eagerly.
:::note
In retrospect implementing the proof of work challenge may have been a mistake and it's likely to be supplanted by things like [Proof of React](https://github.com/TecharoHQ/anubis/pull/1038) or other methods that have yet to be developed. Your patience and polite behaviour in the bug tracker is appreciated.
:::
In order to make sure the proof of work challenge screen _goes away as fast as possible_, the [worker code](https://github.com/TecharoHQ/anubis/tree/main/web/js/worker) is optimized within an inch of its digital life. One of the main ways that this code is optimized is with how it's run. Over the last 10-20 years, the main way that CPUs have gotten fast is via increasing multicore performance. Anubis tries to make sure that it can use as many cores as possible in order to take advantage of your device's CPU as much as it can.
This strategy sometimes has some issues though, for one Firefox seems to get _much slower_ if you have Anubis try to absolutely saturate all of the cores on the system. It also has a fairly high overhead between JavaScript JIT code and [WebCrypto](https://developer.mozilla.org/en-US/docs/Web/API/Web_Crypto_API). I did some testing and found out that Firefox's point of diminishing returns was about half of the CPU cores.
## Another "invalid response" bug
One of the complaints I've been getting from users and administrators using Anubis is that they've been running into issues where users get randomly rejected with an error message saying only "invalid response". This happens when the challenge validation process fails. This issue has been blocking the release of the next version of Anubis.
In order to demonstrate this better, I've made a little interactive diagram for the proof of work process:
<ProofOfWorkDiagram />
I've fixed a lot of the easy bugs in Anubis by this point. A lot of what's left is the hard bugs, but also specifically the kinds of hard bugs that involve weird hardware configurations. In order to try and catch these issues before software hits prod, I test Anubis against a bunch of hardware I have locally. Any issues I find and fix before software ships are issues that you don't hit in production.
Let's consider [the line of code](https://github.com/TecharoHQ/anubis/blob/main/web/js/algorithms/fast.mjs) that was causing this issue:
```js
threads = Math.max(navigator.hardwareConcurrency / 2, 1),
```
This is intended to make your browser spawn a proof of work worker for _half_ of your available CPU cores. If you only have one CPU core, you should only have one worker. Each worker is given the total thread count and its own thread ID, and it increments its nonce by the thread count so that no two workers grind away at a nonce another worker has already checked.
One of the subtle problems here is that all of the parts of this assume that the thread ID and nonce are integers without a decimal portion. Famously, [all JavaScript numbers are IEEE 754 floating point numbers](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number). Surely there wouldn't be a case where the thread count could be a _decimal_ number, right?
Here's all the devices I use to test Anubis _and their core counts_:
| Device Name | Core Count |
| :--------------------------- | :--------- |
| MacBook Pro M3 Max | 16 |
| MacBook Pro M4 Max | 16 |
| AMD Ryzen 9 7950x3D | 32 |
| Google Pixel 9a (GrapheneOS) | 8 |
| iPhone 15 Pro Max | 6 |
| iPad Pro (M1) | 8 |
| iPad mini | 6 |
| Steam Deck | 8 |
| Core i5 10600 (homelab) | 12 |
| ROG Ally | 16 |
Notice something? All of those devices have an _even_ number of cores. Some devices such as the [Pixel 8 Pro](https://www.gsmarena.com/google_pixel_8_pro-12545.php) have an _odd_ number of cores. So what happens with that line of code as the JavaScript engine evaluates it?
Let's replace the [`navigator.hardwareConcurrency`](https://developer.mozilla.org/en-US/docs/Web/API/Navigator/hardwareConcurrency) with the Pixel 8 Pro's 9 cores:
```js
threads = Math.max(9 / 2, 1),
```
Then divide it by two:
```js
threads = Math.max(4.5, 1),
```
Oops, that's not ideal. However `4.5` is bigger than `1`, so [`Math.max`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/max) returns that:
```js
threads = 4.5,
```
This means that each time the proof of work equation is calculated, there is a 50% chance that a valid solution will include a nonce with a decimal portion in it. If the client finds a solution with such a nonce, it thinks it was successful and submits the solution to the server, but the server only expects whole numbers back, so it rejects the submission as an invalid response.
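To make the failure concrete, here's a small sketch of the nonce sequence each worker walks when it starts at its thread ID and steps by the thread count. This is illustrative code, not the actual Anubis worker implementation; the function name is made up for this example:

```javascript
// Each worker starts at its thread ID and steps by the total thread
// count, so no two workers ever check the same nonce. (Illustrative
// sketch, not the actual Anubis worker code.)
function noncesFor(threadId, threads, count) {
  const nonces = [];
  for (let i = 0, n = threadId; i < count; i++, n += threads) {
    nonces.push(n);
  }
  return nonces;
}

console.log(noncesFor(0, 4, 4));   // [0, 4, 8, 12] -- all whole numbers
console.log(noncesFor(0, 4.5, 4)); // [0, 4.5, 9, 13.5] -- half are fractional
```

With `threads = 4.5`, every other nonce a worker tries has a fractional part, which is exactly the 50% failure chance described above.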
I keep telling more junior people that when you have the weirdest, most inconsistent bugs in software that it's going to boil down to the dumbest possible thing you can possibly imagine. People don't believe me, then they encounter bugs like this. Then they suddenly believe me.
Here is the fix:
```js
threads = Math.trunc(Math.max(navigator.hardwareConcurrency / 2, 1)),
```
This uses [`Math.trunc`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/trunc) to truncate away the decimal portion so that the Pixel 8 Pro has `4` workers instead of `4.5` workers.
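Pulling the expression into a standalone helper (a hypothetical name, just for illustration) makes the behaviour easy to eyeball across core counts:

```javascript
// Hypothetical helper wrapping the fixed expression, for illustration.
const workerCount = (cores) => Math.trunc(Math.max(cores / 2, 1));

console.log(workerCount(16)); // 8 -- even core counts are unchanged
console.log(workerCount(9));  // 4 -- odd core counts now truncate
console.log(workerCount(1));  // 1 -- single-core devices still get one worker
```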
## Today I learned this was possible
This was a total "today I learned" moment. I didn't actually think that hardware vendors shipped processors with an odd number of cores, however if you look at the core geometry of the Pixel 8 Pro, it has _three_ tiers of processor cores:
| Core type | Core model | Number |
| :----------------- | :------------------- | :----- |
| High performance | 3 GHz Cortex X3 | 1 |
| Medium performance | 2.45 GHz Cortex A715 | 4 |
| High efficiency | 2.15 GHz Cortex A510 | 4 |
| Total | | 9 |
I guess every assumption that developers have about CPU design is probably wrong.
This probably isn't helped by the fact that for most of my career, the core count in phones has been largely irrelevant, and most of the desktop / laptop CPUs I've had (where core count does matter) use [simultaneous multithreading](https://en.wikipedia.org/wiki/Simultaneous_multithreading) to "multiply" the core count by two.
The client side fix is a bit of an "emergency stop" button to try and mitigate the badness as early as possible. In general I'm quite aware of the terrible UX involved with this flow failing and I'm still noodling through ways to make that UX better and easier for users / administrators to debug.
I'm looking into the following:
1. This could have been prevented on the server side by doing less strict input validation in compliance with [Postel's Law](https://en.wikipedia.org/wiki/Robustness_principle). I feel nervous about making such a security-sensitive endpoint _more liberal_ with the inputs it can accept, but it may be fine? I need to consult with a security expert.
2. Showing an encrypted error message on the "invalid response" page so that the user and administrator can work together to fix or report the issue. I remember Google doing this at least once, but I can't recall where I've seen it in the past. Either way, this is probably the most robust method even though it would require developing some additional tooling. I think it would be worth it.
I'm likely going to go with the second option. I will need to figure out a good flow for this. It's likely going to involve [age](https://github.com/FiloSottile/age). I'll say more about this when I have more to say.
In the meantime though, looks like I need to expense a used Pixel 8 Pro to add to the testing jungle for Anubis. If anyone has a deal out there, please let me know!
Thank you to the people that have been polite and helpful when trying to root cause and fix this issue.


@@ -0,0 +1,75 @@
---
slug: 2025/file-abuse-reports
title: Taking steps to end abusive traffic from cloud providers
description: "Learn how to effectively file abuse reports with cloud providers to stop malicious traffic at its source and protect your services from automated abuse."
authors: [xe]
tags: [abuse, cloud, security, networking]
image: goose-pond.webp
---
![A peaceful goose pond](./goose-pond.webp)
As part of Anubis's ongoing development, I've been working to reduce friction for legitimate users by minimizing unnecessary challenge pages. While this improves the user experience, it can potentially expose services to increased abuse from public cloud infrastructure. To help administrators better protect their services, I want to share my strategies for filing abuse reports with IP space owners, enabling us to address malicious scraping at its source.
{/* truncate */}
In general, there are two kinds of IP addresses:
- Residential IP addresses: IP addresses that are allocated to residential customers such as home internet connections and cellular data plans. These IP addresses are increasingly shared between customers due to technologies like [CGNAT](https://en.wikipedia.org/wiki/Carrier-grade_NAT).
- Commercial IP addresses: IP addresses that are allocated to commercial customers such as cloud providers, VPS providers, root server providers, and other such business-to-business companies. These IP addresses are almost always statically allocated to one customer for a very long period of time (typically the lifetime of the server, unless they are using things like dedicated IP addresses).
In general, filing abuse reports about residential IP addresses is a waste of time. The administrators do appreciate knowing what kinds of abusive traffic are causing grief, but many times the users of those IP addresses don't know that their computer is sending abusive traffic to your services. A lot of the malware botnets that used to power DDoS-for-hire services are now being used as residential proxies. Those "free VPN apps" are almost certainly making you pay for your usage by turning your computer into a zombie in a botnet. At some level I really respect the hustle, as they manage to sell other people's bandwidth for rates as ludicrous as $1.00 per gigabyte ingressed and egressed.
:::note
Keep in mind, I'm talking about the things you can find by searching "free VPN", not infrastructure for the public good like the Tor browser or I2P.
:::
What you should really focus on is traffic from commercial IP addresses, such as cloud providers. That's a case where the cloud customer is in direct violation of the acceptable use policy of the provider. Filing abuse reports gets the abuse team of the cloud provider to reach out to that customer and demand corrective action under threat of contractual violence.
## How to make an abuse report
In general, the best abuse reports contain the following information:
- Time of abusive requests.
- IP address, User-Agent header, or other unique identifiers that can help the abuse team educate the customer about their misbehaving infrastructure.
- Does the abusive IP address request robots.txt? If not, be sure to include that information.
- A brief description of the impact to your system such as high system load, pages not rendering, or database system crashes. This helps the provider establish the fact that their customer is causing you measurable harm.
- Context as to what your service is, what it does, and why they should care.
For example, let's say that someone was giving the Anubis docs a series of requests that caused the server to fall over and experience extended downtime. Here's what I would write to the abuse contact:
> Hello,
>
> I have received abusive traffic from one of your customers that has resulted in a denial of service to the users of the Anubis documentation website. Anubis is a web application firewall that administrators use to protect their websites against mass scraping and this documentation website helps administrators get started.
>
> On or about Thursday, October 30th at 04:00 UTC, a flurry of requests from the IP range `127.34.0.0/24` started to hit the `/admin/` routes, which caused unreasonable database load and ended up crashing PostgreSQL. This caused the documentation website to go down for three hours, as it happened while the administrators were asleep. Based on logs, this caused 353 distinct users to not be able to load the documentation, and the users filed bugs about it.
>
> I have attached the HTTP frontend logs for the abusive requests from your IP range. To protect our systems in the meantime while we perform additional hardening, I have blocked that IP address range in both our IP firewall and web application firewall configuration. Based on these logs, your customer seems to not have requested the standard `robots.txt` file, which includes instructions to deny access to those routes.
>
> Please let me know what other information you need on your end.
>
> Sincerely,
>
> [normal email signature]
Then, in order to figure out where to send it, look the IP addresses up in the `whois` database. For example, to find the abuse contact for the IP address `1.1.1.1`, use the [whois command](https://packages.debian.org/sid/whois):
```
$ whois 1.1.1.1 | grep -i abuse
% Abuse contact for '1.1.1.0 - 1.1.1.255' is 'helpdesk@apnic.net'
abuse-c: AA1412-AP
remarks: All Cloudflare abuse reporting can be done via
remarks: resolver-abuse@cloudflare.com
abuse-mailbox: helpdesk@apnic.net
role: ABUSE APNICRANDNETAU
abuse-mailbox: helpdesk@apnic.net
mnt-by: APNIC-ABUSE
```
The abuse contact will be named either `abuse-c` or `abuse-mailbox`. For greatest effect, I suggest including all listed email addresses in your email to the abuse contact.
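If you do this often, a tiny helper can pull every mailbox out of the `whois` output in one go. This is a sketch: registries label the abuse contact differently (`abuse-mailbox`, `OrgAbuseEmail`, and so on), so matching raw email addresses is the simplest portable approach:

```shell
# Extract the unique email addresses from whois output piped on stdin.
# Usage: whois 1.1.1.1 | extract_abuse_contacts
extract_abuse_contacts() {
  grep -Eio '[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]+' | sort -u
}
```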
Once you send your email, you should expect a response within 2 business days at most. If they don't get back to you, please feel free to [contact me](https://xeiaso.net/contact/) so that the default set of Anubis rules can be edited according to patterns I'm seeing across the ecosystem.
Just remember that many cloud providers do not know how bad the scraping problem is. Filing abuse complaints makes it their problem. They don't want it to be their problem.


@@ -11,6 +11,475 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
- Add iplist2rule tool that lets admins turn an IP address blocklist into an Anubis ruleset.
- Add Polish locale ([#1292](https://github.com/TecharoHQ/anubis/pull/1309))
## v1.24.0: Y'shtola Rhul
Anubis is back and better than ever! Lots of minor fixes with some big ones interspersed.
- Fix panic when validating challenges after privacy-mode browsers strip headers and the follow-up request matches an `ALLOW` threshold.
- Expose WEIGHT rule matches as Prometheus metrics.
- Allow more OCI registry clients [based on feedback](https://github.com/TecharoHQ/anubis/pull/1253#issuecomment-3506744184).
- Expose services directory in the embedded `(data)` filesystem.
- Add Ukrainian locale ([#1044](https://github.com/TecharoHQ/anubis/pull/1044)).
- Allow Renovate as an OCI registry client.
- Properly handle 4in6 addresses so that IP matching works with those addresses.
- Add support for simple Valkey/Redis cluster mode.
- Open Graph passthrough now reuses the configured target Host/SNI/TLS settings, so metadata fetches succeed when the upstream certificate differs from the public domain. ([#1283](https://github.com/TecharoHQ/anubis/pull/1283))
- Stabilize the CVE-2025-24369 regression test by always submitting an invalid proof instead of relying on random POW failures.
- Refine the check that ensures the presence of the Accept header to avoid breaking docker clients.
- Removed rules intended to reward actual browsers due to abuse in the wild.
### Dataset poisoning
Anubis has the ability to engage in [dataset poisoning attacks](https://www.anthropic.com/research/small-samples-poison) using the [dataset poisoning subsystem](./admin/honeypot/overview.mdx). This allows every Anubis instance to be a honeypot to attract and flag abusive scrapers so that no administrator action is required to ban them.
There is much more information about this feature in [the dataset poisoning subsystem documentation](./admin/honeypot/overview.mdx). Administrators that are interested in learning how this feature works should consult that documentation.
### Deprecate `report_as` in challenge configuration
Previously Anubis let you lie to users about the difficulty of a challenge to interfere with operators of malicious scrapers as a psychological attack:
```yaml
bots:
# Punish any bot with "bot" in the user-agent string
# This is known to have a high false-positive rate, use at your own risk
- name: generic-bot-catchall
user_agent_regex: (?i:bot|crawler)
action: CHALLENGE
challenge:
difficulty: 16 # impossible
report_as: 4 # lie to the operator
algorithm: slow # intentionally waste CPU cycles and time
```
This has turned out to be a bad idea because it has caused massive user experience problems and has been removed. If you are using this setting, you will get a warning in your logs like this:
```json
{
"time": "2025-11-25T23:10:31.092201549-05:00",
"level": "WARN",
"source": {
"function": "github.com/TecharoHQ/anubis/lib/policy.ParseConfig",
"file": "/home/xe/code/TecharoHQ/anubis/lib/policy/policy.go",
"line": 201
},
"msg": "use of deprecated report_as setting detected, please remove this from your policy file when possible",
"at": "config-validate",
"name": "mild-suspicion"
}
```
To remove this warning, remove this setting from your policy file.
### Logging customization
Anubis now supports the ability to log to multiple backends ("sinks"). This allows you to have Anubis [log to a file](./admin/policies.mdx#file-sink) instead of just logging to standard out. You can also customize the [logging level](./admin/policies.mdx#log-levels) in the policy file:
```yaml
logging:
level: "warn" # much less verbose logging
sink: file # log to a file
parameters:
file: "./var/anubis.log"
maxBackups: 3 # keep at least 3 old copies
maxBytes: 67108864 # each file can have up to 64 Mi of logs
maxAge: 7 # rotate files out every n days
oldFileTimeFormat: 2006-01-02T15-04-05 # RFC 3339-ish
compress: true # gzip-compress old log files
useLocalTime: false # timezone for rotated files is UTC
```
Additionally, information about [how Anubis uses each logging level](./admin/policies.mdx#log-levels) has been added to the documentation.
### DNS Features
- CEL expressions for:
- FCrDNS checks
- Forward DNS queries
- Reverse DNS queries
- `arpaReverseIP` to transform IPv4/6 addresses into ARPA reverse IP notation.
- `regexSafe` to escape regex special characters (useful for including `remoteAddress` or headers in regular expressions).
- DNS cache and other optimizations to minimize unnecessary DNS queries.
The DNS cache TTL can be changed in the bots config like this:
```yaml
dns_ttl:
forward: 600
reverse: 600
```
The default value for both forward and reverse queries is 300 seconds.
The `verifyFCrDNS` CEL function has two overloads:
- `(addr)`
Simply verifies that the remote side has PTR records pointing to the target address.
- `(addr, ptrPattern)`
Verifies that the remote side refers to a specific domain and that this domain points to the target IP.
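As a sketch of how the two-argument overload might be used in a bot policy (the rule name and PTR pattern here are illustrative, not part of the default configuration):

```yaml
bots:
  # Only allow this client once forward-confirmed reverse DNS proves
  # the request really comes from the claimed crawler's network.
  - name: verified-friendly-crawler # illustrative name
    action: ALLOW
    expression: verifyFCrDNS(remoteAddress, ".crawler.example.com")
```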
## v1.23.1: Lyse Hext - Echo 1
- Fix `SERVE_ROBOTS_TXT` setting after the double slash fix broke it.
### Potentially breaking changes
#### Remove default Tencent Cloud block rule
v1.23.0 added a default rule to block Tencent Cloud. After an email from their abuse team where they promised to take action to clean up their reputation, I have removed the default block rule. If this network causes you problems, please contact [abuse@tencent.com](mailto:abuse@tencent.com) and supply the following information:
- Time of abusive requests.
- IP address, User-Agent header, or other unique identifiers that can help the abuse team educate the customer about their misbehaving infrastructure.
- Does the abusive IP address request robots.txt? If not, be sure to include that information.
- A brief description of the impact to your system such as high system load, pages not rendering, or database system crashes. This helps the provider establish the fact that their customer is causing you measurable harm.
- Context as to what your service is, what it does, and why they should care.
Mention that you are using Anubis or BotStopper to protect your services. If they do not respond to you, please [contact me](https://xeiaso.net/contact) as soon as possible.
#### Docker / OCI registry clients
Anubis v1.23.0 accidentally blocked Docker / OCI registry clients. In order to explicitly allow them, add an import for `(data)/clients/docker-client.yaml`:
```yaml
bots:
- import: (data)/meta/default-config.yaml
- import: (data)/clients/docker-client.yaml
```
This is technically a regression as these clients used to work in Anubis v1.22.0, however it is allowable to make this opt-in as most websites do not expect to be serving Docker / OCI registry client traffic.
## v1.23.0: Lyse Hext
- Add default Tencent Cloud DENY rule.
- Added `(data)/meta/default-config.yaml` for importing the entire default configuration at once.
- Add `-custom-real-ip-header` flag to get the original request IP from a different header than `x-real-ip`.
- Add `contentLength` variable to bot expressions.
- Add `COOKIE_SAME_SITE_MODE` to force the SameSite value of Anubis cookies, and automatically downgrade it from `None` to `Lax` if the cookie is insecure.
- Fix lock convoy problem in decaymap ([#1103](https://github.com/TecharoHQ/anubis/issues/1103)).
- Fix lock convoy problem in bbolt by implementing the actor pattern ([#1103](https://github.com/TecharoHQ/anubis/issues/1103)).
- Remove bbolt actorify implementation due to causing production issues.
- Document missing environment variables in installation guide: `SLOG_LEVEL`, `COOKIE_PREFIX`, `FORCED_LANGUAGE`, and `TARGET_DISABLE_KEEPALIVE` ([#1086](https://github.com/TecharoHQ/anubis/pull/1086)).
- Add validation warning when persistent storage is used without setting signing keys.
- Fixed `robots2policy` to properly group consecutive user agents into `any:` instead of only processing the last one ([#925](https://github.com/TecharoHQ/anubis/pull/925)).
- Make the `fast` algorithm prefer purejs when running in an insecure context.
- Add the [`s3api` storage backend](./admin/policies.mdx#s3api) to allow Anubis to use S3 API compatible object storage as its storage backend.
- Fix a "stutter" in the cookie name prefix so the auth cookie is named `techaro.lol-anubis-auth` instead of `techaro.lol-anubis-auth-auth`.
- Make `cmd/containerbuild` support commas for separating elements of the `--docker-tags` argument as well as newlines.
- Add the `DIFFICULTY_IN_JWT` option, which allows one to add the `difficulty` field in the JWT claims which indicates the difficulty of the token ([#1063](https://github.com/TecharoHQ/anubis/pull/1063)).
- Ported the client-side JS to TypeScript to avoid egregious errors in the future.
- Fixes concurrency problems with very old browsers ([#1082](https://github.com/TecharoHQ/anubis/issues/1082)).
- Randomly use the Refresh header instead of the meta refresh tag in the metarefresh challenge.
- Update OpenRC service to truncate the runtime directory before starting Anubis.
- Make the git client profile more strictly match how the git client behaves.
- Make the default configuration reward users using normal browsers.
- Allow multiple consecutive slashes in a row in application paths ([#754](https://github.com/TecharoHQ/anubis/issues/754)).
- Add option to set `targetSNI` to the special keyword 'auto' to indicate that it should be automatically set to the request Host name ([#424](https://github.com/TecharoHQ/anubis/issues/424)).
- The Preact challenge has been removed from the default configuration. It will be deprecated in the future.
- An open redirect when in subrequest mode has been fixed.
### Potentially breaking changes
#### Multiple checks at once have and-like semantics instead of or-like semantics
Anubis lets you stack multiple checks at once with blocks like this:
```yaml
name: allow-prometheus
action: ALLOW
user_agent_regex: ^prometheus-probe$
remote_addresses:
- 192.168.2.0/24
```
Previously, this returned ALLOW if _any one_ of the conditions matched. This behaviour has changed: ALLOW is only returned if _all_ of the conditions match. I expect this to cause some issues with user configs; however, this fix is grave enough that it's worth the risk of breaking them. If this bites you, please let me know so we can make an escape hatch.
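If a configuration relied on the old or-like behaviour, one workaround is to split the stacked rule into one rule per condition, so that either match on its own is enough to ALLOW. This is a sketch assuming rules are entries in the policy file's bot list:

```yaml
# Two separate rules approximate the old or-like behaviour:
# either match on its own is enough to ALLOW.
- name: allow-prometheus-ua
  action: ALLOW
  user_agent_regex: ^prometheus-probe$
- name: allow-prometheus-net
  action: ALLOW
  remote_addresses:
    - 192.168.2.0/24
```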
### Better error messages
In order to make it easier for legitimate clients to debug issues with their browser configuration and Anubis, Anubis will emit internal error details encoded in base64 so that administrators can chase down issues. Future versions may also include a variant that encrypts the error detail messages.
### Bug Fixes
Sometimes the enhanced temporal assurance in [#1038](https://github.com/TecharoHQ/anubis/pull/1038) and [#1068](https://github.com/TecharoHQ/anubis/pull/1068) could backfire because Chromium and its ilk randomize the amount of time they wait in order to avoid a timing side channel attack. This has been fixed by both increasing the amount of time a client has to wait for the metarefresh and preact challenges as well as making the server side logic more permissive.
## v1.22.0: Yda Hext
> Someone has to make an effort at reconciliation if these conflicts are ever going to end.
In this release, we finally fix the odd number of CPU cores bug, pave the way for lighter weight challenges, make Anubis more adaptable, and more.
### Big ticket items
#### Proof of React challenge
A new ["proof of React"](./admin/configuration/challenges/preact.mdx) has been added. It runs a simple app in React that has several chained hooks. It is much more lightweight than the proof of work check.
#### Smaller features
- The [`segments`](./admin/configuration/expressions.mdx#segments) function was added for splitting a path into its slash-separated segments.
- Added possibility to disable HTTP keep-alive to support backends not properly handling it.
- When issuing a challenge, Anubis stores information about that challenge into the store. That stored information is later used to validate challenge responses. This works around nondeterminism in bot rules. ([#917](https://github.com/TecharoHQ/anubis/issues/917))
- One of the biggest sources of lag in Firefox has been eliminated: the use of WebCrypto. Now whenever Anubis detects the client is using Firefox (or Pale Moon), it will swap over to a pure-JS implementation of SHA-256 for speed.
- Proof of work solving has had a complete overhaul and rethink based on feedback from browser engine developers, frontend experts, and overall performance profiling.
- Optimize the performance of the pure-JS Anubis solver.
- Web Workers are stored as dedicated JavaScript files in `static/js/workers/*.mjs`.
- Pave the way for non-SHA256 solver methods and eventually one that uses WebAssembly (or WebAssembly code compiled to JS for those that disable WebAssembly).
- Legacy JavaScript code has been eliminated.
- When parsing [Open Graph tags](./admin/configuration/open-graph.mdx), add any URLs found in the responses to a temporary "allow cache" so that social preview images work.
- The hard dependency on WebCrypto has been removed, allowing a proof of work challenge to work over plain (unencrypted) HTTP.
- The Anubis version number is put in the footer of every page.
- Add a default block rule for Huawei Cloud.
- Add a default block rule for Alibaba Cloud.
- Added support to use Traefik forwardAuth middleware.
- Add X-Request-URI support so that Subrequest Authentication has path support.
- Added glob matching for `REDIRECT_DOMAINS`. You can pass `*.bugs.techaro.lol` to allow redirecting to anything ending with `.bugs.techaro.lol`. There is a limit of 4 wildcards.
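The wildcard matching described for `REDIRECT_DOMAINS` can be sketched with the standard library's `path.Match`; this is an illustrative approximation under stated assumptions, not Anubis' actual implementation, and the four-wildcard limit shown here is enforced with a simple count:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// matchRedirectDomain reports whether host matches pattern, where "*"
// is a wildcard. Patterns with more than four wildcards are rejected,
// mirroring the documented limit.
func matchRedirectDomain(pattern, host string) bool {
	if strings.Count(pattern, "*") > 4 {
		return false
	}
	// Hostnames contain no "/", so path.Match's wildcard semantics
	// ("*" matches any run of non-separator characters) fit here.
	ok, err := path.Match(pattern, host)
	return err == nil && ok
}

func main() {
	fmt.Println(matchRedirectDomain("*.bugs.techaro.lol", "foo.bugs.techaro.lol"))
	fmt.Println(matchRedirectDomain("*.bugs.techaro.lol", "techaro.lol"))
}
```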
### Fixes
#### Odd numbers of CPU cores are properly supported
Some phones have an odd number of CPU cores. This caused [interesting issues](https://anubis.techaro.lol/blog/2025/cpu-core-odd). This was fixed by [using `Math.trunc` to convert the number of CPU cores back into an integer](https://github.com/TecharoHQ/anubis/issues/1043).
#### Smaller fixes
- A standard library HTTP server log message about HTTP pipelining not working has been filtered out of Anubis' logs. There is no action that can be taken about it.
- Added a missing link to the Caddy installation environment in the installation documentation.
- Downstream consumers can change the default [log/slog#Logger](https://pkg.go.dev/log/slog#Logger) instance that Anubis uses by setting `opts.Logger` to your slog instance of choice ([#864](https://github.com/TecharoHQ/anubis/issues/864)).
- The [Thoth client](https://anubis.techaro.lol/docs/admin/thoth) is now public in the repo instead of being an internal package.
- [Custom-AsyncHttpClient](https://github.com/AsyncHttpClient/async-http-client)'s default User-Agent has an increased weight by default ([#852](https://github.com/TecharoHQ/anubis/issues/852)).
- Add option for replacing the default explanation text with a custom one ([#747](https://github.com/TecharoHQ/anubis/pull/747))
- The contact email in the LibreJS header has been changed.
- Firefox for Android support has been fixed by embedding the challenge ID into the pass-challenge route. This also fixes some inconsistent issues with other mobile browsers.
- The default `favicon` pattern in `data/common/keep-internet-working.yaml` has been updated to permit requests for png/gif/jpg/svg files as well as ico.
- The `--cookie-prefix` flag has been fixed so that it is fully respected.
- The default patterns in `data/common/keep-internet-working.yaml` have been updated to appropriately escape the '.' character in the regular expression patterns.
- Add optional restrictions for JWT based on the value of a header ([#697](https://github.com/TecharoHQ/anubis/pull/697))
- The word "hack" has been removed from the translation strings for Anubis due to incidents involving people misunderstanding that word and sending particularly horrible things to the project lead over email.
- Bump AI-robots.txt to version 1.39
- Inject adversarial input to break AI coding assistants.
- Add better logging when using Subrequest Authentication.
### Security-relevant changes
- Add a server-side check for the meta-refresh challenge that makes sure clients have waited for at least 95% of the time that they should.
#### Fix potential double-spend for challenges
Anubis operates by issuing a challenge and having the client present a solution for that challenge. Challenges are identified by a unique UUID, which is stored in the database.
The problem is that a challenge could potentially be used twice by a dedicated attacker making a targeted attack against Anubis. Challenge records did not have a "spent" or "used" field. In total, a dedicated attacker could solve a challenge once and reuse that solution across multiple sessions in order to mint additional tokens.
This was fixed by adding a "spent" field to challenges in the data store. When a challenge is solved, that "spent" field gets set to `true`. If a future attempt to solve this challenge is observed, it gets rejected.
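The fix amounts to a check-and-set on the stored challenge record: the first redemption marks it spent, and replays are rejected. A minimal in-memory sketch; the type and method names are hypothetical stand-ins for the real storage backend:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var ErrChallengeSpent = errors.New("challenge already spent")

// spentStore is a minimal stand-in for the real storage backend.
type spentStore struct {
	mu    sync.Mutex
	spent map[string]bool
}

// Redeem marks a challenge ID as spent. Second and later attempts to
// redeem the same ID are rejected, preventing double-spends.
func (s *spentStore) Redeem(id string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.spent[id] {
		return ErrChallengeSpent
	}
	s.spent[id] = true
	return nil
}

func main() {
	store := &spentStore{spent: map[string]bool{}}
	fmt.Println(store.Redeem("chal-123")) // first redemption succeeds
	fmt.Println(store.Redeem("chal-123")) // replay is rejected
}
```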
With the advent of store based challenge issuance in [#749](https://github.com/TecharoHQ/anubis/pull/749), this means that these challenge IDs are [only good for 30 minutes](https://github.com/TecharoHQ/anubis/blob/e8dfff635015d6c906dddd49cb0eaf591326092a/lib/anubis.go#L130-L135). Websites using the most recent version of Anubis have limited exposure to this problem.
Websites using older versions of Anubis have much greater exposure to this problem and are encouraged to update as promptly as possible.
Thanks to [@taviso](https://github.com/taviso) for reporting this issue.
### Breaking changes
- The "slow" frontend solver has been removed in order to reduce maintenance burden. Any existing uses of it will still work, but issue a warning upon startup asking administrators to upgrade to the "fast" frontend solver.
- The legacy JSON based policy file example has been removed and all documentation for how to write a policy file in JSON has been deleted. JSON based policy files will still work, but YAML is the superior option for Anubis configuration.
### New Locales
- Lithuanian [#972](https://github.com/TecharoHQ/anubis/pull/972)
- Vietnamese [#926](https://github.com/TecharoHQ/anubis/pull/926)
## v1.21.3: Minfilia Warde - Echo 3
### Added
#### New locales
Anubis now supports these new languages:
- [Swedish](https://github.com/TecharoHQ/anubis/pull/913)
### Fixes
#### Fixes a problem with nonstandard URLs and redirects
Fixes [GHSA-jhjj-2g64-px7c](https://github.com/TecharoHQ/anubis/security/advisories/GHSA-jhjj-2g64-px7c).
This could allow an attacker to craft an Anubis pass-challenge URL that forces a redirect to nonstandard URLs, such as the `javascript:` scheme which executes arbitrary JavaScript code in a browser context when the user clicks the "Try again" button.
This has been fixed by disallowing any URLs without the scheme `http` or `https`.
Additionally, the "Try again" button has been fixed to completely ignore the user-supplied redirect location. It now redirects to the home page (`/`).
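The scheme check described above can be sketched with `net/url`; this illustrates the technique, not the code Anubis ships:

```go
package main

import (
	"fmt"
	"net/url"
)

// safeRedirect reports whether raw parses as a URL whose scheme is
// http or https; anything else (javascript:, data:, and so on) is
// rejected before it can be used as a redirect target.
func safeRedirect(raw string) bool {
	u, err := url.Parse(raw)
	if err != nil {
		return false
	}
	return u.Scheme == "http" || u.Scheme == "https"
}

func main() {
	fmt.Println(safeRedirect("https://example.com/"))
	fmt.Println(safeRedirect("javascript:alert(1)"))
}
```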
## v1.21.2: Minfilia Warde - Echo 2
This contained an incomplete fix for [GHSA-jhjj-2g64-px7c](https://github.com/TecharoHQ/anubis/security/advisories/GHSA-jhjj-2g64-px7c). Do not use this version.
## v1.21.1: Minfilia Warde - Echo 1
- Expired records are now properly removed from bbolt databases ([#848](https://github.com/TecharoHQ/anubis/pull/848)).
- Fix hanging on service restart ([#853](https://github.com/TecharoHQ/anubis/issues/853))
### Added
Anubis now supports the [`missingHeader`](./admin/configuration/expressions.mdx#missingHeader) function to assert the absence of headers in requests.
#### New locales
Anubis now supports these new languages:
- [Czech](https://github.com/TecharoHQ/anubis/pull/849)
- [Finnish](https://github.com/TecharoHQ/anubis/pull/863)
- [Norwegian Bokmål](https://github.com/TecharoHQ/anubis/pull/855)
- [Norwegian Nynorsk](https://github.com/TecharoHQ/anubis/pull/855)
- [Russian](https://github.com/TecharoHQ/anubis/pull/882)
### Fixes
#### Fix ["error: can't get challenge"](https://github.com/TecharoHQ/anubis/issues/869) when details about a challenge can't be found in the server side state
v1.21.0 changed the core challenge flow to maintain information about challenges on the server side instead of only doing them via stateless idempotent generation functions and relying on details to not change. There was a subtle bug introduced in this change: if a client has an unknown challenge ID set in its test cookie, Anubis will clear that cookie and then throw an HTTP 500 error.
This has been fixed by making Anubis throw a new challenge page instead.
#### Fix event loop thrashing when solving a proof of work challenge
Previously the "fast" proof of work solver had a fragment of JavaScript that attempted to only post an update about proof of work progress to the main browser window every 1024 iterations. This fragment of JavaScript was subtly incorrect in a way that passed review but actually made the workers send an update back to the main thread every iteration. This caused a pileup of unhandled async calls (similar to a socket accept() backlog pileup in Unix) that caused stack space exhaustion.
This has been fixed in the following ways:
1. The complicated boolean logic has been totally removed in favour of a worker-local iteration counter.
2. The progress bar is updated by worker `0` instead of all workers.
Hopefully this should limit the event loop thrashing and let ia32 browsers (as well as any environment with a smaller stack size than amd64 and aarch64 seem to have) function normally when processing Anubis proof of work challenges.
#### Fix potential memory leak when discovering a solution
In some cases, the parallel solution finder in Anubis could cause all of the worker promises to leak due to the fact the promises were being improperly terminated. This was fixed by having Anubis debounce worker termination instead of allowing it to potentially recurse infinitely.
## v1.21.0: Minfilia Warde
> Please, be at ease. You are among friends here.
In this release, Anubis becomes internationalized, gains the ability to use system load as input to issuing challenges, finally fixes the "invalid response" after "success" bug, and more! Please read these notes before upgrading as the changes are big enough that administrators should take action to ensure that the upgrade goes smoothly.
### Big ticket changes
The biggest change is that the ["invalid response" after "success" bug](https://github.com/TecharoHQ/anubis/issues/564) is now finally fixed for good by totally rewriting how Anubis' challenge issuance flow works. Instead of generating challenge strings from request metadata (under the assumption that the values being compared against are stable), Anubis now generates random data for each challenge. This data is stored in the active [storage backend](./admin/policies.mdx#storage-backends) for up to 30 minutes. This also fixes [#746](https://github.com/TecharoHQ/anubis/issues/746) and other similar instances of this issue.
In order to reduce confusion, the "Success" interstitial that shows up when you pass a proof of work challenge has been removed.
#### Storage
Anubis now is able to store things persistently [in memory](./admin/policies.mdx#memory), [on the disk](./admin/policies.mdx#bbolt), or [in Valkey](./admin/policies.mdx#valkey) (this includes other compatible software). By default Anubis uses the in-memory backend. If you have an environment with mutable storage (even if it is temporary), be sure to configure the [`bbolt`](./admin/policies.mdx#bbolt) storage backend.
#### Localization
Anubis now supports localized responses. Locales can be added in [lib/localization/locales/](https://github.com/TecharoHQ/anubis/tree/main/lib/localization/locales). This release includes support for the following languages:
- [Brazilian Portuguese](https://github.com/TecharoHQ/anubis/pull/726)
- [Chinese (Simplified)](https://github.com/TecharoHQ/anubis/pull/774)
- [Chinese (Traditional)](https://github.com/TecharoHQ/anubis/pull/759)
- English
- [Estonian](https://github.com/TecharoHQ/anubis/pull/783)
- [Filipino](https://github.com/TecharoHQ/anubis/pull/775)
- [French](https://github.com/TecharoHQ/anubis/pull/716)
- [German](https://github.com/TecharoHQ/anubis/pull/741)
- [Icelandic](https://github.com/TecharoHQ/anubis/pull/780)
- [Italian](https://github.com/TecharoHQ/anubis/pull/778)
- [Japanese](https://github.com/TecharoHQ/anubis/pull/772)
- [Spanish](https://github.com/TecharoHQ/anubis/pull/716)
- [Turkish](https://github.com/TecharoHQ/anubis/pull/751)
If circumstances or local regulations demand it, you can set Anubis' default language with the `FORCED_LANGUAGE` environment variable or the `--forced-language` command line argument:
```sh
FORCED_LANGUAGE=de
```
#### Load average
Anubis can dynamically take action [based on the system load average](./admin/configuration/expressions.mdx#using-the-system-load-average), allowing you to write rules like this:
```yaml
## System load based checks.
# If the system is under high load for the last minute, add weight.
- name: high-load-average
action: WEIGH
expression: load_1m >= 10.0 # make sure to end the load comparison in a .0
weight:
adjust: 20
# If it is not for the last 15 minutes, remove weight.
- name: low-load-average
action: WEIGH
expression: load_15m <= 4.0 # make sure to end the load comparison in a .0
weight:
adjust: -10
```
Something to keep in mind about system load average is that it is not aware of the number of cores the system has. If you have a 16 core system that has 16 processes running but none of them is hogging the CPU, then you will get a load average below 16. If you are in doubt, make your "high load" metric at least two times the number of CPU cores and your "low load" metric at least half of the number of CPU cores. For example:
| Kind | Core count | Load threshold |
| --------: | :--------- | :------------- |
| high load | 4 | `8.0` |
| low load | 4 | `2.0` |
| high load | 16 | `32.0` |
| low load | 16 | `8.0` |
Also keep in mind that this does not account for other kinds of latency like I/O latency. A system can have its web applications unresponsive due to high latency from a MySQL server but still have that web application server report a load near or at zero.
### Other features and fixes
There are a bunch of other assorted features and fixes too:
- Add `COOKIE_SECURE` option to set the cookie [Secure flag](https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/Cookies#block_access_to_your_cookies)
- Sets cookie defaults to use [SameSite: None](https://web.dev/articles/samesite-cookies-explained)
- Determine the `BIND_NETWORK`/`--bind-network` value from the bind address ([#677](https://github.com/TecharoHQ/anubis/issues/677)).
- Implement a [development container](https://containers.dev/) manifest to make contributions easier.
- Fix dynamic cookie domains functionality ([#731](https://github.com/TecharoHQ/anubis/pull/731))
- Add option for custom cookie prefix ([#732](https://github.com/TecharoHQ/anubis/pull/732))
- Make the [Open Graph](./admin/configuration/open-graph.mdx) subsystem and DNSBL subsystem use [storage backends](./admin/policies.mdx#storage-backends) instead of storing everything in memory by default.
- Allow [Common Crawl](https://commoncrawl.org/) by default so scrapers have less incentive to scrape
- The [bbolt storage backend](./admin/policies.mdx#bbolt) now runs its cleanup every hour instead of every five minutes.
- Don't block Anubis starting up if [Thoth](./admin/thoth.mdx) health checks fail.
- A race condition involving [opening two challenge pages at once in different tabs](https://github.com/TecharoHQ/anubis/issues/832) causing one of them to fail has been fixed.
- The "Try again" button on the error page has been fixed. Previously it meant "try the solution again" instead of "try the challenge again".
- In certain cases, a user could be stuck with a test cookie that is invalid, locking them out of the service for up to half an hour. This has been fixed with better validation of this case and clearing the cookie.
- Start exposing JA4H fingerprints for later use in CEL expressions.
- Add `/healthz` route for use in platform-based health checks.
### Potentially breaking changes
We try to avoid introducing breaking changes as much as possible, but these are the changes that may be relevant for you as an administrator:
#### Challenge format change
Previously, Anubis did no accounting for the challenges it issued. This meant that if Anubis restarted while a client was mid-challenge, the client could still proceed once Anubis came back online.
During the upgrade to v1.21.0 and when v1.21.0 (or later) restarts with the [in-memory storage backend](./admin/policies.mdx#memory), you may see a higher rate of failed challenges than normal. If this persists beyond a few minutes, [open an issue](https://github.com/TecharoHQ/anubis/issues/new).
If you are using the in-memory storage backend, please consider using [a different storage backend](./admin/policies.mdx#storage-backends).
#### Systemd service changes
The following potentially breaking change applies to native installs with systemd only:
Each instance of systemd service template now has a unique `RuntimeDirectory`, as opposed to each instance of the service sharing a `RuntimeDirectory`. This change was made to avoid [the `RuntimeDirectory` getting nuked any time one of the Anubis instances restarts](https://github.com/TecharoHQ/anubis/issues/748).
If you configured Anubis' unix sockets to listen on `/run/anubis/foo.sock` for instance `anubis@foo`, you will need to configure Anubis to listen on `/run/anubis/foo/foo.sock` and additionally configure your HTTP load balancer as appropriate.
If you need the legacy behaviour, install this [systemd unit dropin](https://www.flatcar.org/docs/latest/setup/systemd/drop-in-units/):
```systemd
# /etc/systemd/system/anubis@.service.d/50-runtimedir.conf
[Service]
RuntimeDirectory=anubis
```
Just keep in mind that this will cause problems when Anubis restarts.
## v1.20.0: Thancred Waters
The big ticket items are as follows:
A lot of performance improvements have been made:
And some cleanups/refactors were added:
- Fix OpenGraph passthrough ([#717](https://github.com/TecharoHQ/anubis/issues/717))
- Remove the unused `/test-error` endpoint and update the testing endpoint `/make-challenge` to only be enabled in development
- Add `--xff-strip-private` flag/envvar to toggle skipping X-Forwarded-For private addresses or not
- Make progress bar styling more compatible (UXP, etc)
- Add `--strip-base-prefix` flag/envvar to strip the base prefix from request paths when forwarding to target servers
- Fix an off-by-one in the default threshold config
- Add functionality for HS512 JWT algorithm
- Add support for dynamic cookie domains with the `--cookie-dynamic-domain`/`COOKIE_DYNAMIC_DOMAIN` flag/envvar
Request weight is one of the biggest ticket features in Anubis. This enables Anubis to be much closer to a Web Application Firewall and when combined with custom thresholds allows administrators to have Anubis take advanced reactions. For more information about request weight, see [the request weight section](./admin/policies.mdx#request-weight) of the policy file documentation.

---
title: Proof-of-Work Algorithm Selection
---
Anubis offers two proof-of-work algorithms:
- `"fast"`: highly optimized JavaScript that will run as fast as your computer lets it
- `"slow"`: intentionally slow JavaScript that will waste time and memory
The fast algorithm is used by default to limit impacts on users' computers. Administrators may configure individual bot policy rules to use the slow algorithm in order to make known malicious clients waitloop and do nothing useful.
Generally, you should use the fast algorithm unless you have a good reason not to.

---
title: "Commercial support and an unbranded version"
---
If you want to use Anubis but organizational policies prevent you from using the branding that the open source project ships, we offer a commercial version of Anubis named BotStopper. BotStopper builds off of the open source core of Anubis and offers organizations more control over the branding, including but not limited to:
- Custom images for different states of the challenge process (in process, success, failure)
- Custom CSS and fonts
- Custom titles for the challenge and error pages
- "Anubis" replaced with "BotStopper" across the UI
- A private bug tracker for issues
In the near future this will expand to:
- A private challenge implementation that does advanced fingerprinting to check if the client is a genuine browser or not
- Advanced fingerprinting via [Thoth-based advanced checks](./thoth.mdx)
In order to sign up for BotStopper, please do one of the following:
- Sign up [on GitHub Sponsors](https://github.com/sponsors/Xe) at the $50 per month tier or higher
- Email [sales@techaro.lol](mailto:sales@techaro.lol) with your requirements for invoicing; note that custom invoicing costs more than going through GitHub Sponsors, for understandable overhead reasons
## Installation
Install BotStopper like you would Anubis, but replace the image reference. For example:
```diff
-ghcr.io/techarohq/anubis:latest
+ghcr.io/techarohq/botstopper/anubis:latest
```
### Binary packages
Binary packages are available [in the GitHub Releases page](https://github.com/TecharoHQ/botstopper/releases), the main difference is that the package name is `techaro-botstopper`, the systemd service is `techaro-botstopper@your-instance.service`, the binary is `/usr/bin/botstopper`, and the configuration is in `/etc/techaro-botstopper`. All other instructions in the [native package install guide](./native-install.mdx) apply.
### Docker / Podman
In order to pull the BotStopper image, you need to [authenticate with GitHub's Container Registry](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-to-the-container-registry).
```text
echo "$GITHUB_TOKEN" | docker login ghcr.io -u your-username --password-stdin
```
Then you can use the image as normal.
### Kubernetes
If you are using Kubernetes, you will need to create an image pull secret:
```text
kubectl create secret docker-registry \
techarohq-botstopper \
--docker-server ghcr.io \
--docker-username your-username \
--docker-password your-access-token \
--docker-email your@email.address
```
Then attach it to your Deployment:
```diff
spec:
securityContext:
fsGroup: 1000
+ imagePullSecrets:
+ - name: techarohq-botstopper
```
## Configuration
### Docker compose
Follow [the upstream Docker compose directions](https://anubis.techaro.lol/docs/admin/environments/docker-compose) with the following additional options:
```diff
anubis:
image: ghcr.io/techarohq/botstopper/anubis:latest
environment:
BIND: ":8080"
DIFFICULTY: "4"
METRICS_BIND: ":9090"
SERVE_ROBOTS_TXT: "true"
TARGET: "http://nginx"
OG_PASSTHROUGH: "true"
OG_EXPIRY_TIME: "24h"
+ # botstopper config here
+ CHALLENGE_TITLE: "Doing math for your connection!"
+ ERROR_TITLE: "Something went wrong!"
+ OVERLAY_FOLDER: /assets
+ volumes:
+ - "./your_folder:/assets"
```
#### Example
There is an example in [docker-compose.yaml](https://github.com/TecharoHQ/botstopper/blob/main/docker-compose.yaml). Start the example with `docker compose up`:
```text
docker compose up -d
```
And then open [https://botstopper.local.cetacean.club:8443](https://botstopper.local.cetacean.club:8443) in your browser.
> [!NOTE]
> This uses locally signed sacrificial TLS certificates stored in `./demo/pki`. Your browser will rightly reject these. Here is what the example looks like:
>
> ![](/img/botstopper/example-screenshot.webp)
## Custom images and CSS
Anubis uses an internal filesystem that contains CSS, JavaScript, and images. The BotStopper variant of Anubis lets you specify an overlay folder with the environment variable `OVERLAY_FOLDER`. The contents of this folder will be overlaid on top of Anubis' internal filesystem, allowing you to easily customize the images and CSS.
Your directory tree should look like this, assuming your data is in `./your_folder`:
```text
./your_folder
└── static
├── css
│ └── custom.css
└── img
├── happy.webp
├── pensive.webp
└── reject.webp
```
For an example directory tree using some off-the-shelf images from the Tango icon set, see the [testdata](https://github.com/TecharoHQ/botstopper/tree/main/testdata/static/img) folder.
### Header-based overlay dispatch
If you run BotStopper in a multi-tenant environment where each tenant needs its own branding, BotStopper supports the ability to use request header values to direct asset reads to different folders under your `OVERLAY_FOLDER`. One of the most common ways to do this is based on the HTTP Host of the request. For example, if you set `ASSET_LOOKUP_HEADER=Host` in BotStopper's environment:
```text
$OVERLAY_FOLDER
├── static
│ ├── css
│ │ ├── custom.css
│ │ └── eyesore.css
│ └── img
│ ├── happy.webp
│ ├── pensive.webp
│ └── reject.webp
└── test.anubis.techaro.lol
└── static
├── css
│ └── custom.css
└── img
├── happy.webp
├── pensive.webp
└── reject.webp
```
Requests to `test.anubis.techaro.lol` will load assets in `$OVERLAY_FOLDER/test.anubis.techaro.lol/static` and all other requests will load them from `$OVERLAY_FOLDER/static`.
For an example, look at [the testdata folder in the BotStopper repo](https://github.com/TecharoHQ/botstopper/tree/main/testdata).
### Custom CSS
CSS customization is done mainly with CSS variables. View [the example custom CSS file](https://github.com/TecharoHQ/botstopper/blob/main/testdata/static/css/custom.css) for more information about what can be customized.
### Custom fonts
If you want to add custom fonts, copy the `woff2` files alongside your `custom.css` file and then include them with the [`@font-face` CSS at-rule](https://developer.mozilla.org/en-US/docs/Web/CSS/@font-face):
```css
@font-face {
font-family: "Oswald";
font-style: normal;
font-weight: 200 900;
font-display: swap;
src: url("./fonts/oswald.woff2") format("woff2");
}
```
Then adjust your CSS variables accordingly:
```css
:root {
--body-sans-font: Oswald, sans-serif;
--body-preformatted-font: monospace;
--body-title-font: serif;
}
```
To convert `.ttf` fonts to [Web-optimized woff2 fonts](https://www.w3.org/TR/WOFF2/), use the `woff2_compress` command from the `woff2` or `woff2-tools` package:
```console
$ woff2_compress oswald.ttf
Processing oswald.ttf => oswald.woff2
Compressed 159517 to 70469.
```
Then you can import and use it as normal.
### Customizing images
Anubis uses three images to visually communicate the state of the program. These are:
| Image name | Intended message | Example |
| :------------- | :----------------------------------------------- | :-------------------------------- |
| `happy.webp` | You have passed validation, all is good | ![](/img/botstopper/happy.webp) |
| `pensive.webp` | Checking is running, hold steady until it's done | ![](/img/botstopper/pensive.webp) |
| `reject.webp` | Something went wrong, this is a terminal state | ![](/img/botstopper/reject.webp) |
To make your own images at the optimal quality, use the following ffmpeg command:
```text
ffmpeg -i /path/to/image -vf scale=-1:384 happy.webp
```
`ffprobe` should report something like this on the generated images:
```text
Input #0, webp_pipe, from 'happy.webp':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: webp, none, 25 fps, 25 tbr, 25 tbn
```
In testing, 384 by 384 pixels gives the best balance between file size, quality, and clarity.
```text
$ du -hs *
4.0K happy.webp
12K pensive.webp
8.0K reject.webp
```
## Custom HTML templates
If you need to completely control the HTML layout of all Anubis pages, you can customize the entire page with `USE_TEMPLATES=true`. This uses Go's standard library [html/template](https://pkg.go.dev/html/template) package to template HTML responses. Your templates can contain whatever HTML you want. The only catch is that you MUST include `{{ .Head }}` in the `<head>` element for challenge pages, and you MUST include `{{ .Body }}` in the `<body>` element for all pages.
In order to use this, you must define the following templates:
| Template path | Usage |
| :----------------------------------------- | :---------------------------------------------- |
| `$OVERLAY_FOLDER/templates/challenge.tmpl` | Challenge pages |
| `$OVERLAY_FOLDER/templates/error.tmpl` | Error pages |
| `$OVERLAY_FOLDER/templates/impressum.tmpl` | [Impressum](./configuration/impressum.mdx) page |
:::note
Currently HTML templates don't work together with [Header-based overlay dispatch](#header-based-overlay-dispatch). This is a known issue that will be fixed soon. If you enable header-based overlay dispatch, BotStopper will use the global `templates` folder instead of using the templates present in the overlay.
:::
Here are minimal (but working) examples for each template:
<details>
<summary>`challenge.tmpl`</summary>
:::note
You **MUST** include the `{{.Head}}` segment in a `<head>` tag. It contains important information for challenges to execute. If you don't include this, no clients will be able to pass challenges.
:::
```html
<!DOCTYPE html>
<html lang="{{ .Lang }}">
<head>
{{ .Head }}
</head>
<body>
{{ .Body }}
</body>
</html>
```
</details>
<details>
<summary>`error.tmpl`</summary>
```html
<!DOCTYPE html>
<html lang="{{ .Lang }}">
<body>
{{ .Body }}
</body>
</html>
```
</details>
<details>
<summary>`impressum.tmpl`</summary>
```html
<!DOCTYPE html>
<html lang="{{ .Lang }}">
<body>
{{ .Body }}
</body>
</html>
```
</details>
### Template functions
In order to make life easier, the following template functions are defined:
#### `Asset`
Constructs the path for a static asset in the [overlay folder](#custom-images-and-css)'s `static` directory.
```go
func Asset(string) string
```
Usage:
```html
<link rel="stylesheet" href="{{ Asset "css/example.css" }}" />
```
Generates:
```html
<link
rel="stylesheet"
href="/.within.website/x/cmd/anubis/static/css/example.css"
/>
```
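Based on the generated output above, `Asset` behaves like a simple prefix join. Here is an illustrative sketch of that behavior (not the actual implementation, which also handles cache-busting concerns):

```go
package main

import "fmt"

// Asset is a sketch of the documented behavior: prefix the asset name
// with Anubis' static file mount point. Illustrative only.
func Asset(name string) string {
	return "/.within.website/x/cmd/anubis/static/" + name
}

func main() {
	fmt.Println(Asset("css/example.css"))
	// → /.within.website/x/cmd/anubis/static/css/example.css
}
```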
## Customizing messages
You can customize messages using the following environment variables:
| Message | Environment variable | Default |
| :------------------- | :------------------- | :----------------------------------------- |
| Challenge page title | `CHALLENGE_TITLE` | `Ensuring the security of your connection` |
| Error page title | `ERROR_TITLE` | `Error` |
For example:
```sh
# /etc/techaro-botstopper/gitea.env
CHALLENGE_TITLE="Wait a moment please!"
ERROR_TITLE="Client error"
```
# Client IP Headers
Currently Anubis will always flatten the `X-Forwarded-For` header when it contains multiple IP addresses. Scanning from right to left, the first IP address that is not in one of the following categories will be set as `X-Forwarded-For` in the request passed to the upstream.
- Private (`XFF_STRIP_PRIVATE`, enabled by default)
- CGNAT (always stripped)
- Link-local Unicast (always stripped)
```
Incoming: X-Forwarded-For: 1.2.3.4, 5.6.7.8, 10.0.0.1
Upstream: X-Forwarded-For: 5.6.7.8
```
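The right-to-left scan can be sketched in Go. This is an illustrative approximation, not the actual Anubis code; the `flattenXFF` name and the explicit CGNAT prefix are assumptions:

```go
package main

import (
	"fmt"
	"net/netip"
	"strings"
)

// flattenXFF scans the X-Forwarded-For list from right to left and
// returns the first address that is not private, CGNAT, or
// link-local unicast. Illustrative sketch only.
func flattenXFF(xff string) string {
	cgnat := netip.MustParsePrefix("100.64.0.0/10")
	parts := strings.Split(xff, ",")
	for i := len(parts) - 1; i >= 0; i-- {
		addr, err := netip.ParseAddr(strings.TrimSpace(parts[i]))
		if err != nil {
			continue
		}
		if addr.IsPrivate() || addr.IsLinkLocalUnicast() || cgnat.Contains(addr) {
			continue
		}
		return addr.String()
	}
	return ""
}

func main() {
	// Matches the example above: 10.0.0.1 is private, so 5.6.7.8 wins.
	fmt.Println(flattenXFF("1.2.3.4, 5.6.7.8, 10.0.0.1"))
}
```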
This behavior will cause problems if the proxy in front of Anubis connects from a public IP, as Cloudflare does, because Anubis will use the Cloudflare IP instead of your client's real IP. You will likely see all requests from your browser being blocked and/or an infinite challenge loop.
```
Incoming: X-Forwarded-For: REAL_CLIENT_IP, CF_IP
Upstream: X-Forwarded-For: CF_IP
```
As a workaround, you should configure your web server to parse an alternative source (such as `CF-Connecting-IP`), or pre-process the incoming `X-Forwarded-For` with your web server to ensure it only contains the real client IP address, then pass it to Anubis as `X-Forwarded-For`.
If you do not control the web server upstream of Anubis, the `custom-real-ip-header` command line flag accepts a header value that Anubis will read the real client IP address from. Anubis will set the `X-Real-IP` header to the IP address found in the custom header.
The `X-Real-IP` header is automatically inferred from `X-Forwarded-For` if not set; setting it explicitly is not necessary as long as `X-Forwarded-For` contains only the real client IP. However, setting it explicitly can eliminate spoofed values if your web server doesn't set this header.
See [Cloudflare](environments/cloudflare.mdx) for an example configuration.
{
  "label": "Challenges",
  "position": 10,
  "link": null
}

View File

@@ -0,0 +1,9 @@
# Challenge Methods
Anubis supports multiple challenge methods:
- [Meta Refresh](./metarefresh.mdx)
- [Preact](./preact.mdx)
- [Proof of Work](./proof-of-work.mdx)
Read the documentation for each method to decide which one is best for you.
To use it in your Anubis configuration:
```yaml
action: CHALLENGE
challenge:
difficulty: 1 # Number of seconds to wait before refreshing the page
algorithm: metarefresh # Specify a non-JS challenge method
```
# Preact
The `preact` challenge sends the browser a simple challenge that makes it run very lightweight JavaScript that proves the client is able to execute client-side JavaScript. It uses [Preact](https://www.npmjs.com/package/preact) (a lightweight client side web framework in the vein of React) to do this.
To use it in your Anubis configuration:
```yaml
# Generic catchall rule
- name: generic-browser
user_agent_regex: >-
Mozilla|Opera
action: CHALLENGE
challenge:
difficulty: 1 # Number of seconds to wait before refreshing the page
algorithm: preact
```
This is the default challenge method for most clients.
For this rule, if a request comes in from `8.8.8.8` or `1.1.1.1`, Anubis will deny the request and return an error page.
### `all` blocks
An `all` block contains a list of expressions. If all expressions in the list return `true`, then the action specified in the rule will be taken. If any expression in the list returns `false`, Anubis will move on to the next rule.
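For example, a hypothetical `all` rule that only allows requests matching the signature of the `go get` command might look like this (the rule name and exact expressions are illustrative assumptions, not the shipped configuration):

```yaml
- name: go-get
  action: ALLOW
  expression:
    all:
      - userAgent.startsWith("Go-http-client/")
      - '"go-get" in query'
      - query["go-get"] == "1"
```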
Anubis exposes the following variables to expressions:
| Name | Type | Explanation | Example |
| :-------------- | :-------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------------------------------- |
| `headers` | `map[string, string]` | The [headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers) of the request being processed. | `{"User-Agent": "Mozilla/5.0 Gecko/20100101 Firefox/137.0"}` |
| `host` | `string` | The [HTTP hostname](https://web.dev/articles/url-parts#host) the request is targeted to. | `anubis.techaro.lol` |
| `contentLength` | `int64` | The numerical value of the `Content-Length` header. | `2048` |
| `load_1m` | `double` | The current system load average over the last one minute. This is useful for making [load-based checks](#using-the-system-load-average). | `4.66` |
| `load_5m` | `double` | The current system load average over the last five minutes. This is useful for making [load-based checks](#using-the-system-load-average). | `2.32` |
| `load_15m` | `double` | The current system load average over the last fifteen minutes. This is useful for making [load-based checks](#using-the-system-load-average). | `1.04` |
| `method` | `string` | The [HTTP method](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Methods) in the request being processed. | `GET`, `POST`, `DELETE`, etc. |
| `path` | `string` | The [path](https://web.dev/articles/url-parts#pathname) of the request being processed. | `/`, `/api/memes/create` |
| `query` | `map[string, string]` | The [query parameters](https://web.dev/articles/url-parts#query) of the request being processed. | `?foo=bar` -> `{"foo": "bar"}` |
| `remoteAddress` | `string` | The IP address of the client. | `1.1.1.1` |
| `userAgent` | `string` | The [`User-Agent`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/User-Agent) string in the request being processed. | `Mozilla/5.0 Gecko/20100101 Firefox/137.0` |
Of note: in many languages when you look up a key in a map and there is nothing there, the language will return some "falsy" value like `undefined` in JavaScript, `None` in Python, or the zero value of the type in Go. In CEL, if you try to look up a value that does not exist, execution of the expression will fail and Anubis will return an error.
In order to avoid this, make sure the header or query parameter you are testing is present, for example:
```yaml
- 'path == "/index.php"'
- '"title" in query'
- '"action" in query'
- 'query["action"] == "history"'
```
This rule throws a challenge if and only if all of the following conditions are true:
Anubis would return a challenge because all of those conditions are true.
### Using the system load average
In Unix-like systems (such as Linux), every process on the system has to wait its turn to be able to run. This means that as more processes on the system are running, they need to wait longer to be able to execute. The [load average](<https://en.wikipedia.org/wiki/Load_(computing)>) represents the number of processes that want to be able to run but can't run yet. This metric isn't the most reliable to identify a cause, but is great at helping to identify symptoms.
Anubis lets you use the system load average as an input to expressions so that you can make dynamic rules like "when the system is under a low amount of load, dial back the protection, but when it's under a lot of load, crank it up to the max". This lets you keep all of the blocking features of Anubis in the background but only really expose Anubis to users when the system is actively being attacked.
This is best combined with the [weight](../policies.mdx#request-weight) and [threshold](./thresholds.mdx) systems so that you can have Anubis dynamically respond to attacks. Consider these rules in the default configuration file:
```yaml
## System load based checks.
# If the system is under high load for the last minute, add weight.
- name: high-load-average
action: WEIGH
expression: load_1m >= 10.0 # make sure to end the load comparison in a .0
weight:
adjust: 20
# If it is not for the last 15 minutes, remove weight.
- name: low-load-average
action: WEIGH
expression: load_15m <= 4.0 # make sure to end the load comparison in a .0
weight:
adjust: -10
```
This combination of rules makes Anubis dynamically react to the system load and only kick in when the system is under attack.
Something to keep in mind about system load average is that it is not aware of the number of cores the system has. If you have a 16 core system that has 16 processes running but none of them is hogging the CPU, then you will get a load average below 16. If you are in doubt, make your "high load" metric at least two times the number of CPU cores and your "low load" metric at least half of the number of CPU cores. For example:
| Kind | Core count | Load threshold |
| --------: | :--------- | :------------- |
| high load | 4 | `8.0` |
| low load | 4 | `2.0` |
| high load | 16 | `32.0` |
| low load | 16 | `8.0` |
Also keep in mind that this does not account for other kinds of latency like I/O latency. A system can have its web applications unresponsive due to high latency from a MySQL server but still have that web application server report a load near or at zero.
## Functions exposed to Anubis expressions
Anubis expressions can be augmented with the following functions:
### `missingHeader`
Available in `bot` expressions.
```ts
function missingHeader(headers: Record<string, string>, key: string): bool;
```
`missingHeader` returns `true` if the request does not contain the named header. This is useful when you are trying to assert behavior such as:
```yaml
# Adds weight to old versions of Chrome
- name: old-chrome
action: WEIGH
weight:
adjust: 10
expression:
all:
- userAgent.matches("Chrome/[1-9][0-9]?\\.0\\.0\\.0")
- missingHeader(headers, "Sec-Ch-Ua")
```
### `randInt`
Available in all expressions.
```ts
function randInt(n: int): int;
```
This is best applied when doing explicit block rules.
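For example, a hypothetical block rule like this denies a known-bad client most of the time while occasionally letting a request through (the rule name, user agent, and threshold are illustrative assumptions; this assumes `randInt(n)` returns a value in `[0, n)`):

```yaml
- name: known-bad-bot
  action: DENY
  expression:
    all:
      - userAgent.contains("BadBot")
      - randInt(16) >= 4
```

Under that assumption, roughly twelve out of every sixteen matching requests are denied.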
It seems counter-intuitive to allow known bad clients through sometimes, but this allows you to confuse attackers by making Anubis' behavior random. Adjust the thresholds and numbers as facts and circumstances demand.
### `regexSafe`
Available in `bot` expressions.
```ts
function regexSafe(input: string): string;
```
`regexSafe` takes a string and escapes it for safe use inside of a regular expression. This is useful when you are creating regular expressions from headers or variables such as `remoteAddress`.
| Input | Output |
| :------------------------ | :------------------------------ |
| `regexSafe("1.2.3.4")` | `1\\.2\\.3\\.4` |
| `regexSafe("techaro.lol")` | `techaro\\.lol` |
| `regexSafe("star*")` | `star\\*` |
| `regexSafe("plus+")` | `plus\\+` |
| `regexSafe("{braces}")` | `\\{braces\\}` |
| `regexSafe("start^")` | `start\\^` |
| `regexSafe("back\\slash")` | `back\\\\slash` |
| `regexSafe("dash-dash")` | `dash\\-dash` |
### `segments`
Available in `bot` expressions.
```ts
function segments(path: string): string[];
```
`segments` splits a path into its slash-separated segments, ignoring the leading slash. Here is what it will return with some common paths:
| Input | Output |
| :----------------------- | :--------------------- |
| `segments("/")` | `[""]` |
| `segments("/foo/bar")`   | `["foo", "bar"]`      |
| `segments("/users/xe/")` | `["users", "xe", ""]` |
:::note
If the path ends with a `/`, then the last element of the result will be an empty string. This is because `/users/xe` and `/users/xe/` are semantically different paths.
:::
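The behavior in the table above can be sketched in Go; this is an illustrative approximation, not the actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// segments splits the path on "/", dropping the empty element
// produced by the leading slash. A trailing slash yields a trailing
// empty string, matching the documented behavior.
func segments(path string) []string {
	return strings.Split(strings.TrimPrefix(path, "/"), "/")
}

func main() {
	fmt.Printf("%q\n", segments("/users/xe/")) // → ["users" "xe" ""]
}
```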
This is useful if you want to write rules that allow requests that have no query parameters only if they have less than two path segments:
```yaml
- name: two-path-segments-no-query
action: ALLOW
expression:
all:
- size(query) == 0
- size(segments(path)) < 2
```
### DNS Functions
Anubis can also perform DNS lookups as a part of its expression evaluation. This can be useful for doing things like checking for a valid [Forward-confirmed reverse DNS (FCrDNS)](https://en.wikipedia.org/wiki/Forward-confirmed_reverse_DNS) record.
#### `arpaReverseIP`
Available in `bot` expressions.
```ts
function arpaReverseIP(ip: string): string;
```
`arpaReverseIP` takes an IP address and returns its value in [ARPA notation](https://www.ietf.org/rfc/rfc2317.html). This can be useful when matching PTR record patterns.
| Input | Output |
| :----------------------------- | :------------------------------------------------------------------- |
| `arpaReverseIP("1.2.3.4")` | `4.3.2.1` |
| `arpaReverseIP("2001:db8::1")` | `1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2` |
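For IPv4, the transformation is simply the dotted quads in reverse order. A Go sketch of that case (illustrative only; IPv6 nibble expansion is omitted):

```go
package main

import (
	"fmt"
	"strings"
)

// arpaReverseIPv4 reverses the dotted quads of an IPv4 address,
// matching the documented output for arpaReverseIP("1.2.3.4").
func arpaReverseIPv4(ip string) string {
	parts := strings.Split(ip, ".")
	for i, j := 0, len(parts)-1; i < j; i, j = i+1, j-1 {
		parts[i], parts[j] = parts[j], parts[i]
	}
	return strings.Join(parts, ".")
}

func main() {
	fmt.Println(arpaReverseIPv4("1.2.3.4")) // → 4.3.2.1
}
```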
#### `lookupHost`
Available in `bot` expressions.
```ts
function lookupHost(host: string): string[];
```
`lookupHost` performs a DNS lookup for the given hostname and returns a list of IP addresses.
```yaml
- name: cloudflare-ip-in-host-header
action: DENY
expression: '"104.16.0.0" in lookupHost(headers["Host"])'
```
#### `reverseDNS`
Available in `bot` expressions.
```ts
function reverseDNS(ip: string): string[];
```
`reverseDNS` takes an IP address and returns the DNS names associated with it. This is useful when you want to check PTR records of an IP address.
```yaml
- name: allow-googlebot
action: ALLOW
  expression: 'reverseDNS(remoteAddress).exists(n, n.endsWith(".googlebot.com"))'
```
:::warning
Do not use this for validating the legitimacy of an IP address. It is possible for DNS records to be out of date or otherwise manipulated. Use [`verifyFCrDNS`](#verifyfcrdns) instead for a more reliable result.
:::
#### `verifyFCrDNS`
Available in `bot` expressions.
```ts
function verifyFCrDNS(ip: string): bool;
function verifyFCrDNS(ip: string, pattern: string): bool;
```
`verifyFCrDNS` checks if the reverse DNS of an IP address matches its forward DNS. This is a common technique to filter out spam and bot traffic. `verifyFCrDNS` comes in two forms:
- `verifyFCrDNS(remoteAddress)` checks that the reverse DNS of the remote address resolves back to the remote address. If the IP has no PTR records, it returns `true`.
- `verifyFCrDNS(remoteAddress, pattern)` checks that the reverse DNS of the remote address matches the given pattern and that the matching name resolves back to the remote address.
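The forward-confirmation logic can be sketched with an injectable resolver so it is visible without real DNS. This mirrors the documented behavior but is not the actual implementation; the `resolver` type is an assumption:

```go
package main

import "fmt"

// resolver is a stand-in for real DNS so the logic is testable.
type resolver struct {
	ptr map[string][]string // ip -> PTR names
	a   map[string][]string // name -> forward addresses
}

// verifyFCrDNS mirrors the documented single-argument behavior:
// with no PTR records it returns true; otherwise some PTR name
// must resolve back to the original IP.
func verifyFCrDNS(r resolver, ip string) bool {
	names := r.ptr[ip]
	if len(names) == 0 {
		return true
	}
	for _, name := range names {
		for _, addr := range r.a[name] {
			if addr == ip {
				return true
			}
		}
	}
	return false
}

func main() {
	r := resolver{
		ptr: map[string][]string{"66.249.66.1": {"crawl-66-249-66-1.googlebot.com."}},
		a:   map[string][]string{"crawl-66-249-66-1.googlebot.com.": {"66.249.66.1"}},
	}
	fmt.Println(verifyFCrDNS(r, "66.249.66.1")) // → true
}
```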
This is best used in rules like this:
```yaml
- name: require-fcrdns-for-post
action: DENY
expression:
all:
- method == "POST"
- "!verifyFCrDNS(remoteAddress)"
```
Here is another example that allows requests from Telegram:
```yaml
- name: telegrambot
action: ALLOW
expression:
all:
- userAgent.matches("TelegramBot")
- verifyFCrDNS(remoteAddress, "ptr\\.telegram\\.org$")
```
## Life advice
Expressions are very powerful. This is a benefit and a burden. If you are not careful with your expression targeting, you will be liable to get yourself into trouble. If you are at all in doubt, throw a `CHALLENGE` over a `DENY`. Legitimate users can easily work around a `CHALLENGE` result with a [proof of work challenge](../../design/why-proof-of-work.mdx). Bots are less likely to be able to do this.
Anubis has the ability to let you import snippets of configuration into the main configuration file.
EG:
<Tabs>
<TabItem value="json" label="JSON">
```json
{
"bots": [
{
"import": "(data)/bots/ai-catchall.yaml"
},
{
"import": "(data)/bots/cloudflare-workers.yaml"
}
]
}
```
</TabItem>
<TabItem value="yaml" label="YAML" default>
```yaml
bots:
# Pathological bots to deny
- import: (data)/bots/_deny-pathological.yaml
- import: (data)/bots/cloudflare-workers.yaml
```
</TabItem>
</Tabs>
Of note, a bot rule can either have inline bot configuration or import a bot config snippet. You cannot do both in a single bot rule.
<Tabs>
<TabItem value="json" label="JSON">
```json
{
"bots": [
{
"import": "(data)/bots/ai-catchall.yaml",
"name": "generic-browser",
"user_agent_regex": "Mozilla|Opera\n",
"action": "CHALLENGE"
}
]
}
```
</TabItem>
<TabItem value="yaml" label="YAML" default>
```yaml
bots:
- import: (data)/bots/ai-catchall.yaml
name: generic-browser
user_agent_regex: >-
Mozilla|Opera
action: CHALLENGE
```
</TabItem>
</Tabs>
This will return an error.
You can also import from an imported file in case you want to import an entire folder of rules at once.
<Tabs>
<TabItem value="json" label="JSON">
```json
{
"bots": [
{
"import": "(data)/bots/_deny-pathological.yaml"
}
]
}
```
</TabItem>
<TabItem value="yaml" label="YAML" default>
```yaml
bots:
- import: (data)/bots/_deny-pathological.yaml
```
</TabItem>
</Tabs>
This lets you import an entire ruleset at once.
Here is an example snippet that allows [IPv6 Unique Local Addresses](https://en.wikipedia.org/wiki/Unique_local_address) through Anubis:
<Tabs>
<TabItem value="json" label="JSON">
```json
[
{
"name": "ipv6-ula",
"action": "ALLOW",
"remote_addresses": ["fc00::/7"]
}
]
```
</TabItem>
<TabItem value="yaml" label="YAML" default>
```yaml
- name: ipv6-ula
action: ALLOW
remote_addresses:
- fc00::/7
```
</TabItem>
</Tabs>
## Extracting Anubis' embedded filesystem
You can always extract the list of rules embedded into the Anubis binary.
```mermaid
sequenceDiagram
    participant Validation
    participant Evil Site
    Hacker->>+User: Click on example.org with this solution
    User->>+Validation: Here's a solution, send me to evilsite.com
    Validation->>+User: Here's a cookie, go to evilsite.com
    User->>+Evil Site: GET evilsite.com
```
## Configuring allowed redirect domains
By default, Anubis may redirect to any domain, which could cause security issues in the unlikely case that an attacker passes a challenge for your browser and then tricks you into clicking a link to your domain.
One can restrict the domains that Anubis can redirect to when passing a challenge by setting up `REDIRECT_DOMAINS` environment variable.
If you need to set more than one domain, fill the environment variable with a comma-separated list of domain names.
There is also glob matching support. You can pass `*.bugs.techaro.lol` to allow redirecting to anything ending with `.bugs.techaro.lol`. There is a limit of 4 wildcards.
:::note
If you are hosting Anubis on a non-standard port (`https://example:com:8443`, `http://www.example.net:8080`, etc.), you must also include the port number here.
:::
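A rough sketch of how such matching could work for simple `*.`-prefix patterns (illustrative only; the real implementation supports general glob patterns with up to 4 wildcards):

```go
package main

import (
	"fmt"
	"strings"
)

// domainAllowed checks a host against exact entries and "*." prefix
// wildcards. Illustrative sketch, not the actual Anubis code.
func domainAllowed(allowed []string, host string) bool {
	for _, pat := range allowed {
		if pat == host {
			return true
		}
		if strings.HasPrefix(pat, "*") && strings.HasSuffix(host, strings.TrimPrefix(pat, "*")) {
			return true
		}
	}
	return false
}

func main() {
	allowed := []string{"example.org", "*.bugs.techaro.lol"}
	fmt.Println(domainAllowed(allowed, "issue42.bugs.techaro.lol")) // → true
}
```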
```shell
# anubis.env
REDIRECT_DOMAINS="example.org,secretplans.example.org,*.test.example.org"
# ...
```
services:
anubis-nginx:
image: ghcr.io/techarohq/anubis:latest
environment:
REDIRECT_DOMAINS: "example.org,secretplans.example.org,*.test.example.org"
# ...
```
Inside your Deployment, StatefulSet, or Pod:
image: ghcr.io/techarohq/anubis:latest
env:
- name: REDIRECT_DOMAINS
value: "example.org,secretplans.example.org,*.test.example.org"
# ...
```
## Caddy
Anubis can be used with the [`forward_auth`](https://caddyserver.com/docs/caddyfile/directives/forward_auth) directive in Caddy.
First, the `TARGET` environment variable in Anubis must be set to a single space, e.g.:
<Tabs>
<TabItem value="env-file" label="Environment file" default>
```shell
# anubis.env
TARGET=" "
# ...
```
</TabItem>
<TabItem value="docker-compose" label="Docker Compose">
```yaml
services:
anubis-caddy:
image: ghcr.io/techarohq/anubis:latest
environment:
TARGET: " "
# ...
```
</TabItem>
<TabItem value="k8s" label="Kubernetes">
Inside your Deployment, StatefulSet, or Pod:
```yaml
- name: anubis
image: ghcr.io/techarohq/anubis:latest
env:
- name: TARGET
value: " "
# ...
```
</TabItem>
</Tabs>
Then configure the necessary directives in your site block:
```caddy
route {
# Assumption: Anubis is running in the same network namespace as
# caddy on localhost TCP port 8923
reverse_proxy /.within.website/* 127.0.0.1:8923
forward_auth 127.0.0.1:8923 {
uri /.within.website/x/cmd/anubis/api/check
trusted_proxies private_ranges
@unauthorized status 401
handle_response @unauthorized {
redir * /.within.website/?redir={uri} 307
}
}
}
```
If you want to use this for multiple sites, you can create a [snippet](https://caddyserver.com/docs/caddyfile/concepts#snippets) and import it in multiple site blocks.
thresholds:
challenge:
algorithm: metarefresh
difficulty: 1
- name: moderate-suspicion
expression:
challenge:
algorithm: fast
difficulty: 2
- name: extreme-suspicion
expression: weight >= 20
challenge:
algorithm: fast
difficulty: 4
```
This defines a suite of 4 thresholds:
challenge:
algorithm: metarefresh
difficulty: 1
```
:::note
These examples assume that you are using a setup where your Apache configuration is made up of a bunch of files in `/etc/httpd/conf.d/*.conf`. This is not true for all deployments of Apache. If you are not in such an environment, append these snippets to your `/etc/httpd/conf/httpd.conf` file.
:::
## Configuration
Assuming you are protecting `anubistest.techaro.lol`, you need the following server configuration blocks:
</VirtualHost>
# HTTPS listener that forwards to Anubis
<IfModule mod_proxy.c>
<VirtualHost *:443>
ServerAdmin your@email.here
ServerName anubistest.techaro.lol
DocumentRoot /var/www/anubistest.techaro.lol
ErrorLog /var/log/httpd/anubistest.techaro.lol_error.log
CustomLog /var/log/httpd/anubistest.techaro.lol_access.log combined
# Pass the remote IP to the proxied application instead of 127.0.0.1
# This requires mod_remoteip
RemoteIPHeader X-Real-IP
RemoteIPTrustedProxy 127.0.0.1/32
</VirtualHost>
```