Compare commits

...

14 Commits

Author SHA1 Message Date
Jason Cameron
33ea6c714f fix: replace checker.NewMapIterator with newMapIterator for HTTPHeaders and URLValues
Signed-off-by: Jason Cameron <jason.cameron@stanwith.me>
2026-02-18 12:47:15 -05:00
Jason Cameron
8e9b641280 fix: implement map iterators for HTTPHeaders and URLValues to resolve CEL internal errors
Signed-off-by: Jason Cameron <jason.cameron@stanwith.me>
2026-02-18 12:46:26 -05:00
Jason Cameron
d21c67f902 test: add unit tests for CELChecker map iteration
Signed-off-by: Jason Cameron <jason.cameron@stanwith.me>
2026-02-18 12:46:26 -05:00
Jason Cameron
19e82973af fix: enable CEL iterators
Signed-off-by: Jason Cameron <jason.cameron@stanwith.me>
2026-02-18 12:46:26 -05:00
Xe Iaso
35b5e78a0d chore: tag v1.25.0
Signed-off-by: Xe Iaso <me@xeiaso.net>
2026-02-18 15:56:28 +00:00
Martin
4e0df8c643 feat(docs): Add HAProxy Configurations to Docs (#1424)
* Add HAProxy docs

* Add changes to Changelog

* Add CodeBlock import to haproxy.mdc

* Fix typos

* Add exceptions to spelling
2026-02-15 10:32:32 -05:00
dependabot[bot]
c34ec67777 build(deps): bump the npm group across 1 directory with 2 updates (#1452)
Bumps the npm group with 2 updates in the / directory: [preact](https://github.com/preactjs/preact) and [esbuild](https://github.com/evanw/esbuild).


Updates `preact` from 10.28.2 to 10.28.3
- [Release notes](https://github.com/preactjs/preact/releases)
- [Commits](https://github.com/preactjs/preact/compare/10.28.2...10.28.3)

Updates `esbuild` from 0.27.2 to 0.27.3
- [Release notes](https://github.com/evanw/esbuild/releases)
- [Changelog](https://github.com/evanw/esbuild/blob/main/CHANGELOG.md)
- [Commits](https://github.com/evanw/esbuild/compare/v0.27.2...v0.27.3)

---
updated-dependencies:
- dependency-name: preact
  dependency-version: 10.28.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: npm
- dependency-name: esbuild
  dependency-version: 0.27.3
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: npm
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-15 10:32:07 -05:00
dependabot[bot]
61026976ec build(deps): bump the github-actions group across 1 directory with 6 updates (#1453)
Bumps the github-actions group with 6 updates in the / directory:

| Package | From | To |
| --- | --- | --- |
| [docker/login-action](https://github.com/docker/login-action) | `3.6.0` | `3.7.0` |
| [actions/attest-build-provenance](https://github.com/actions/attest-build-provenance) | `3.1.0` | `3.2.0` |
| [actions-hub/kubectl](https://github.com/actions-hub/kubectl) | `1.35.0` | `1.35.1` |
| [actions/cache](https://github.com/actions/cache) | `5.0.2` | `5.0.3` |
| [amannn/action-semantic-pull-request](https://github.com/amannn/action-semantic-pull-request) | `5.5.3` | `6.1.1` |
| [astral-sh/setup-uv](https://github.com/astral-sh/setup-uv) | `7.2.0` | `7.3.0` |



Updates `docker/login-action` from 3.6.0 to 3.7.0
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](5e57cd1181...c94ce9fb46)

Updates `actions/attest-build-provenance` from 3.1.0 to 3.2.0
- [Release notes](https://github.com/actions/attest-build-provenance/releases)
- [Changelog](https://github.com/actions/attest-build-provenance/blob/main/RELEASE.md)
- [Commits](00014ed6ed...96278af6ca)

Updates `actions-hub/kubectl` from 1.35.0 to 1.35.1
- [Release notes](https://github.com/actions-hub/kubectl/releases)
- [Commits](f6d776bd78...3ece3793e7)

Updates `actions/cache` from 5.0.2 to 5.0.3
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](8b402f58fb...cdf6c1fa76)

Updates `amannn/action-semantic-pull-request` from 5.5.3 to 6.1.1
- [Release notes](https://github.com/amannn/action-semantic-pull-request/releases)
- [Changelog](https://github.com/amannn/action-semantic-pull-request/blob/main/CHANGELOG.md)
- [Commits](0723387faa...48f256284b)

Updates `astral-sh/setup-uv` from 7.2.0 to 7.3.0
- [Release notes](https://github.com/astral-sh/setup-uv/releases)
- [Commits](61cb8a9741...eac588ad8d)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 3.7.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: github-actions
- dependency-name: actions/attest-build-provenance
  dependency-version: 3.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: github-actions
- dependency-name: actions-hub/kubectl
  dependency-version: 1.35.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: github-actions
- dependency-name: actions/cache
  dependency-version: 5.0.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: github-actions
- dependency-name: amannn/action-semantic-pull-request
  dependency-version: 6.1.1
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
- dependency-name: astral-sh/setup-uv
  dependency-version: 7.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-15 10:31:51 -05:00
Xe Iaso
189c5c021c chore: sync logo submissions (#1455)
* chore: sync logo submissions

Closes: #1447
Closes: #1438
Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: update spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2026-02-15 15:29:32 +00:00
Martin
dde186150b feat(docs): Add ANEXIA Sponsor logo (#1409)
* Add ANEXIA Sponsor logo

* Add changes to CHANGELOG.md

* Add missing words to spelling expect.txt

---------

Signed-off-by: Xe Iaso <xe.iaso@techaro.lol>
Co-authored-by: Xe Iaso <xe.iaso@techaro.lol>
2026-02-15 15:21:44 +00:00
Xe Iaso
a98f721957 docs: add AI coding tools policy (#1454)
* docs: add AI coding tools policy

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: remove symlinks

Signed-off-by: Xe Iaso <me@xeiaso.net>

* docs(AGENTS): make compatible with opencode

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: update spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2026-02-15 15:08:59 +00:00
hyperdefined
03f5e0d542 feat(apps): add updown.io policy (#1444) 2026-02-15 08:21:39 -05:00
Kurt McKee
b4f15a5d16 Fix a CI warning: "The set-output command is deprecated" (#1443) 2026-02-15 08:19:43 -05:00
Xe Iaso
bf5d66222c chore: set up commitlint, husky, and prettier (#1451)
* chore: add prettier configuration

Signed-off-by: Xe Iaso <me@xeiaso.net>

* format: run prettier tree-wide

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(prettier): ignore intentionally ungrammatical files

Signed-off-by: Xe Iaso <me@xeiaso.net>

* ci: add PR title lint rule

Signed-off-by: Xe Iaso <me@xeiaso.net>

* ci: add DCO check

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: add commitlint and husky

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: add CONTRIBUTING guidelines

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: set SKIP_INTEGRATION in precommit tests

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: update spelling

Signed-off-by: Xe Iaso <me@xeiaso.net>

* ci(dco): remove reopened trigger

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore: remove dead file

Signed-off-by: Xe Iaso <me@xeiaso.net>

* chore(prettier): don't format nginx includes

Signed-off-by: Xe Iaso <me@xeiaso.net>

---------

Signed-off-by: Xe Iaso <me@xeiaso.net>
2026-02-15 08:19:12 -05:00
191 changed files with 3523 additions and 1754 deletions

(modified file)

@@ -2,9 +2,7 @@
 // README at: https://github.com/devcontainers/templates/tree/main/src/debian
 {
   "name": "Dev",
-  "dockerComposeFile": [
-    "./docker-compose.yaml"
-  ],
+  "dockerComposeFile": ["./docker-compose.yaml"],
   "service": "workspace",
   "workspaceFolder": "/workspace/anubis",
   "postStartCommand": "bash ./.devcontainer/poststart.sh",

(modified file)

@@ -58,4 +58,3 @@ body:
     attributes:
       label: Additional context
       description: Add any other context about the problem here.
-

(modified file)

@@ -1,6 +1,6 @@
 name: Feature request
 description: Suggest an idea for this project
-title: '[Feature request] '
+title: "[Feature request] "
 body:
   - type: textarea

(modified file)

@@ -1,17 +1,17 @@
(whitespace-only change: prettier normalized the table's column padding; cell content is unchanged)

# check-spelling/check-spelling configuration

| File | Purpose | Format | Info |
| --- | --- | --- | --- |
| [dictionary.txt](dictionary.txt) | Replacement dictionary (creating this file will override the default dictionary) | one word per line | [dictionary](https://github.com/check-spelling/check-spelling/wiki/Configuration#dictionary) |
| [allow.txt](allow.txt) | Add words to the dictionary | one word per line (only letters and `'`s allowed) | [allow](https://github.com/check-spelling/check-spelling/wiki/Configuration#allow) |
| [reject.txt](reject.txt) | Remove words from the dictionary (after allow) | grep pattern matching whole dictionary words | [reject](https://github.com/check-spelling/check-spelling/wiki/Configuration-Examples%3A-reject) |
| [excludes.txt](excludes.txt) | Files to ignore entirely | perl regular expression | [excludes](https://github.com/check-spelling/check-spelling/wiki/Configuration-Examples%3A-excludes) |
| [only.txt](only.txt) | Only check matching files (applied after excludes) | perl regular expression | [only](https://github.com/check-spelling/check-spelling/wiki/Configuration-Examples%3A-only) |
| [patterns.txt](patterns.txt) | Patterns to ignore from checked lines | perl regular expression (order matters, first match wins) | [patterns](https://github.com/check-spelling/check-spelling/wiki/Configuration-Examples%3A-patterns) |
| [candidate.patterns](candidate.patterns) | Patterns that might be worth adding to [patterns.txt](patterns.txt) | perl regular expression with optional comment block introductions (all matches will be suggested) | [candidates](https://github.com/check-spelling/check-spelling/wiki/Feature:-Suggest-patterns) |
| [line_forbidden.patterns](line_forbidden.patterns) | Patterns to flag in checked lines | perl regular expression (order matters, first match wins) | [patterns](https://github.com/check-spelling/check-spelling/wiki/Configuration-Examples%3A-patterns) |
| [expect.txt](expect.txt) | Expected words that aren't in the dictionary | one word per line (sorted, alphabetically) | [expect](https://github.com/check-spelling/check-spelling/wiki/Configuration#expect) |
| [advice.md](advice.md) | Supplement for GitHub comment when unrecognized words are found | GitHub Markdown | [advice](https://github.com/check-spelling/check-spelling/wiki/Configuration-Examples%3A-advice) |

Note: you can replace any of these files with a directory by the same name (minus the suffix)
and then include multiple files inside that directory (with that suffix) to merge multiple files together.

(modified file)

@@ -2,30 +2,27 @@
 <details><summary>If the flagged items are :exploding_head: false positives</summary>
 If items relate to a ...
-* binary file (or some other file you wouldn't want to check at all).
+- binary file (or some other file you wouldn't want to check at all).
   Please add a file path to the `excludes.txt` file matching the containing file.
-  File paths are Perl 5 Regular Expressions - you can [test](
-  https://www.regexplanet.com/advanced/perl/) yours before committing to verify it will match your files.
+  File paths are Perl 5 Regular Expressions - you can [test](https://www.regexplanet.com/advanced/perl/) yours before committing to verify it will match your files.
-  `^` refers to the file's path from the root of the repository, so `^README\.md$` would exclude [README.md](
-  ../tree/HEAD/README.md) (on whichever branch you're using).
+  `^` refers to the file's path from the root of the repository, so `^README\.md$` would exclude [README.md](../tree/HEAD/README.md) (on whichever branch you're using).
-* well-formed pattern.
+- well-formed pattern.
-  If you can write a [pattern](
-  https://github.com/check-spelling/check-spelling/wiki/Configuration-Examples:-patterns
-  ) that would match it,
+  If you can write a [pattern](https://github.com/check-spelling/check-spelling/wiki/Configuration-Examples:-patterns) that would match it,
   try adding it to the `patterns.txt` file.
-  Patterns are Perl 5 Regular Expressions - you can [test](
-  https://www.regexplanet.com/advanced/perl/) yours before committing to verify it will match your lines.
+  Patterns are Perl 5 Regular Expressions - you can [test](https://www.regexplanet.com/advanced/perl/) yours before committing to verify it will match your lines.
   Note that patterns can't match multiline strings.
 </details>
 <!-- adoption information-->
 :steam_locomotive: If you're seeing this message and your PR is from a branch that doesn't have check-spelling,
 please merge to your PR's base branch to get the version configured for your repository.

(modified file)

@@ -24,3 +24,5 @@ iplist
 NArg
 blocklists
 rififi
+prolocation
+Prolocation

(modified file)

@@ -2,10 +2,12 @@ acs
 Actorified
 actorifiedstore
 actorify
+agentic
 Aibrew
 alibaba
 alrest
 amazonbot
+anexia
 anthro
 anubis
 anubistest
@@ -61,7 +63,9 @@ checkresult
 chibi
 cidranger
 ckie
+CLAUDE
 cloudflare
+cloudsolutions
 Codespaces
 confd
 containerbuild
@@ -74,6 +78,7 @@ Cscript
 daemonizing
 databento
 dayjob
+dco
 DDOS
 Debian
 debrpm
@@ -114,6 +119,7 @@ FCr
 fcrdns
 fediverse
 ffprobe
+fhdr
 financials
 finfos
 Firecrawl
@@ -134,6 +140,7 @@ gipc
 gitea
 GLM
 godotenv
+goimports
 goland
 gomod
 goodbot
@@ -150,6 +157,7 @@ grw
 gzw
 Hashcash
 hashrate
+hdr
 headermap
 healthcheck
 healthz
@@ -159,6 +167,7 @@ Hetzner
 hmc
 homelab
 hostable
+HSTS
 htmlc
 htmx
 httpdebug
@@ -325,6 +334,8 @@ stackoverflow
 startprecmd
 stoppostcmd
 storetest
+srcip
+strcmp
 subgrid
 subr
 subrequest
@@ -349,12 +360,14 @@ Timpibot
 TLog
 traefik
 trunc
+txn
 uberspace
 Unbreak
 unbreakdocker
 unifiedjs
 unmarshal
 unparseable
+updown
 uvx
 UXP
 valkey
@@ -367,6 +380,7 @@ VKE
 vnd
 VPS
 Vultr
+WAIFU
 weblate
 webmaster
 webpage
@@ -404,3 +418,4 @@ Zenos
 zizmor
 zombocom
 zos
+zst

(modified file)

@@ -24,10 +24,10 @@ jobs:
       - uses: actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # v6.2.0
         with:
-          node-version: '24.11.0'
+          node-version: "24.11.0"
       - uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
         with:
-          go-version: '1.25.4'
+          go-version: "1.25.4"
       - name: install node deps
         run: |

.github/workflows/dco-check.yaml (new file, 9 lines):

name: DCO Check

on: [pull_request]

jobs:
  dco_check:
    runs-on: ubuntu-latest
    steps:
      - uses: tisonkun/actions-dco@f1024cd563550b5632e754df11b7d30b73be54a5 # v1.1

(modified file)

@@ -28,10 +28,10 @@ jobs:
       - uses: actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # v6.2.0
         with:
-          node-version: '24.11.0'
+          node-version: "24.11.0"
       - uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
         with:
-          go-version: '1.25.4'
+          go-version: "1.25.4"
       - uses: ko-build/setup-ko@d006021bd0c28d1ce33a07e7943d48b079944c8d # v0.9

(modified file)

@@ -38,15 +38,15 @@ jobs:
       - uses: actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # v6.2.0
         with:
-          node-version: '24.11.0'
+          node-version: "24.11.0"
       - uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
         with:
-          go-version: '1.25.4'
+          go-version: "1.25.4"
       - uses: ko-build/setup-ko@d006021bd0c28d1ce33a07e7943d48b079944c8d # v0.9
       - name: Log into registry
-        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
+        uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
         with:
           registry: ghcr.io
           username: ${{ github.repository_owner }}
@@ -68,7 +68,7 @@ jobs:
           SLOG_LEVEL: debug
       - name: Generate artifact attestation
-        uses: actions/attest-build-provenance@00014ed6ed5efc5b1ab7f7f34a39eb55d41aa4f8 # v3.1.0
+        uses: actions/attest-build-provenance@96278af6caaf10aea03fd8d33a09a777ca52d62f # v3.2.0
         with:
           subject-name: ${{ env.IMAGE }}
           subject-digest: ${{ steps.build.outputs.digest }}

(modified file)

@@ -25,7 +25,7 @@ jobs:
         uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
       - name: Log into registry
-        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
+        uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
         with:
           registry: ghcr.io
           username: techarohq
@@ -53,14 +53,14 @@ jobs:
           push: true
       - name: Apply k8s manifests to limsa lominsa
-        uses: actions-hub/kubectl@f6d776bd78f4523e36d6c74d34f9941c242b2213 # v1.35.0
+        uses: actions-hub/kubectl@3ece3793e7a9fe94effe257d03ac834c815ea87d # v1.35.1
         env:
           KUBE_CONFIG: ${{ secrets.LIMSA_LOMINSA_KUBECONFIG }}
         with:
           args: apply -k docs/manifest
       - name: Apply k8s manifests to limsa lominsa
-        uses: actions-hub/kubectl@f6d776bd78f4523e36d6c74d34f9941c242b2213 # v1.35.0
+        uses: actions-hub/kubectl@3ece3793e7a9fe94effe257d03ac834c815ea87d # v1.35.1
         env:
           KUBE_CONFIG: ${{ secrets.LIMSA_LOMINSA_KUBECONFIG }}
         with:

(modified file)

@@ -19,7 +19,7 @@ jobs:
       - uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
         with:
-          go-version: '1.25.4'
+          go-version: "1.25.4"
       - name: Check go.mod and go.sum in main directory
         run: |

(modified file)

@@ -26,13 +26,13 @@ jobs:
       - uses: actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # v6.2.0
         with:
-          node-version: '24.11.0'
+          node-version: "24.11.0"
       - uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
         with:
-          go-version: '1.25.4'
+          go-version: "1.25.4"
       - name: Cache playwright binaries
-        uses: actions/cache@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
+        uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
         id: playwright-cache
         with:
           path: |

.github/workflows/lint-pr-title.yaml (new file, 19 lines):

name: "Lint PR"

on:
  pull_request_target:
    types:
      - opened
      - edited
      - synchronize

jobs:
  lint_pr_title:
    name: Validate PR title
    runs-on: ubuntu-latest
    permissions:
      pull-requests: read
    steps:
      - uses: amannn/action-semantic-pull-request@48f256284bd46cdaab1048c3721360e808335d50 # v6.1.1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

(modified file)

@@ -27,10 +27,10 @@ jobs:
       - uses: actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # v6.2.0
         with:
-          node-version: '24.11.0'
+          node-version: "24.11.0"
       - uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
         with:
-          go-version: '1.25.4'
+          go-version: "1.25.4"
       - name: install node deps
         run: |

(modified file)

@@ -28,10 +28,10 @@ jobs:
       - uses: actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # v6.2.0
         with:
-          node-version: '24.11.0'
+          node-version: "24.11.0"
       - uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
         with:
-          go-version: '1.25.4'
+          go-version: "1.25.4"
       - name: install node deps
         run: |

(modified file)

@@ -59,16 +59,16 @@ name: Check Spelling
 on:
   push:
     branches:
-      - '**'
+      - "**"
     tags-ignore:
-      - '**'
+      - "**"
   pull_request:
     branches:
-      - '**'
+      - "**"
     types:
-      - 'opened'
-      - 'reopened'
-      - 'synchronize'
+      - "opened"
+      - "reopened"
+      - "synchronize"
 jobs:
   spelling:

(modified file)

@@ -24,7 +24,7 @@ jobs:
           fetch-depth: 0
           persist-credentials: false
       - name: Log into registry
-        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
+        uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
         with:
           registry: ghcr.io
           username: ${{ github.repository_owner }}

(modified file)

@@ -37,7 +37,7 @@ jobs:
       - uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
         with:
-          go-version: '1.25.4'
+          go-version: "1.25.4"
       - name: Run CI
         run: go run ./utils/cmd/backoff-retry bash test/ssh-ci/rigging.sh ${{ matrix.host }}

(modified file)

@@ -3,10 +3,10 @@ name: zizmor
 on:
   push:
     paths:
-      - '.github/workflows/*.ya?ml'
+      - ".github/workflows/*.ya?ml"
   pull_request:
     paths:
-      - '.github/workflows/*.ya?ml'
+      - ".github/workflows/*.ya?ml"
 jobs:
   zizmor:
@@ -21,7 +21,7 @@ jobs:
           persist-credentials: false
       - name: Install the latest version of uv
-        uses: astral-sh/setup-uv@61cb8a9741eeb8a550a1b8544337180c0fc8476b # v7.2.0
+        uses: astral-sh/setup-uv@eac588ad8def6316056a12d4907a9d4d84ff7a3b # v7.3.0
       - name: Run zizmor 🌈
         run: uvx zizmor --format sarif . > results.sarif

.husky/commit-msg (new file, 8 lines):

npx --no-install commitlint --edit "$1"

# Check if commit message contains Signed-off-by line
if ! grep -q "^Signed-off-by:" "$1"; then
  echo "Commit message must contain a 'Signed-off-by:' line."
  echo "Please use 'git commit --signoff' or add a Signed-off-by line to your commit message."
  exit 1
fi

.husky/pre-commit (new file, 2 lines):

npm run lint
npm run test

.prettierignore (new file, 4 lines):

lib/config/testdata/bad/*
*.inc
AGENTS.md
CLAUDE.md

AGENTS.md (new file, 75 lines):
# Agent instructions
Primary agent documentation is in `CONTRIBUTING.md`. You MUST read this file before proceeding.
## Useful Commands
```shell
npm ci # install node dependencies
npm run assets # build JS/CSS (required before any Go build/test)
npm run build # assets + go build -> ./var/anubis
npm run dev # assets + run locally with --use-remote-address
```
## Testing
```shell
npm run test
```
## Linting
```shell
go vet ./...
go tool staticcheck ./...
go tool govulncheck ./...
```
## Commit Messages
Commit messages follow the [**Conventional Commits**](https://www.conventionalcommits.org/en/v1.0.0/) format:
```text
<type>[optional scope]: <description>
[optional body]
[optional footer(s)]
```
**Types**: `feat`, `fix`, `docs`, `style`, `refactor`, `perf`, `test`, `build`, `ci`, `chore`, `revert`
- Add `!` after type/scope for breaking changes or include `BREAKING CHANGE:` in the footer.
- Keep descriptions concise, imperative, lowercase, and without a trailing period.
- Reference issues/PRs in the footer when applicable.
- **ALL git commits MUST be made with `--signoff`.** This is mandatory.
### Attribution Requirements
AI agents must disclose what tool and model they are using in the "Assisted-by" commit footer:
```text
Assisted-by: [Model Name] via [Tool Name]
```
Example:
```text
Assisted-by: GLM 4.6 via Claude Code
```
## PR Checklist
- Add description of changes to `[Unreleased]` in `docs/docs/CHANGELOG.md`.
- Add test cases for bug fixes and behavior changes.
- Run integration tests: `npm run test:integration`.
- All commits must have verified (signed) signatures.
## Key Conventions
- **Security-first**: This is security software. Code reviews are strict. Always add tests for bug fixes. Consider adversarial inputs.
- **Configuration**: YAML-based policy files. Config structs validate via `Valid() error` methods returning sentinel errors.
- **Store interface**: `lib/store.Interface` abstracts key-value storage.
- **Environment variables**: Parsed from flags via `flagenv`. Use `.env` files locally (loaded by `godotenv/autoload`). Never commit `.env` files.
- **Assets must be built first**: JS/CSS assets are embedded into the Go binary. Always run `npm run assets` before `go test` or `go build`.
- **CEL expressions**: Policy rules support CEL (Common Expression Language) expressions for advanced matching. See `lib/policy/expressions/`.
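The commit conventions above (a Conventional Commits message plus a mandatory `--signoff`) can be exercised in a throwaway repository. This is only an illustration: the identity, file, and message below are made up, and none of it touches the Anubis repo itself.

```shell
#!/bin/sh
# Scratch repo: demonstrate a Conventional Commits message with --signoff.
tmp="$(mktemp -d)"
cd "$tmp"
git -c init.defaultBranch=main init -q
git config user.name "Example Dev"       # hypothetical identity
git config user.email "dev@example.com"
echo "placeholder" > README.md
git add README.md
git commit -q --signoff -m "docs: add placeholder README"
# --signoff appends the trailer that the .husky/commit-msg hook greps for:
git log -1 --format=%B | grep "^Signed-off-by:"
```

The final command prints the `Signed-off-by:` trailer git appended, which is exactly the line the DCO check and the commit-msg hook look for.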

CLAUDE.md (new file, 2 lines):

@AGENTS.md
@CONTRIBUTING.md

CONTRIBUTING.md (new file, 144 lines):
# Contributing to Anubis
Anubis is a Web AI Firewall Utility (WAIFU) written in Go. It uses sha256 proof-of-work challenges to protect upstream HTTP resources from scraper bots. This is security software -- correctness matters.
## Build & Run
Prerequisites: Go 1.24+, Node.js (any supported version), esbuild, gzip, zstd, brotli. Install all with `brew bundle` if you are using Homebrew.
```shell
npm ci # install node dependencies
npm run assets # build JS/CSS (required before any Go build/test)
npm run build # assets + go build -> ./var/anubis
npm run dev # assets + run locally with --use-remote-address
```
## Testing
```shell
# Run all unit tests (assets must be built first)
npm run test # or: make test
# Run a single test by name
go test -run TestClampIP ./internal/
# Run a single test file's package
go test ./lib/config/
# Run tests with verbose output
go test -v -run TestBotValid ./lib/config/
```
### Smoke tests
The `test` folder contains "smoke tests" that set up Anubis in production-adjacent settings and test it against real infrastructure tools. A smoke test is a folder containing a `test.sh` that sets up infrastructure, validates the behaviour, and then tears it down. Smoke tests run in GitHub Actions via `.github/workflows/smoke-tests.yaml`.
## Linting
```shell
go vet ./...
go tool staticcheck ./...
go tool govulncheck ./...
```
## Code Generation
The project uses `go generate` for templ templates and stringer. Always run `npm run generate` (or `make assets`) before building or testing. Generated files include:
- `web/*.templ` -> templ-generated Go code
- `web/static/` -> bundled/minified JS and CSS (with .gz, .zst, .br variants)
## Project Layout
Important folders:
- `cmd/anubis`: Main entrypoint for the project. This is the program that runs on servers.
- `lib/*`: The core library for Anubis and all of its features. This is internal code that is made public for ease of downstream consumption. No API stability is guaranteed. Use at your own risk.
- `internal/*`: Actual internal code that is private to the implementation of Anubis. If you need to use a package in this, please copy it out and manually vendor it in your own project.
- `test/*`: Smoke tests (see the dedicated section for details).
- `web`: Frontend HTML templates.
- `xess`: Frontend CSS framework and build logic.
## Code Style
### Go
This project follows the idioms of the Go standard library. Generally follow the patterns that upstream Go uses, including:
- Prefer packages from the standard library unless there is no other option.
- Use package import aliases only when package names collide.
- Use `goimports` to format code. Run with `npm run format`.
- Use sentinel errors as package-level variables prefixed with `Err` (such as `ErrBotMustHaveName`). Wrap with `fmt.Errorf("package: small message giving context: %w", err)`.
- Use `log/slog` for structured logging. Pass loggers as arguments to functions. Use `lg.With` to preload with context. Prefer `slog.Debug` unless you absolutely need to report messages to users; some users have magical thinking about log verbosity.
- Name PublicFunctionsAndTypes in PascalCase. Name privateFunctionsAndTypes in camelCase.
- Acronyms stay uppercase (`URL`, `HTTP`, `IP`, `DNS`, etc.)
- Enumerations should use strong types with validation logic for parsing remote input.
- Be conservative in what you send but liberal in what you accept.
- Anything reading configuration values should use both `json` and `yaml` struct tags. Use pointer values for optional configuration values.
- Use [table-driven tests](https://go.dev/wiki/TableDrivenTests) when writing test code.
- Use [`t.Helper()`](https://pkg.go.dev/testing#T.Helper) in helper code (setup/teardown scaffolding).
- Use [`t.Cleanup()`](https://pkg.go.dev/testing#T.Cleanup) to tear down per-test or per-suite scaffolding.
- Use [`errors.Is`](https://pkg.go.dev/errors#Is) for validating function results against sentinel errors.
- Prefer same-package tests over black-box tests (`_test` packages).
### JavaScript / TypeScript
- Source lives in `web/js/`. Built with esbuild, bundled and minified.
- Uses Preact (not React).
- No linter config. Keep functions small. Use `const` by default.
### Templ Templates
Anubis uses [Templ](https://templ.guide) for generating HTML on the server.
- `.templ` files in `web/` generate Go code. Run `go generate ./...` (or `npm run assets`) after modifying them.
- Templates receive typed Go parameters. Keep logic in Go, not templates.
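As a shape reference, a minimal templ component might look like this (the component name is hypothetical; the point is that it takes a typed Go parameter and any decision logic stays in Go):

```templ
package web

// errorMessage renders a typed string parameter; `go generate` turns
// this into ordinary Go code.
templ errorMessage(msg string) {
	<div class="error">
		<p>{ msg }</p>
	</div>
}
```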
## Commit Messages
Commit messages follow the [**Conventional Commits**](https://www.conventionalcommits.org/en/v1.0.0/) format:
```text
<type>[optional scope]: <description>
[optional body]
[optional footer(s)]
```
**Types**: `feat`, `fix`, `docs`, `style`, `refactor`, `perf`, `test`, `build`, `ci`, `chore`, `revert`
- Add `!` after type/scope for breaking changes or include `BREAKING CHANGE:` in the footer.
- Keep descriptions concise, imperative, lowercase, and without a trailing period.
- Reference issues/PRs in the footer when applicable.
- **ALL git commits MUST be made with `--signoff`.** This is mandatory.
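Putting these rules together, a complete commit message might look like this (all values illustrative):

```text
fix(lib/policy): reject rules with empty remote_addresses

Without this check, an empty list silently matched nothing.

Signed-off-by: Jane Developer <jane@example.com>
```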
### Attribution Requirements
AI agents must disclose what tool and model they are using in the "Assisted-by" commit footer:
```text
Assisted-by: [Model Name] via [Tool Name]
```
Example:
```text
Assisted-by: GLM 4.6 via Claude Code
```
## PR Checklist
- Add description of changes to `[Unreleased]` in `docs/docs/CHANGELOG.md`.
- Add test cases for bug fixes and behavior changes.
- Run integration tests: `npm run test:integration`.
- All commits must have verified (signed) signatures.
## Key Conventions
- **Security-first**: This is security software. Code reviews are strict. Always add tests for bug fixes. Consider adversarial inputs.
- **Configuration**: YAML-based policy files. Config structs validate via `Valid() error` methods returning sentinel errors.
- **Store interface**: `lib/store.Interface` abstracts key-value storage.
- **Environment variables**: Parsed from flags via `flagenv`. Use `.env` files locally (loaded by `godotenv/autoload`). Never commit `.env` files.
- **Assets must be built first**: JS/CSS assets are embedded into the Go binary. Always run `npm run assets` before `go test` or `go build`.
- **CEL expressions**: Policy rules support CEL (Common Expression Language) expressions for advanced matching. See `lib/policy/expressions/`.

@@ -29,6 +29,12 @@ Anubis is brought to you by sponsors and donors like:
   <a href="https://distrust.co?utm_campaign=github&utm_medium=referral&utm_content=anubis">
     <img src="./docs/static/img/sponsors/distrust-logo.webp" alt="Distrust" height="64">
   </a>
+  <a href="https://about.gitea.com?utm_campaign=github&utm_medium=referral&utm_content=anubis">
+    <img src="./docs/static/img/sponsors/gitea-logo.webp" alt="Gitea" height="64">
+  </a>
+  <a href="https://prolocation.net?utm_campaign=github&utm_medium=referral&utm_content=anubis">
+    <img src="./docs/static/img/sponsors/prolocation-logo.svg" alt="Prolocation" height="64">
+  </a>
   <a href="https://terminaltrove.com/?utm_campaign=github&utm_medium=referral&utm_content=anubis&utm_source=abgh">
     <img src="./docs/static/img/sponsors/terminal-trove.webp" alt="Terminal Trove" height="64">
   </a>
@@ -58,6 +64,9 @@ Anubis is brought to you by sponsors and donors like:
     height="64"
   />
 </a>
+<a href="https://www.anexia.com/">
+  <img src="./docs/static/img/sponsors/anexia-cloudsolutions-logo.webp" alt="ANEXIA Cloud Solutions" height="64">
+</a>

 ## Overview

@@ -1 +1 @@
-1.24.0
+1.25.0

@@ -159,5 +159,8 @@ func run(command string) (string, error) {
 }

 func setOutput(key, val string) {
-	fmt.Printf("::set-output name=%s::%s\n", key, val)
+	github_output := os.Getenv("GITHUB_OUTPUT")
+	f, _ := os.OpenFile(github_output, os.O_WRONLY|os.O_APPEND|os.O_CREATE, 0644)
+	fmt.Fprintf(f, "%s=%s\n", key, val)
+	f.Close()
 }

@@ -4,7 +4,4 @@
   user_agent_regex: MistralAI-User/.+; \+https\://docs\.mistral\.ai/robots
   action: ALLOW
   # https://mistral.ai/mistralai-user-ips.json
-  remote_addresses: [
-    "20.240.160.161/32",
-    "20.240.160.1/32",
-  ]
+  remote_addresses: ["20.240.160.161/32", "20.240.160.1/32"]

@@ -5,7 +5,8 @@
   action: ALLOW
   # https://openai.com/chatgpt-user.json
   # curl 'https://openai.com/chatgpt-user.json' | jq '.prefixes.[].ipv4Prefix' | sed 's/$/,/'
-  remote_addresses: [
+  remote_addresses:
+    [
     "13.65.138.112/28",
     "23.98.179.16/28",
     "13.65.138.96/28",

@@ -4,9 +4,5 @@
   user_agent_regex: Perplexity-User/.+; \+https\://perplexity\.ai/perplexity-user
   action: ALLOW
   # https://www.perplexity.com/perplexity-user.json
-  remote_addresses: [
-    "44.208.221.197/32",
-    "34.193.163.52/32",
-    "18.97.21.0/30",
-    "18.97.43.80/29",
-  ]
+  remote_addresses:
+    ["44.208.221.197/32", "34.193.163.52/32", "18.97.21.0/30", "18.97.43.80/29"]

@@ -4,7 +4,8 @@
   user_agent_regex: Applebot
   action: ALLOW
   # https://search.developer.apple.com/applebot.json
-  remote_addresses: [
+  remote_addresses:
+    [
     "17.241.208.160/27",
     "17.241.193.160/27",
     "17.241.200.160/27",

@@ -2,7 +2,8 @@
   user_agent_regex: \+http\://www\.bing\.com/bingbot\.htm
   action: ALLOW
   # https://www.bing.com/toolbox/bingbot.json
-  remote_addresses: [
+  remote_addresses:
+    [
     "157.55.39.0/24",
     "207.46.13.0/24",
     "40.77.167.0/24",
@@ -30,5 +31,5 @@
     "20.74.197.0/28",
     "20.15.133.160/27",
     "40.77.177.0/24",
-    "40.77.178.0/23"
+    "40.77.178.0/23",
   ]

@@ -2,7 +2,8 @@
   user_agent_regex: DuckDuckBot/1\.1; \(\+http\://duckduckgo\.com/duckduckbot\.html\)
   action: ALLOW
   # https://duckduckgo.com/duckduckgo-help-pages/results/duckduckbot
-  remote_addresses: [
+  remote_addresses:
+    [
     "57.152.72.128/32",
     "51.8.253.152/32",
     "40.80.242.63/32",
@@ -271,5 +272,5 @@
     "4.213.46.14/32",
     "172.169.17.165/32",
     "51.8.71.117/32",
-    "20.3.1.178/32"
+    "20.3.1.178/32",
   ]

@@ -2,7 +2,8 @@
   user_agent_regex: \+http\://www\.google\.com/bot\.html
   action: ALLOW
   # https://developers.google.com/static/search/apis/ipranges/googlebot.json
-  remote_addresses: [
+  remote_addresses:
+    [
     "2001:4860:4801:10::/64",
     "2001:4860:4801:11::/64",
     "2001:4860:4801:12::/64",
@@ -259,5 +260,5 @@
     "66.249.79.224/27",
     "66.249.79.32/27",
     "66.249.79.64/27",
-    "66.249.79.96/27"
+    "66.249.79.96/27",
   ]

@@ -1,8 +1,4 @@
 - name: internet-archive
   action: ALLOW
   # https://ipinfo.io/AS7941
-  remote_addresses: [
-    "207.241.224.0/20",
-    "208.70.24.0/21",
-    "2620:0:9c0::/48"
-  ]
+  remote_addresses: ["207.241.224.0/20", "208.70.24.0/21", "2620:0:9c0::/48"]

@@ -2,9 +2,10 @@
   user_agent_regex: \+https\://kagi\.com/bot
   action: ALLOW
   # https://kagi.com/bot
-  remote_addresses: [
+  remote_addresses:
+    [
     "216.18.205.234/32",
     "35.212.27.76/32",
     "104.254.65.50/32",
-    "209.151.156.194/32"
+    "209.151.156.194/32",
   ]

@@ -2,10 +2,11 @@
   user_agent_regex: search\.marginalia\.nu
   action: ALLOW
   # Received directly over email
-  remote_addresses: [
+  remote_addresses:
+    [
     "193.183.0.162/31",
     "193.183.0.164/30",
     "193.183.0.168/30",
     "193.183.0.172/31",
-    "193.183.0.174/32"
+    "193.183.0.174/32",
   ]

@@ -4,7 +4,8 @@
   user_agent_regex: GPTBot/1\.1; \+https\://openai\.com/gptbot
   action: ALLOW
   # https://openai.com/gptbot.json
-  remote_addresses: [
+  remote_addresses:
+    [
     "52.230.152.0/24",
     "20.171.206.0/24",
     "20.171.207.0/24",

@@ -4,10 +4,11 @@
   user_agent_regex: OAI-SearchBot/1\.0; \+https\://openai\.com/searchbot
   action: ALLOW
   # https://openai.com/searchbot.json
-  remote_addresses: [
+  remote_addresses:
+    [
     "20.42.10.176/28",
     "172.203.190.128/28",
     "104.210.140.128/28",
     "51.8.102.0/24",
-    "135.234.64.0/24"
+    "135.234.64.0/24",
   ]

@@ -4,7 +4,8 @@
   user_agent_regex: PerplexityBot/.+; \+https\://perplexity\.ai/perplexitybot
   action: ALLOW
   # https://www.perplexity.com/perplexitybot.json
-  remote_addresses: [
+  remote_addresses:
+    [
     "107.20.236.150/32",
     "3.224.62.45/32",
     "18.210.92.235/32",

data/services/updown.yaml Normal file

@@ -0,0 +1,26 @@
# https://updown.io/about
- name: updown
user_agent_regex: updown.io
action: ALLOW
remote_addresses: [
"45.32.74.41/32",
"104.238.136.194/32",
"192.99.37.47/32",
"91.121.222.175/32",
"104.238.159.87/32",
"102.212.60.78/32",
"135.181.102.135/32",
"45.32.107.181/32",
"45.76.104.117/32",
"45.63.29.207/32",
"2001:19f0:6001:2c6::1/128",
"2001:19f0:9002:11a::1/128",
"2607:5300:60:4c2f::1/128",
"2001:41d0:2:85af::1/128",
"2001:19f0:6c01:145::1/128",
"2c0f:c40:4003:4::2/128",
"2a01:4f9:c010:d5f9::1/128",
"2001:19f0:4400:402e::1/128",
"2001:19f0:7001:45a::1/128",
"2001:19f0:5801:1d8::1/128"
]

@@ -2,7 +2,8 @@
   user_agent_regex: UptimeRobot
   action: ALLOW
   # https://api.uptimerobot.com/meta/ips
-  remote_addresses: [
+  remote_addresses:
+    [
     "3.12.251.153/32",
     "3.20.63.178/32",
     "3.77.67.4/32",


@@ -1,14 +1,16 @@
import React, { useState, useEffect, useMemo } from 'react'; import React, { useState, useEffect, useMemo } from "react";
import styles from './styles.module.css'; import styles from "./styles.module.css";
// A helper function to perform SHA-256 hashing. // A helper function to perform SHA-256 hashing.
// It takes a string, encodes it, hashes it, and returns a hex string. // It takes a string, encodes it, hashes it, and returns a hex string.
async function sha256(message) { async function sha256(message) {
try { try {
const msgBuffer = new TextEncoder().encode(message); const msgBuffer = new TextEncoder().encode(message);
const hashBuffer = await crypto.subtle.digest('SHA-256', msgBuffer); const hashBuffer = await crypto.subtle.digest("SHA-256", msgBuffer);
const hashArray = Array.from(new Uint8Array(hashBuffer)); const hashArray = Array.from(new Uint8Array(hashBuffer));
const hashHex = hashArray.map(b => b.toString(16).padStart(2, '0')).join(''); const hashHex = hashArray
.map((b) => b.toString(16).padStart(2, "0"))
.join("");
return hashHex; return hashHex;
} catch (error) { } catch (error) {
console.error("Hashing failed:", error); console.error("Hashing failed:", error);
@@ -21,21 +23,42 @@ const generateRandomHex = (bytes = 16) => {
const buffer = new Uint8Array(bytes); const buffer = new Uint8Array(bytes);
crypto.getRandomValues(buffer); crypto.getRandomValues(buffer);
return Array.from(buffer) return Array.from(buffer)
.map(byte => byte.toString(16).padStart(2, '0')) .map((byte) => byte.toString(16).padStart(2, "0"))
.join(''); .join("");
}; };
// Icon components for better visual feedback // Icon components for better visual feedback
const CheckIcon = () => ( const CheckIcon = () => (
<svg xmlns="http://www.w3.org/2000/svg" className={styles.iconGreen} fill="none" viewBox="0 0 24 24" stroke="currentColor"> <svg
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M9 12l2 2 4-4m6 2a9 9 0 11-18 0 9 9 0 0118 0z" /> xmlns="http://www.w3.org/2000/svg"
className={styles.iconGreen}
fill="none"
viewBox="0 0 24 24"
stroke="currentColor"
>
<path
strokeLinecap="round"
strokeLinejoin="round"
strokeWidth={2}
d="M9 12l2 2 4-4m6 2a9 9 0 11-18 0 9 9 0 0118 0z"
/>
</svg> </svg>
); );
const XCircleIcon = () => ( const XCircleIcon = () => (
<svg xmlns="http://www.w3.org/2000/svg" className={styles.iconRed} fill="none" viewBox="0 0 24 24" stroke="currentColor"> <svg
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M10 14l2-2m0 0l2-2m-2 2l-2-2m2 2l2 2m7-2a9 9 0 11-18 0 9 9 0 0118 0z" /> xmlns="http://www.w3.org/2000/svg"
className={styles.iconRed}
fill="none"
viewBox="0 0 24 24"
stroke="currentColor"
>
<path
strokeLinecap="round"
strokeLinejoin="round"
strokeWidth={2}
d="M10 14l2-2m0 0l2-2m-2 2l-2-2m2 2l2 2m7-2a9 9 0 11-18 0 9 9 0 0118 0z"
/>
</svg> </svg>
); );
@@ -46,7 +69,7 @@ export default function App() {
// State for the nonce, which is the variable we can change // State for the nonce, which is the variable we can change
const [nonce, setNonce] = useState(0); const [nonce, setNonce] = useState(0);
// State to store the resulting hash // State to store the resulting hash
const [hash, setHash] = useState(''); const [hash, setHash] = useState("");
// A flag to indicate if the current hash is the "winning" one // A flag to indicate if the current hash is the "winning" one
const [isMining, setIsMining] = useState(false); const [isMining, setIsMining] = useState(false);
const [isFound, setIsFound] = useState(false); const [isFound, setIsFound] = useState(false);
@@ -55,7 +78,10 @@ export default function App() {
const difficulty = "00"; const difficulty = "00";
// Memoize the combined data to avoid recalculating on every render // Memoize the combined data to avoid recalculating on every render
const combinedData = useMemo(() => `${challenge}${nonce}`, [challenge, nonce]); const combinedData = useMemo(
() => `${challenge}${nonce}`,
[challenge, nonce],
);
// This effect hook recalculates the hash whenever the combinedData changes. // This effect hook recalculates the hash whenever the combinedData changes.
useEffect(() => { useEffect(() => {
@@ -68,7 +94,9 @@ export default function App() {
} }
}; };
calculateHash(); calculateHash();
return () => { isMounted = false; }; return () => {
isMounted = false;
};
}, [combinedData, difficulty]); }, [combinedData, difficulty]);
// This effect handles the automatic mining process // This effect handles the automatic mining process
@@ -93,7 +121,7 @@ export default function App() {
// Update the UI periodically to avoid freezing the browser // Update the UI periodically to avoid freezing the browser
if (miningNonce % 100 === 0) { if (miningNonce % 100 === 0) {
setNonce(miningNonce); setNonce(miningNonce);
await new Promise(resolve => setTimeout(resolve, 0)); // Yield to the browser await new Promise((resolve) => setTimeout(resolve, 0)); // Yield to the browser
} }
} }
}; };
@@ -102,28 +130,27 @@ export default function App() {
return () => { return () => {
continueMining = false; continueMining = false;
} };
}, [isMining, challenge, nonce, difficulty]); }, [isMining, challenge, nonce, difficulty]);
const handleMineClick = () => { const handleMineClick = () => {
setIsMining(true); setIsMining(true);
} };
const handleStopClick = () => { const handleStopClick = () => {
setIsMining(false); setIsMining(false);
} };
const handleResetClick = () => { const handleResetClick = () => {
setIsMining(false); setIsMining(false);
setNonce(0); setNonce(0);
} };
const handleNewChallengeClick = () => { const handleNewChallengeClick = () => {
setIsMining(false); setIsMining(false);
setChallenge(generateRandomHex(16)); setChallenge(generateRandomHex(16));
setNonce(0); setNonce(0);
} };
// Helper to render the hash with colored leading characters // Helper to render the hash with colored leading characters
const renderHash = () => { const renderHash = () => {
@@ -153,12 +180,46 @@ export default function App() {
<div className={styles.block}> <div className={styles.block}>
<h2 className={styles.blockTitle}>2. Nonce</h2> <h2 className={styles.blockTitle}>2. Nonce</h2>
<div className={styles.nonceControls}> <div className={styles.nonceControls}>
<button onClick={() => setNonce(n => n - 1)} disabled={isMining} className={styles.nonceButton}> <button
<svg xmlns="http://www.w3.org/2000/svg" className={styles.iconSmall} fill="none" viewBox="0 0 24 24" stroke="currentColor"><path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M20 12H4" /></svg> onClick={() => setNonce((n) => n - 1)}
disabled={isMining}
className={styles.nonceButton}
>
<svg
xmlns="http://www.w3.org/2000/svg"
className={styles.iconSmall}
fill="none"
viewBox="0 0 24 24"
stroke="currentColor"
>
<path
strokeLinecap="round"
strokeLinejoin="round"
strokeWidth={2}
d="M20 12H4"
/>
</svg>
</button> </button>
<span className={styles.nonceValue}>{nonce}</span> <span className={styles.nonceValue}>{nonce}</span>
<button onClick={() => setNonce(n => n + 1)} disabled={isMining} className={styles.nonceButton}> <button
<svg xmlns="http://www.w3.org/2000/svg" className={styles.iconSmall} fill="none" viewBox="0 0 24 24" stroke="currentColor"><path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M12 4v16m8-8H4" /></svg> onClick={() => setNonce((n) => n + 1)}
disabled={isMining}
className={styles.nonceButton}
>
<svg
xmlns="http://www.w3.org/2000/svg"
className={styles.iconSmall}
fill="none"
viewBox="0 0 24 24"
stroke="currentColor"
>
<path
strokeLinecap="round"
strokeLinejoin="round"
strokeWidth={2}
d="M12 4v16m8-8H4"
/>
</svg>
</button> </button>
</div> </div>
</div> </div>
@@ -172,13 +233,26 @@ export default function App() {
{/* Arrow pointing down */} {/* Arrow pointing down */}
<div className={styles.arrowContainer}> <div className={styles.arrowContainer}>
<svg xmlns="http://www.w3.org/2000/svg" className={styles.iconGray} fill="none" viewBox="0 0 24 24" stroke="currentColor"> <svg
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M19 14l-7 7m0 0l-7-7m7 7V3" /> xmlns="http://www.w3.org/2000/svg"
className={styles.iconGray}
fill="none"
viewBox="0 0 24 24"
stroke="currentColor"
>
<path
strokeLinecap="round"
strokeLinejoin="round"
strokeWidth={2}
d="M19 14l-7 7m0 0l-7-7m7 7V3"
/>
</svg> </svg>
</div> </div>
{/* Hash Output Block */} {/* Hash Output Block */}
<div className={`${styles.hashContainer} ${isFound ? styles.hashContainerSuccess : styles.hashContainerError}`}> <div
className={`${styles.hashContainer} ${isFound ? styles.hashContainerSuccess : styles.hashContainerError}`}
>
<div className={styles.hashContent}> <div className={styles.hashContent}>
<div className={styles.hashText}> <div className={styles.hashText}>
<h2 className={styles.blockTitle}>4. Resulting Hash (SHA-256)</h2> <h2 className={styles.blockTitle}>4. Resulting Hash (SHA-256)</h2>
@@ -193,18 +267,30 @@ export default function App() {
{/* Mining Controls */} {/* Mining Controls */}
<div className={styles.buttonContainer}> <div className={styles.buttonContainer}>
{!isMining ? ( {!isMining ? (
<button onClick={handleMineClick} className={`${styles.button} ${styles.buttonCyan}`}> <button
onClick={handleMineClick}
className={`${styles.button} ${styles.buttonCyan}`}
>
Auto-Mine Auto-Mine
</button> </button>
) : ( ) : (
<button onClick={handleStopClick} className={`${styles.button} ${styles.buttonYellow}`}> <button
onClick={handleStopClick}
className={`${styles.button} ${styles.buttonYellow}`}
>
Stop Mining Stop Mining
</button> </button>
)} )}
<button onClick={handleNewChallengeClick} className={`${styles.button} ${styles.buttonIndigo}`}> <button
onClick={handleNewChallengeClick}
className={`${styles.button} ${styles.buttonIndigo}`}
>
New Challenge New Challenge
</button> </button>
<button onClick={handleResetClick} className={`${styles.button} ${styles.buttonGray}`}> <button
onClick={handleResetClick}
className={`${styles.button} ${styles.buttonGray}`}
>
Reset Nonce Reset Nonce
</button> </button>
</div> </div>

@@ -48,7 +48,9 @@
   background-color: rgb(31 41 55);
   padding: 1.5rem;
   border-radius: 0.5rem;
-  box-shadow: 0 10px 15px -3px rgb(0 0 0 / 0.1), 0 4px 6px -4px rgb(0 0 0 / 0.1);
+  box-shadow:
+    0 10px 15px -3px rgb(0 0 0 / 0.1),
+    0 4px 6px -4px rgb(0 0 0 / 0.1);
   height: 100%;
   display: flex;
   flex-direction: column;
@@ -158,7 +160,9 @@
 .hashContainer {
   padding: 1.5rem;
   border-radius: 0.5rem;
-  box-shadow: 0 10px 15px -3px rgb(0 0 0 / 0.1), 0 4px 6px -4px rgb(0 0 0 / 0.1);
+  box-shadow:
+    0 10px 15px -3px rgb(0 0 0 / 0.1),
+    0 4px 6px -4px rgb(0 0 0 / 0.1);
   transition: all 300ms;
   border: 2px solid;
 }

@@ -11,12 +11,39 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 ## [Unreleased]

+<!-- This changes the project to: -->
+
+- Fix CEL internal errors when iterating `headers`/`query` map wrappers by implementing map iterators for `HTTPHeaders` and `URLValues` ([#1465](https://github.com/TecharoHQ/anubis/pull/1465)).
+
+## v1.25.0: Necron
+
+Hey all,
+
+I'm sure you've all been aware that things have been slowing down a little with Anubis development, and I want to apologize for that. A lot has been going on in my life lately (my blog will have a post out on Friday with more information), and as a result I haven't really had the energy to work on Anubis in publicly visible ways. There are things going on behind the scenes, but nothing is really shippable yet, sorry!
+
+I've also been feeling some burnout in the wake of perennial waves of anger directed towards me. I'm handling it, I'll be fine, I've just had a lot going on in my life and it's been rough.
+
+I've been missing the sense of wanderlust and discovery that comes with the artistic way I playfully develop software. I suspect that some of the stresses I've been through (setting up a complicated surgery in a country whose language you aren't fluent in is kind of an experience) have been sapping my energy. I'm gonna try to mess with things on my break, but realistically I'm probably just gonna be either watching Stargate SG-1 or doing unreasonable amounts of ocean fishing in Final Fantasy 14. Normally I'd love to keep the details about my medical state fairly private, but I'm more of a public figure now than I was this time last year, so I don't really get the invisibility I'm used to for this.
+
+I've also had a fair amount of negativity directed at me for simply being much more visible than the anonymous threat actors running the scrapers that are ruining everything, which, though understandable, has not helped.
+
+Anyways, it all worked out and I'm about to be in the hospital for a week, so if things go really badly with this release please downgrade to the last version and/or upgrade to the main branch when the fix PR is inevitably merged. I hoped to have time to tame GPG and set up full release automation in the Anubis repo, but that didn't work out this time and that's okay.
+
+If I can challenge you all to do something, go out there and try to actually create something new somehow. Combine ideas you've never mixed before. Be creative, be human, make something purely for yourself to scratch an itch that you've always had yet never gotten around to actually mending.
+
+At the very least, try to be an example of how you want other people to act, even when you're in a situation where software written by someone else is configured to require a user agent to execute javascript to access a webpage.
+
+Be well,
+
+Xe
+
+PS: if you're well-versed in FFXIV lore, the release title should give you an idea of the kind of stuff I've been going through mentally.
+
 - Add iplist2rule tool that lets admins turn an IP address blocklist into an Anubis ruleset.
 - Add Polish locale ([#1292](https://github.com/TecharoHQ/anubis/pull/1309))
 - Fix honeypot and imprint links missing `BASE_PREFIX` when deployed behind a path prefix ([#1402](https://github.com/TecharoHQ/anubis/issues/1402))
+- Add ANEXIA Sponsor logo to docs ([#1409](https://github.com/TecharoHQ/anubis/pull/1409))
 - Improve idle performance in memory storage
+- Add HAProxy Configurations to Docs ([#1424](https://github.com/TecharoHQ/anubis/pull/1424))
-<!-- This changes the project to: -->

 ## v1.24.0: Y'shtola Rhul

@@ -244,7 +244,7 @@ function regexSafe(input: string): string;
 `regexSafe` takes a string and escapes it for safe use inside of a regular expression. This is useful when you are creating regular expressions from headers or variables such as `remoteAddress`.

 | Input                      | Output          |
-| :------------------------ | :------------------------------ |
+| :------------------------- | :-------------- |
 | `regexSafe("1.2.3.4")`     | `1\\.2\\.3\\.4` |
 | `regexSafe("techaro.lol")` | `techaro\\.lol` |
 | `regexSafe("star*")`       | `star\\*`       |
@@ -302,7 +302,7 @@ function arpaReverseIP(ip: string): string;
 `arpaReverseIP` takes an IP address and returns its value in [ARPA notation](https://www.ietf.org/rfc/rfc2317.html). This can be useful when matching PTR record patterns.

 | Input                          | Output                                                             |
-| :----------------------------- | :------------------------------------------------------------------- |
+| :----------------------------- | :----------------------------------------------------------------- |
 | `arpaReverseIP("1.2.3.4")`     | `4.3.2.1`                                                          |
 | `arpaReverseIP("2001:db8::1")` | `1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2` |
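These helpers are meant to be composed inside CEL policy expressions. A hedged sketch of one such rule (the rule layout is assumed for illustration; check the policy documentation for the exact schema):

```yaml
# Hypothetical rule: build a regex from a literal via regexSafe so the
# dots are escaped rather than treated as wildcards.
- name: example-bot
  action: ALLOW
  expression: userAgent.matches(regexSafe("ExampleBot/1.0"))
```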


@@ -0,0 +1,99 @@
# HAProxy
import CodeBlock from "@theme/CodeBlock";
To use Anubis with HAProxy, you have two variants:

- simple - stick Anubis between HAProxy and your application backend
  - perfect if you only have a single application
- advanced - force the Anubis challenge by default and let HAProxy route to the application backend if the challenge is correct
  - useful for complex setups
  - routing can be done in HAProxy
  - define ACLs in HAProxy for domains, paths, etc. that require or are excluded from Anubis
  - HAProxy 3.0 recommended
## Simple Variant
```mermaid
---
title: HAProxy with simple config
---
flowchart LR
T(User Traffic)
HAProxy(HAProxy Port 80/443)
Anubis
Application
T --> HAProxy
HAProxy --> Anubis
Anubis --> |Happy Traffic| Application
```
Your Anubis env file configuration may look like this:
import simpleAnubis from "!!raw-loader!./haproxy/simple-config.env";
<CodeBlock language="bash">{simpleAnubis}</CodeBlock>
The important part is that `TARGET` points to your actual application. If Anubis and HAProxy run on the same machine, a UNIX socket can be used.
Your frontend and backend configuration of HAProxy may look like the following:
import simpleHAProxy from "!!raw-loader!./haproxy/simple-haproxy.cfg";
<CodeBlock language="bash">{simpleHAProxy}</CodeBlock>
This enables SSL offloading, sets some useful and required headers, and routes all traffic directly to Anubis.
## Advanced Variant
Because HAProxy can decode JWTs, we can verify the Anubis token directly in HAProxy and route the traffic to the specific backends ourselves.
In this example, three applications sit behind one HAProxy frontend. Only App1 and App2 are secured via Anubis; App3 is open to everyone. The path `/excluded/path` can also be accessed by anyone.
```mermaid
---
title: HAProxy with advanced config
---
flowchart LR
T(User Traffic)
HAProxy(HAProxy Port 80/443)
B1(App1)
B2(App2)
B3(App3)
Anubis
T --> HAProxy
HAProxy --> |Traffic for App1 and App2 without valid challenge| Anubis
HAProxy --> |app1.example.com | B1
HAProxy --> |app2.example.com| B2
HAProxy --> |app3.example.com| B3
```
:::note
For improved JWT decoding performance, it's recommended to use HAProxy version 3.0 or above.
:::
Your Anubis env file configuration may look like this:
import advancedAnubis from "!!raw-loader!./haproxy/advanced-config.env";
<CodeBlock language="bash">{advancedAnubis}</CodeBlock>
It's important to use `HS512_SECRET`, which HAProxy understands. Replace `<SECRET-HERE>` with your own secret (an alphanumeric string of 128 characters is recommended).
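One way to generate such a secret (assuming `openssl` is available; any cryptographically secure random source works just as well):

```shell
# Generate a 128-character secret suitable for HS512_SECRET.
# 64 random bytes rendered as hex yields 128 alphanumeric characters.
openssl rand -hex 64
```

Use the same value in both the Anubis env file and the `jwt_verify` converter in the HAProxy config.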
You can set Anubis to force a challenge for every request using the following policy file:
import advancedAnubisPolicy from "!!raw-loader!./haproxy/advanced-config-policy.yml";
<CodeBlock language="yaml">{advancedAnubisPolicy}</CodeBlock>
The HAProxy config file may look like this:
import advancedHAProxy from "!!raw-loader!./haproxy/advanced-haproxy.cfg";
<CodeBlock language="haproxy">{advancedHAProxy}</CodeBlock>
Please replace `<SECRET-HERE>` with the same secret from the Anubis config.
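The restriction check in the HAProxy config compares the lowercase SHA-256 hex digest of the client IP (taken from `X-Real-IP`) against the JWT's `restriction` claim. When debugging a mismatch, you can reproduce the digest HAProxy computes with a shell one-liner (assuming `sha256sum` is available; the IP below is a placeholder):

```shell
# Same transformation as HAProxy's digest(sha256),hex,lower,
# applied to a placeholder client IP.
printf '%s' "203.0.113.7" | sha256sum | awk '{print $1}'
```

If this digest differs from the `restriction` claim in the cookie's JWT, the request is sent back to the Anubis backend for a fresh challenge.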


@@ -0,0 +1,15 @@
# /etc/anubis/challenge-any.yml
bots:
- name: any
action: CHALLENGE
user_agent_regex: .*
status_codes:
CHALLENGE: 403
DENY: 403
thresholds: []
dnsbl: false


@@ -0,0 +1,11 @@
# /etc/anubis/default.env
BIND=/run/anubis/default.sock
BIND_NETWORK=unix
DIFFICULTY=4
METRICS_BIND=:9090
# target is irrelevant here, backend routing happens in HAProxy
TARGET=http://0.0.0.0
HS512_SECRET=<SECRET-HERE>
COOKIE_DYNAMIC_DOMAIN=True
POLICY_FNAME=/etc/anubis/challenge-any.yml


@@ -0,0 +1,59 @@
# /etc/haproxy/haproxy.cfg
frontend FE-multiple-applications
mode http
bind :80
# ssl offloading on port 443 using a certificate from /etc/haproxy/ssl/ directory
bind :443 ssl crt /etc/haproxy/ssl/ alpn h2,http/1.1 ssl-min-ver TLSv1.2 no-tls-tickets
# set X-Real-IP header required for Anubis
http-request set-header X-Real-IP "%[src]"
# redirect HTTP to HTTPS
http-request redirect scheme https code 301 unless { ssl_fc }
# add HSTS header
http-response set-header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
# only force Anubis challenge for app1 and app2
acl acl_anubis_required hdr(host) -i "app1.example.com"
acl acl_anubis_required hdr(host) -i "app2.example.com"
# exclude Anubis for a specific path
acl acl_anubis_ignore path /excluded/path
# use Anubis if auth cookie not found
use_backend BE-anubis if acl_anubis_required !acl_anubis_ignore !{ req.cook(techaro.lol-anubis-auth) -m found }
# get payload of the JWT such as algorithm, expire time, restrictions
http-request set-var(txn.anubis_jwt_alg) req.cook(techaro.lol-anubis-auth),jwt_header_query('$.alg') if acl_anubis_required !acl_anubis_ignore
http-request set-var(txn.anubis_jwt_exp) cook(techaro.lol-anubis-auth),jwt_payload_query('$.exp','int') if acl_anubis_required !acl_anubis_ignore
http-request set-var(txn.anubis_jwt_res) cook(techaro.lol-anubis-auth),jwt_payload_query('$.restriction') if acl_anubis_required !acl_anubis_ignore
http-request set-var(txn.srcip) req.fhdr(X-Real-IP) if acl_anubis_required !acl_anubis_ignore
http-request set-var(txn.now) date() if acl_anubis_required !acl_anubis_ignore
# use Anubis if JWT has wrong algorithm, is expired, restrictions don't match or isn't signed with the correct key
use_backend BE-anubis if acl_anubis_required !acl_anubis_ignore !{ var(txn.anubis_jwt_alg) -m str HS512 }
use_backend BE-anubis if acl_anubis_required !acl_anubis_ignore { var(txn.anubis_jwt_exp),sub(txn.now) -m int lt 0 }
use_backend BE-anubis if acl_anubis_required !acl_anubis_ignore !{ var(txn.srcip),digest(sha256),hex,lower,strcmp(txn.anubis_jwt_res) eq 0 }
use_backend BE-anubis if acl_anubis_required !acl_anubis_ignore !{ cook(techaro.lol-anubis-auth),jwt_verify(txn.anubis_jwt_alg,"<SECRET-HERE>") -m int 1 }
# custom routing in HAProxy
use_backend BE-app1 if { hdr(host) -i "app1.example.com" }
use_backend BE-app2 if { hdr(host) -i "app2.example.com" }
use_backend BE-app3 if { hdr(host) -i "app3.example.com" }
backend BE-app1
mode http
server app1-server 127.0.0.1:3000
backend BE-app2
mode http
server app2-server 127.0.0.1:4000
backend BE-app3
mode http
server app3-server 127.0.0.1:5000
backend BE-anubis
mode http
server anubis /run/anubis/default.sock


@@ -0,0 +1,10 @@
# /etc/anubis/default.env
BIND=/run/anubis/default.sock
BIND_NETWORK=unix
SOCKET_MODE=0666
DIFFICULTY=4
METRICS_BIND=:9090
COOKIE_DYNAMIC_DOMAIN=true
# address and port of the actual application
TARGET=http://localhost:3000


@@ -0,0 +1,22 @@
# /etc/haproxy/haproxy.cfg
frontend FE-application
mode http
bind :80
# ssl offloading on port 443 using a certificate from /etc/haproxy/ssl/ directory
bind :443 ssl crt /etc/haproxy/ssl/ alpn h2,http/1.1 ssl-min-ver TLSv1.2 no-tls-tickets
# set X-Real-IP header required for Anubis
http-request set-header X-Real-IP "%[src]"
# redirect HTTP to HTTPS
http-request redirect scheme https code 301 unless { ssl_fc }
# add HSTS header
http-response set-header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
# route to Anubis backend by default
default_backend BE-anubis-application
backend BE-anubis-application
mode http
server anubis /run/anubis/default.sock


@@ -94,10 +94,8 @@ containers:
        - ALL
      seccompProfile:
        type: RuntimeDefault
```

Then add a Service entry for Anubis:

```yaml


@@ -1,8 +1,2 @@
# /etc/nginx/conf-anubis.inc
# Forward to anubis
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://anubis;
}


@@ -75,7 +75,7 @@ services:
# Telling Anubis, where to listen for Traefik # Telling Anubis, where to listen for Traefik
- BIND=:8080 - BIND=:8080
# Telling Anubis to do redirect — ensure there is a space after '=' # Telling Anubis to do redirect — ensure there is a space after '='
- 'TARGET= ' - "TARGET= "
# Specifies which domains Anubis is allowed to redirect to. # Specifies which domains Anubis is allowed to redirect to.
- REDIRECT_DOMAINS=example.com - REDIRECT_DOMAINS=example.com
# Should be the full external URL for Anubis (including scheme) # Should be the full external URL for Anubis (including scheme)


@@ -67,7 +67,7 @@ Currently the following settings are configurable via the policy file:
Anubis uses these environment variables for configuration:

| Environment Variable | Default value | Explanation |
| :----------------------------- | :---------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `ASSET_LOOKUP_HEADER` | unset | <EO /> If set, use the contents of this header in requests when looking up custom assets in `OVERLAY_FOLDER`. See [Header-based overlay dispatch](./botstopper.mdx#header-based-overlay-dispatch) for more details. |
| `BASE_PREFIX` | unset | If set, adds a global prefix to all Anubis endpoints (everything starting with `/.within.website/x/anubis/`). For example, setting this to `/myapp` would make Anubis accessible at `/myapp/` instead of `/`. This is useful when running Anubis behind a reverse proxy that routes based on path prefixes. |
| `BIND` | `:8923` | The network address that Anubis listens on. For `unix`, set this to a path: `/run/anubis/instance.sock` |
@@ -203,6 +203,7 @@ To get Anubis filtering your traffic, you need to make sure it's added to your H
- [Kubernetes](./environments/kubernetes.mdx)
- [Nginx](./environments/nginx.mdx)
- [Traefik](./environments/traefik.mdx)
- [HAProxy](./environments/haproxy.mdx)

:::note


@@ -143,3 +143,4 @@ For more details on particular reverse proxies, see here:
- [Apache](./environments/apache.mdx)
- [Nginx](./environments/nginx.mdx)
- [HAProxy](./environments/haproxy.mdx)


@@ -0,0 +1,13 @@
# AI Coding Policy
From an ideological standpoint, it would be nice to have the following AI coding policy:
> Anubis does not accept code made primarily with the use of agentic AI tools such as Claude Code, Gemini CLI, GitHub Copilot, Zed, OpenCode, or any other similar tools. Please do not use them when contributing to this repo.
However, I'd be in violation by doing this because I have knowingly committed minor bits of code to the Anubis repo that were generated by AI tools (mostly things for smoke tests).
As such, Anubis is taking more of a centrist approach with regards to AI coding tools: regardless of what tool you use to make contributions to Anubis, when you sign off your code, you are taking responsibility for what you commit. You are also expected to understand what you are changing, what the implications are, and all other relevant factors.
If you use AI coding tools for a majority of your committed work, you MUST disclose it with [the `Assisted-by` footer](https://xeiaso.net/notes/2025/assisted-by-footer/). The Anubis maintainers will be using tooling that looks for these footers and will prioritize scrutiny and level of attention appropriately.
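The footer is a regular git trailer alongside the sign-off; a hypothetical commit might look like this (the commit message and tool name below are illustrative, see the linked post for the exact format):

```shell
# Hypothetical commit disclosing AI assistance via a git trailer.
git commit -s \
  -m "feat: add request coalescing to the checker" \
  -m "Assisted-by: Claude Code"
```

Because trailers are machine-readable, `git log --format=%B` (or `git interpret-trailers`) can be used by tooling to find assisted commits.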
To ensure compliance with this policy, language has been placed in `AGENTS.md` and `CLAUDE.md` to encourage AI coding tools to add these footers.


@@ -38,6 +38,12 @@ Anubis is brought to you by sponsors and donors like:
<a href="https://distrust.co?utm_campaign=github&utm_medium=referral&utm_content=anubis">
  <img src="/img/sponsors/distrust-logo.webp" alt="Distrust" height="64" />
</a>
<a href="https://about.gitea.com?utm_campaign=github&utm_medium=referral&utm_content=anubis">
<img src="/img/sponsors/gitea-logo.webp" alt="Gitea" height="64" />
</a>
<a href="https://prolocation.net?utm_campaign=github&utm_medium=referral&utm_content=anubis">
<img src="/img/sponsors/prolocation-logo.svg" alt="Prolocation" height="64" />
</a>
<a href="https://terminaltrove.com/?utm_campaign=github&utm_medium=referral&utm_content=anubis&utm_source=abgh">
  <img
    src="/img/sponsors/terminal-trove.webp"


@@ -1,62 +1,62 @@
import { themes as prismThemes } from 'prism-react-renderer'; import { themes as prismThemes } from "prism-react-renderer";
import type { Config } from '@docusaurus/types'; import type { Config } from "@docusaurus/types";
import type * as Preset from '@docusaurus/preset-classic'; import type * as Preset from "@docusaurus/preset-classic";
// This runs in Node.js - Don't use client-side code here (browser APIs, JSX...) // This runs in Node.js - Don't use client-side code here (browser APIs, JSX...)
const config: Config = { const config: Config = {
title: 'Anubis', title: "Anubis",
tagline: 'Weigh the soul of incoming HTTP requests to protect your website!', tagline: "Weigh the soul of incoming HTTP requests to protect your website!",
favicon: 'img/favicon.ico', favicon: "img/favicon.ico",
// Set the production url of your site here // Set the production url of your site here
url: 'https://anubis.techaro.lol', url: "https://anubis.techaro.lol",
// Set the /<baseUrl>/ pathname under which your site is served // Set the /<baseUrl>/ pathname under which your site is served
// For GitHub pages deployment, it is often '/<projectName>/' // For GitHub pages deployment, it is often '/<projectName>/'
baseUrl: '/', baseUrl: "/",
// GitHub pages deployment config. // GitHub pages deployment config.
// If you aren't using GitHub pages, you don't need these. // If you aren't using GitHub pages, you don't need these.
organizationName: 'TecharoHQ', // Usually your GitHub org/user name. organizationName: "TecharoHQ", // Usually your GitHub org/user name.
projectName: 'anubis', // Usually your repo name. projectName: "anubis", // Usually your repo name.
onBrokenLinks: 'throw', onBrokenLinks: "throw",
onBrokenMarkdownLinks: 'warn', onBrokenMarkdownLinks: "warn",
// Even if you don't use internationalization, you can use this field to set // Even if you don't use internationalization, you can use this field to set
// useful metadata like html lang. For example, if your site is Chinese, you // useful metadata like html lang. For example, if your site is Chinese, you
// may want to replace "en" with "zh-Hans". // may want to replace "en" with "zh-Hans".
i18n: { i18n: {
defaultLocale: 'en', defaultLocale: "en",
locales: ['en'], locales: ["en"],
}, },
markdown: { markdown: {
mermaid: true, mermaid: true,
}, },
themes: ['@docusaurus/theme-mermaid'], themes: ["@docusaurus/theme-mermaid"],
presets: [ presets: [
[ [
'classic', "classic",
{ {
blog: { blog: {
showReadingTime: true, showReadingTime: true,
feedOptions: { feedOptions: {
type: ['rss', 'atom', "json"], type: ["rss", "atom", "json"],
xslt: true, xslt: true,
}, },
editUrl: 'https://github.com/TecharoHQ/anubis/tree/main/docs/', editUrl: "https://github.com/TecharoHQ/anubis/tree/main/docs/",
onInlineTags: 'warn', onInlineTags: "warn",
onInlineAuthors: 'warn', onInlineAuthors: "warn",
onUntruncatedBlogPosts: 'throw', onUntruncatedBlogPosts: "throw",
}, },
docs: { docs: {
sidebarPath: './sidebars.ts', sidebarPath: "./sidebars.ts",
editUrl: 'https://github.com/TecharoHQ/anubis/tree/main/docs/', editUrl: "https://github.com/TecharoHQ/anubis/tree/main/docs/",
}, },
theme: { theme: {
customCss: './src/css/custom.css', customCss: "./src/css/custom.css",
}, },
} satisfies Preset.Options, } satisfies Preset.Options,
], ],
@@ -67,47 +67,47 @@ const config: Config = {
respectPrefersColorScheme: true, respectPrefersColorScheme: true,
}, },
// Replace with your project's social card // Replace with your project's social card
image: 'img/social-card.jpg', image: "img/social-card.jpg",
navbar: { navbar: {
title: 'Anubis', title: "Anubis",
logo: { logo: {
alt: 'A happy jackal woman with brown hair and red eyes', alt: "A happy jackal woman with brown hair and red eyes",
src: 'img/favicon.webp', src: "img/favicon.webp",
}, },
items: [ items: [
{ to: '/blog', label: 'Blog', position: 'left' }, { to: "/blog", label: "Blog", position: "left" },
{ {
type: 'docSidebar', type: "docSidebar",
sidebarId: 'tutorialSidebar', sidebarId: "tutorialSidebar",
position: 'left', position: "left",
label: 'Docs', label: "Docs",
}, },
{ {
to: '/docs/admin/botstopper', to: "/docs/admin/botstopper",
label: "Unbranded Version", label: "Unbranded Version",
position: "left" position: "left",
}, },
{ {
href: 'https://github.com/TecharoHQ/anubis', href: "https://github.com/TecharoHQ/anubis",
label: 'GitHub', label: "GitHub",
position: 'right', position: "right",
}, },
{ {
href: 'https://github.com/sponsors/Xe', href: "https://github.com/sponsors/Xe",
label: "Sponsor the Project", label: "Sponsor the Project",
position: 'right' position: "right",
}, },
], ],
}, },
footer: { footer: {
style: 'dark', style: "dark",
links: [ links: [
{ {
title: 'Docs', title: "Docs",
items: [ items: [
{ {
label: 'Intro', label: "Intro",
to: '/docs/', to: "/docs/",
}, },
{ {
label: "Installation", label: "Installation",
@@ -116,32 +116,32 @@ const config: Config = {
], ],
}, },
{ {
title: 'Community', title: "Community",
items: [ items: [
{ {
label: 'GitHub Discussions', label: "GitHub Discussions",
href: 'https://github.com/TecharoHQ/anubis/discussions', href: "https://github.com/TecharoHQ/anubis/discussions",
}, },
{ {
label: 'Bluesky', label: "Bluesky",
href: 'https://bsky.app/profile/techaro.lol', href: "https://bsky.app/profile/techaro.lol",
}, },
], ],
}, },
{ {
title: 'More', title: "More",
items: [ items: [
{ {
label: 'Blog', label: "Blog",
to: '/blog', to: "/blog",
}, },
{ {
label: 'GitHub', label: "GitHub",
href: 'https://github.com/TecharoHQ/anubis', href: "https://github.com/TecharoHQ/anubis",
}, },
{ {
label: 'Status', label: "Status",
href: 'https://techarohq.github.io/status/' href: "https://techarohq.github.io/status/",
}, },
], ],
}, },
@@ -153,13 +153,13 @@ const config: Config = {
darkTheme: prismThemes.dracula, darkTheme: prismThemes.dracula,
magicComments: [ magicComments: [
{ {
className: 'code-block-diff-add-line', className: "code-block-diff-add-line",
line: 'diff-add' line: "diff-add",
}, },
{ {
className: 'code-block-diff-remove-line', className: "code-block-diff-remove-line",
line: 'diff-remove' line: "diff-remove",
} },
], ],
}, },
} satisfies Preset.ThemeConfig, } satisfies Preset.ThemeConfig,


@@ -1,4 +1,4 @@
import type {SidebarsConfig} from '@docusaurus/plugin-content-docs'; import type { SidebarsConfig } from "@docusaurus/plugin-content-docs";
// This runs in Node.js - Don't use client-side code here (browser APIs, JSX...) // This runs in Node.js - Don't use client-side code here (browser APIs, JSX...)
@@ -14,7 +14,7 @@ import type {SidebarsConfig} from '@docusaurus/plugin-content-docs';
*/ */
const sidebars: SidebarsConfig = { const sidebars: SidebarsConfig = {
// By default, Docusaurus generates a sidebar from the docs folder structure // By default, Docusaurus generates a sidebar from the docs folder structure
tutorialSidebar: [{type: 'autogenerated', dirName: '.'}], tutorialSidebar: [{ type: "autogenerated", dirName: "." }],
// But you can create a sidebar manually // But you can create a sidebar manually
/* /*


@@ -1,4 +1,4 @@
import styles from './styles.module.css'; import styles from "./styles.module.css";
export default function EnterpriseOnly({ link }) { export default function EnterpriseOnly({ link }) {
return ( return (


@@ -8,7 +8,9 @@
font-weight: 700; font-weight: 700;
padding: 0.5rem 1rem; /* py-2 px-4 */ padding: 0.5rem 1rem; /* py-2 px-4 */
border-radius: 9999px; /* rounded-full */ border-radius: 9999px; /* rounded-full */
box-shadow: 0 10px 15px -3px rgba(0, 0, 0, 0.1), 0 4px 6px -2px rgba(0, 0, 0, 0.05); /* shadow-lg approximation */ box-shadow:
0 10px 15px -3px rgba(0, 0, 0, 0.1),
0 4px 6px -2px rgba(0, 0, 0, 0.05); /* shadow-lg approximation */
display: inline-flex; /* flex */ display: inline-flex; /* flex */
align-items: center; /* items-center */ align-items: center; /* items-center */
} }

Binary file not shown (26 KiB).

docs/static/img/sponsors/gitea-logo.webp (binary, 19 KiB) not shown.


@@ -0,0 +1,37 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 15.1.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
width="250px" height="42px" viewBox="0 0.05 250 42" enable-background="new 0 0.05 250 42" xml:space="preserve">
<g>
<rect y="0.15" fill="#FFFFFF" width="42" height="42"/>
<polygon fill="#FF0600" points="0,42.05 10.5,42.05 10.5,10.55 31.5,10.55 31.5,31.55 21,31.55 10.5,42.05 42,42.05 42,0.05
0,0.05 "/>
<path fill="#222222" d="M59.1,24.95h-2.2v9.6h-5.3V8.05h7.5c5.7,0,7.5,3.3,7.5,8.5C66.6,21.65,64.9,24.95,59.1,24.95z M59,12.75h-2
v7.4h2c2.2,0,2.3-2,2.3-3.7C61.3,14.85,61.2,12.75,59,12.75z"/>
<path fill="#222222" d="M80.1,34.55l-3.2-10.3h-1.8v10.3h-5.3V8.05h7.6c5.8,0,7.4,3,7.4,8.1c0,2.8-0.4,5.4-3,7l3.9,11.5h-5.6V34.55
z M77.3,12.75h-2.2v6.7h2.2c2,0,2.1-1.8,2.1-3.4C79.4,14.55,79.3,12.75,77.3,12.75z"/>
<path fill="#222222" d="M101.2,32.149c-1.2,1.5-2.9,2.7-5.8,2.7s-4.6-1.2-5.8-2.7c-1.9-2.3-2-5.8-2-10.899c0-5.1,0.1-8.6,2-10.9
c1.2-1.5,2.9-2.7,5.8-2.7s4.6,1.2,5.8,2.7c1.9,2.3,2,5.8,2,10.9C103.2,26.35,103.1,29.85,101.2,32.149z M97.2,13.55
c-0.3-0.6-0.8-1-1.8-1s-1.5,0.4-1.8,1c-0.6,1.2-0.7,4.5-0.7,7.7c0,3.2,0.1,6.5,0.7,7.8c0.3,0.6,0.8,1,1.8,1s1.5-0.4,1.8-1
c0.6-1.2,0.7-4.5,0.7-7.8C97.9,18.05,97.8,14.75,97.2,13.55z"/>
<path fill="#222222" d="M106.5,34.55V8.05h5.3v21.8h6.9v4.8h-12.2V34.55z"/>
<path fill="#222222" d="M133.7,32.149c-1.2,1.5-2.9,2.7-5.8,2.7c-2.9,0-4.601-1.2-5.8-2.7c-1.9-2.3-2-5.8-2-10.899
c0-5.1,0.1-8.6,2-10.9c1.2-1.5,2.9-2.7,5.8-2.7c2.899,0,4.6,1.2,5.8,2.7c1.899,2.3,2,5.8,2,10.9
C135.7,26.35,135.6,29.85,133.7,32.149z M129.7,13.55c-0.3-0.6-0.8-1-1.8-1s-1.5,0.4-1.801,1c-0.6,1.2-0.7,4.5-0.7,7.7
c0,3.2,0.1,6.5,0.7,7.8c0.301,0.6,0.801,1,1.801,1s1.5-0.4,1.8-1c0.6-1.2,0.7-4.5,0.7-7.8C130.4,18.05,130.3,14.75,129.7,13.55z"/>
<path fill="#222222" d="M151.3,32.95c-1.3,1.3-2.899,1.899-5.1,1.899c-2.9,0-4.601-1.2-5.8-2.7c-1.9-2.3-2-5.8-2-10.899
c0-5.1,0.1-8.6,2-10.9c1.199-1.5,2.899-2.7,5.8-2.7c2.2,0,3.8,0.6,5.1,1.9c1.4,1.3,2.3,3.4,2.4,5.9h-5.3c0-0.7-0.101-1.5-0.4-2
c-0.3-0.6-0.8-1-1.8-1s-1.5,0.4-1.8,1c-0.601,1.2-0.7,4.5-0.7,7.7c0,3.2,0.1,6.5,0.7,7.8c0.3,0.6,0.8,1,1.8,1s1.5-0.4,1.8-1
c0.3-0.601,0.4-1.301,0.4-2.101h5.3C153.6,29.55,152.7,31.649,151.3,32.95z"/>
<path fill="#222222" d="M167.8,34.55l-0.899-4.1H160.8l-0.899,4.1h-5.5l6.899-26.5h5.2l6.8,26.5H167.8z M163.9,15.85l-2,9.8h4.1
L163.9,15.85z"/>
<path fill="#222222" d="M182.2,12.75v21.8h-5.3v-21.8h-4.5v-4.7H186.6v4.8H182.2V12.75z"/>
<path fill="#222222" d="M189.5,34.55V8.05h5.3v26.5H189.5z"/>
<path fill="#222222" d="M211.7,32.149c-1.2,1.5-2.9,2.7-5.8,2.7c-2.9,0-4.601-1.2-5.801-2.7c-1.899-2.3-2-5.8-2-10.899
c0-5.1,0.101-8.6,2-10.9c1.2-1.5,2.9-2.7,5.801-2.7c2.899,0,4.6,1.2,5.8,2.7c1.899,2.3,2,5.8,2,10.9
C213.7,26.35,213.5,29.85,211.7,32.149z M207.6,13.55c-0.3-0.6-0.8-1-1.8-1s-1.5,0.4-1.8,1c-0.6,1.2-0.7,4.5-0.7,7.7
c0,3.2,0.101,6.5,0.7,7.8c0.3,0.6,0.8,1,1.8,1s1.5-0.4,1.8-1c0.601-1.2,0.7-4.5,0.7-7.8C208.3,18.05,208.3,14.75,207.6,13.55z"/>
<path fill="#222222" d="M227.9,34.55l-5.601-13v13H217V8.05h4.6l5.5,13v-13h5.301v26.5H227.9z"/>
</g>
</svg>



@@ -5,7 +5,9 @@
*/ */
const h = (name, data = {}, children = []) => { const h = (name, data = {}, children = []) => {
const result = const result =
typeof name == "function" ? name(data) : Object.assign(document.createElement(name), data); typeof name == "function"
? name(data)
: Object.assign(document.createElement(name), data);
if (!Array.isArray(children)) { if (!Array.isArray(children)) {
children = [children]; children = [children];
} }


@@ -1,5 +1,3 @@
{ {
"bots": [ "bots": [{}]
{}
]
} }


@@ -8,9 +8,7 @@
"userAgent.startsWith(\"git/\") || userAgent.contains(\"libgit\")", "userAgent.startsWith(\"git/\") || userAgent.contains(\"libgit\")",
"\"Git-Protocol\" in headers && headers[\"Git-Protocol\"] == \"version=2\"\n" "\"Git-Protocol\" in headers && headers[\"Git-Protocol\"] == \"version=2\"\n"
], ],
"any": [ "any": ["userAgent.startsWith(\"evilbot/\")"]
"userAgent.startsWith(\"evilbot/\")"
]
} }
} }
] ]


@@ -2,10 +2,7 @@
"bots": [ "bots": [
{ {
"name": "everyones-invited", "name": "everyones-invited",
"remote_addresses": [ "remote_addresses": ["0.0.0.0/0", "::/0"],
"0.0.0.0/0",
"::/0"
],
"action": "ALLOW" "action": "ALLOW"
} }
] ]


@@ -2,8 +2,6 @@
{ {
"name": "ipv6-ula", "name": "ipv6-ula",
"action": "ALLOW", "action": "ALLOW",
"remote_addresses": [ "remote_addresses": ["fc00::/7"]
"fc00::/7"
]
} }
] ]


@@ -0,0 +1,44 @@
package policy
import (
"net/http"
"testing"
"github.com/TecharoHQ/anubis/internal/dns"
"github.com/TecharoHQ/anubis/lib/config"
"github.com/TecharoHQ/anubis/lib/store/memory"
)
func newTestDNS(t *testing.T) *dns.Dns {
t.Helper()
ctx := t.Context()
memStore := memory.New(ctx)
cache := dns.NewDNSCache(300, 300, memStore)
return dns.New(ctx, cache)
}
func TestCELChecker_MapIterationWrappers(t *testing.T) {
cfg := &config.ExpressionOrList{
Expression: `headers.exists(k, k == "Accept") && query.exists(k, k == "format")`,
}
checker, err := NewCELChecker(cfg, newTestDNS(t))
if err != nil {
t.Fatalf("creating CEL checker failed: %v", err)
}
req, err := http.NewRequest(http.MethodGet, "https://example.com/?format=json", nil)
if err != nil {
t.Fatalf("making request failed: %v", err)
}
req.Header.Set("Accept", "application/json")
got, err := checker.Check(req)
if err != nil {
t.Fatalf("checking expression failed: %v", err)
}
if !got {
t.Fatal("expected expression to evaluate true")
}
}


@@ -66,7 +66,9 @@ func (h HTTPHeaders) Get(key ref.Val) ref.Val {
return result return result
} }
func (h HTTPHeaders) Iterator() traits.Iterator { panic("TODO(Xe): implement me") } func (h HTTPHeaders) Iterator() traits.Iterator {
return newMapIterator(h.Header)
}
func (h HTTPHeaders) IsZeroValue() bool { func (h HTTPHeaders) IsZeroValue() bool {
return len(h.Header) == 0 return len(h.Header) == 0


@@ -0,0 +1,60 @@
package expressions
import (
"errors"
"maps"
"reflect"
"slices"
"github.com/google/cel-go/common/types"
"github.com/google/cel-go/common/types/ref"
"github.com/google/cel-go/common/types/traits"
)
var ErrNotImplemented = errors.New("expressions: not implemented")
type stringSliceIterator struct {
keys []string
idx int
}
func (s *stringSliceIterator) Value() any {
return s
}
func (s *stringSliceIterator) ConvertToNative(typeDesc reflect.Type) (any, error) {
return nil, ErrNotImplemented
}
func (s *stringSliceIterator) ConvertToType(typeValue ref.Type) ref.Val {
return types.NewErr("can't convert from %q to %q", types.IteratorType, typeValue)
}
func (s *stringSliceIterator) Equal(other ref.Val) ref.Val {
return types.NewErr("can't compare %q to %q", types.IteratorType, other.Type())
}
func (s *stringSliceIterator) Type() ref.Type {
return types.IteratorType
}
func (s *stringSliceIterator) HasNext() ref.Val {
return types.Bool(s.idx < len(s.keys))
}
func (s *stringSliceIterator) Next() ref.Val {
if s.HasNext() != types.True {
return nil
}
val := s.keys[s.idx]
s.idx++
return types.String(val)
}
func newMapIterator(m map[string][]string) traits.Iterator {
return &stringSliceIterator{
keys: slices.Collect(maps.Keys(m)),
idx: 0,
}
}


@@ -1,7 +1,6 @@
package expressions package expressions
import ( import (
"errors"
"net/url" "net/url"
"reflect" "reflect"
"strings" "strings"
@@ -11,8 +10,6 @@ import (
"github.com/google/cel-go/common/types/traits" "github.com/google/cel-go/common/types/traits"
) )
var ErrNotImplemented = errors.New("expressions: not implemented")
// URLValues is a type wrapper to expose url.Values into CEL programs. // URLValues is a type wrapper to expose url.Values into CEL programs.
type URLValues struct { type URLValues struct {
url.Values url.Values
@@ -69,7 +66,9 @@ func (u URLValues) Get(key ref.Val) ref.Val {
return result return result
} }
func (u URLValues) Iterator() traits.Iterator { panic("TODO(Xe): implement me") } func (u URLValues) Iterator() traits.Iterator {
return newMapIterator(u.Values)
}
func (u URLValues) IsZeroValue() bool { func (u URLValues) IsZeroValue() bool {
return len(u.Values) == 0 return len(u.Values) == 0


@@ -2,8 +2,6 @@
{ {
"name": "ipv6-ula", "name": "ipv6-ula",
"action": "ALLOW", "action": "ALLOW",
"remote_addresses": [ "remote_addresses": ["fc00::/7"]
"fc00::/7"
]
} }
] ]


@@ -2,8 +2,6 @@
{ {
"name": "ipv6-ula", "name": "ipv6-ula",
"action": "ALLOW", "action": "ALLOW",
"remote_addresses": [ "remote_addresses": ["fc00::/7"]
"fc00::/7"
]
} }
] ]

package-lock.json (generated, 1123 lines): diff suppressed because it is too large.


@@ -1,10 +1,10 @@
 {
   "name": "@techaro/anubis",
-  "version": "1.24.0",
+  "version": "1.25.0",
   "description": "",
   "main": "index.js",
   "scripts": {
-    "test": "npm run assets && go test ./...",
+    "test": "npm run assets && SKIP_INTEGRATION=1 go test ./...",
     "test:integration": "npm run assets && go test -v ./internal/test",
     "test:integration:podman": "npm run assets && go test -v ./internal/test --playwright-runner=podman",
     "test:integration:docker": "npm run assets && go test -v ./internal/test --playwright-runner=docker",
@@ -12,23 +12,58 @@
     "build": "npm run assets && go build -o ./var/anubis ./cmd/anubis",
     "dev": "npm run assets && go run ./cmd/anubis --use-remote-address --target http://localhost:3000",
     "container": "npm run assets && go run ./cmd/containerbuild",
-    "package": "yeet",
-    "lint": "make lint"
+    "package": "go tool yeet",
+    "lint": "make lint",
+    "prepare": "husky && go mod download",
+    "format": "prettier -w . 2>&1 >/dev/null && go run goimports -w ."
   },
   "author": "",
   "license": "ISC",
   "devDependencies": {
+    "@commitlint/cli": "^20.4.1",
+    "@commitlint/config-conventional": "^20.4.1",
+    "baseline-browser-mapping": "^2.9.19",
     "cssnano": "^7.1.2",
     "cssnano-preset-advanced": "^7.0.10",
-    "esbuild": "^0.27.2",
+    "esbuild": "^0.27.3",
+    "husky": "^9.1.7",
     "playwright": "^1.52.0",
     "postcss-cli": "^11.0.1",
     "postcss-import": "^16.1.1",
     "postcss-import-url": "^7.2.0",
-    "postcss-url": "^10.1.3"
+    "postcss-url": "^10.1.3",
+    "prettier": "^3.8.1"
   },
   "dependencies": {
     "@aws-crypto/sha256-js": "^5.2.0",
-    "preact": "^10.28.2"
+    "preact": "^10.28.3"
+  },
+  "commitlint": {
+    "extends": [
+      "@commitlint/config-conventional"
+    ],
+    "rules": {
+      "body-max-line-length": [
+        2,
+        "always",
+        99999
+      ],
+      "footer-max-line-length": [
+        2,
+        "always",
+        99999
+      ],
+      "signed-off-by": [
+        2,
+        "always"
+      ]
+    }
+  },
+  "prettier": {
+    "singleQuote": false,
+    "tabWidth": 2,
+    "semi": true,
+    "trailingComma": "all",
+    "printWidth": 80
   }
 }
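The commitlint configuration added above extends `@commitlint/config-conventional` and makes the `Signed-off-by` trailer mandatory. As an illustration only (the helper name is hypothetical; this is not commitlint's implementation), the `"signed-off-by": [2, "always"]` rule amounts to a check like:

```javascript
// Hypothetical sketch of what the "signed-off-by": [2, "always"] rule
// enforces: the commit message must contain a Signed-off-by trailer line.
function hasSignOff(commitMessage) {
  return commitMessage
    .trimEnd()
    .split("\n")
    .some((line) => line.startsWith("Signed-off-by: "));
}

console.log(hasSignOff("fix: enable CEL iterators\n\nSigned-off-by: Jason Cameron <jason.cameron@stanwith.me>")); // true
console.log(hasSignOff("fix: enable CEL iterators")); // false
```

In practice the check is run by `@commitlint/cli`, presumably wired in through the husky `prepare` script added in the same diff.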


@@ -3,13 +3,13 @@ import { createInterface } from "readline";
 async function getPage(path) {
   return fetch(`http://localhost:8923${path}`)
-    .then(resp => {
+    .then((resp) => {
       if (resp.status !== 200) {
         throw new Error(`wanted status 200, got status: ${resp.status}`);
       }
       return resp;
     })
-    .then(resp => resp.text());
+    .then((resp) => resp.text());
 }
 (async () => {


@@ -3,22 +3,22 @@ async function getChallengePage() {
     headers: {
       "Accept-Language": "en",
       "User-Agent": "CHALLENGE",
-    }
+    },
   })
-    .then(resp => {
+    .then((resp) => {
       if (resp.status !== 200) {
         throw new Error(`wanted status 200, got status: ${resp.status}`);
       }
       return resp;
     })
-    .then(resp => resp.text());
+    .then((resp) => resp.text());
 }
 (async () => {
   const page = await getChallengePage();
   if (!page.includes(`<html lang="de">`)) {
-    console.log(page)
+    console.log(page);
     throw new Error("force language smoke test failed");
   }


@@ -1,12 +1,14 @@
 async function fetchLanguages() {
-  return fetch("http://localhost:8923/.within.website/x/cmd/anubis/static/locales/manifest.json")
-    .then(resp => {
+  return fetch(
+    "http://localhost:8923/.within.website/x/cmd/anubis/static/locales/manifest.json",
+  )
+    .then((resp) => {
       if (resp.status !== 200) {
         throw new Error(`wanted status 200, got status: ${resp.status}`);
       }
       return resp;
     })
-    .then(resp => resp.json());
+    .then((resp) => resp.json());
 }
 async function getChallengePage(lang) {
@@ -14,15 +16,15 @@ async function getChallengePage(lang) {
     headers: {
       "Accept-Language": lang,
       "User-Agent": "CHALLENGE",
-    }
+    },
   })
-    .then(resp => {
+    .then((resp) => {
       if (resp.status !== 200) {
         throw new Error(`wanted status 200, got status: ${resp.status}`);
       }
       return resp;
     })
-    .then(resp => resp.text());
+    .then((resp) => resp.text());
 }
 (async () => {
@@ -42,7 +44,7 @@ async function getChallengePage(lang) {
     console.log(`getting for ${lang}`);
     const page = await getChallengePage(lang);
-    resultSheet[lang] = page.includes(`<html lang="${lang}">`)
+    resultSheet[lang] = page.includes(`<html lang="${lang}">`);
   }
   for (const [lang, result] of Object.entries(resultSheet)) {


@@ -3,16 +3,16 @@ import { statSync } from "fs";
 async function getPage(path) {
   return fetch(`http://localhost:8923${path}`, {
     headers: {
-      'User-Agent': 'CHALLENGE'
-    }
+      "User-Agent": "CHALLENGE",
+    },
   })
-    .then(resp => {
+    .then((resp) => {
       if (resp.status !== 200) {
         throw new Error(`wanted status 200, got status: ${resp.status}`);
       }
       return resp;
     })
-    .then(resp => resp.text());
+    .then((resp) => resp.text());
 }
 async function getFileSize(filePath) {
@@ -63,7 +63,9 @@ async function getFileSize(filePath) {
   // Verify that log file size increased
   if (finalSize <= initialSize) {
-    console.error("ERROR: Log file size did not increase after making requests!");
+    console.error(
+      "ERROR: Log file size did not increase after making requests!",
+    );
     failed = true;
   }
@@ -79,10 +81,14 @@ async function getFileSize(filePath) {
   console.log(`Successful requests: ${successCount}/${requests.length}`);
   if (failed) {
-    console.error("Test failed: Some requests failed or log file size did not increase");
+    console.error(
+      "Test failed: Some requests failed or log file size did not increase",
+    );
     process.exit(1);
   } else {
-    console.log("Test passed: All requests succeeded and log file size increased");
+    console.log(
+      "Test passed: All requests succeeded and log file size increased",
+    );
     process.exit(0);
   }
 })();


@@ -1,5 +1,4 @@
 # /etc/nginx/conf-anubis.inc
 # Forward to anubis
 location / {
   proxy_set_header Host $host;


@@ -3,22 +3,22 @@ async function getRobotsTxt() {
     headers: {
       "Accept-Language": "en",
       "User-Agent": "Mozilla/5.0",
-    }
+    },
   })
-    .then(resp => {
+    .then((resp) => {
       if (resp.status !== 200) {
         throw new Error(`wanted status 200, got status: ${resp.status}`);
       }
       return resp;
     })
-    .then(resp => resp.text());
+    .then((resp) => resp.text());
 }
 (async () => {
   const page = await getRobotsTxt();
   if (page.includes(`<html>`)) {
-    console.log(page)
+    console.log(page);
     throw new Error("serve robots.txt smoke test failed");
   }


@@ -1,4 +1,4 @@
-<!DOCTYPE html>
+<!doctype html>
 <html>
   <head>
     <title>Anubis works!</title>
@@ -11,7 +11,10 @@
       <p>If you see this, everything has gone according to keikaku.</p>
-      <img height=128 src="/.within.website/x/cmd/anubis/static/img/happy.webp"/>
+      <img
+        height="128"
+        src="/.within.website/x/cmd/anubis/static/img/happy.webp"
+      />
     </main>
   </body>
 </html>


@@ -1,19 +1,20 @@
 async function testWithUserAgent(userAgent) {
-  const statusCode =
-    await fetch("https://relayd.local.cetacean.club:3004/reqmeta", {
+  const statusCode = await fetch(
+    "https://relayd.local.cetacean.club:3004/reqmeta",
+    {
       headers: {
         "User-Agent": userAgent,
-      }
-    })
-    .then(resp => resp.status);
+      },
+    },
+  ).then((resp) => resp.status);
   return statusCode;
 }
 const codes = {
   allow: await testWithUserAgent("ALLOW"),
   challenge: await testWithUserAgent("CHALLENGE"),
-  deny: await testWithUserAgent("DENY")
-}
+  deny: await testWithUserAgent("DENY"),
+};
 const expected = {
   allow: 200,
@@ -26,5 +27,7 @@ console.log("CHALLENGE:", codes.challenge);
 console.log("DENY: ", codes.deny);
 if (JSON.stringify(codes) !== JSON.stringify(expected)) {
-  throw new Error(`wanted ${JSON.stringify(expected)}, got: ${JSON.stringify(codes)}`);
+  throw new Error(
+    `wanted ${JSON.stringify(expected)}, got: ${JSON.stringify(codes)}`,
+  );
 }


@@ -6,7 +6,10 @@ interface ProcessOptions {
 }
 const getHardwareConcurrency = () =>
-  navigator.hardwareConcurrency !== undefined ? navigator.hardwareConcurrency : 1;
+  navigator.hardwareConcurrency !== undefined
+    ? navigator.hardwareConcurrency
+    : 1;
 export default function process(
   options: ProcessOptions,
@@ -25,7 +27,10 @@ export default function process(
     workerMethod = "webcrypto";
   }
-  if (navigator.userAgent.includes("Firefox") || navigator.userAgent.includes("Goanna")) {
+  if (
+    navigator.userAgent.includes("Firefox") ||
+    navigator.userAgent.includes("Goanna")
+  ) {
     console.log("Firefox detected, using pure-JS fallback");
     workerMethod = "purejs";
   }
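The hunk above only reflows the user-agent condition, but the selection logic it wraps is easy to miss in diff form: the default worker method is `"webcrypto"`, and Firefox and Goanna engines get the pure-JS SHA-256 fallback instead. A minimal sketch with `navigator.userAgent` lifted to a parameter (the helper name is hypothetical; the real code inlines this in `process`):

```javascript
// Sketch of the worker-method selection from the hunk above, extracted as a
// pure function so it can run outside a browser. The real code reads
// navigator.userAgent directly; here it is a parameter.
function pickWorkerMethod(userAgent) {
  let workerMethod = "webcrypto";
  if (userAgent.includes("Firefox") || userAgent.includes("Goanna")) {
    workerMethod = "purejs";
  }
  return workerMethod;
}

console.log(pickWorkerMethod("Mozilla/5.0 (X11; Linux x86_64) Firefox/140.0")); // "purejs"
console.log(pickWorkerMethod("Mozilla/5.0 (X11; Linux x86_64) Chrome/130.0")); // "webcrypto"
```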


@@ -3,4 +3,4 @@ import fast from "./fast";
 export default {
   fast: fast,
   slow: fast, // XXX(Xe): slow is deprecated, but keep this around in case anything goes bad
-}
+};


@@ -2,13 +2,27 @@ import algorithms from "./algorithms";
 const defaultDifficulty = 4;
-const status: HTMLParagraphElement = document.getElementById("status") as HTMLParagraphElement;
-const difficultyInput: HTMLInputElement = document.getElementById("difficulty-input") as HTMLInputElement;
-const algorithmSelect: HTMLSelectElement = document.getElementById("algorithm-select") as HTMLSelectElement;
-const compareSelect: HTMLSelectElement = document.getElementById("compare-select") as HTMLSelectElement;
-const header: HTMLTableRowElement = document.getElementById("table-header") as HTMLTableRowElement;
-const headerCompare: HTMLTableSectionElement = document.getElementById("table-header-compare") as HTMLTableSectionElement;
-const results: HTMLTableRowElement = document.getElementById("results") as HTMLTableRowElement;
+const status: HTMLParagraphElement = document.getElementById(
+  "status",
+) as HTMLParagraphElement;
+const difficultyInput: HTMLInputElement = document.getElementById(
+  "difficulty-input",
+) as HTMLInputElement;
+const algorithmSelect: HTMLSelectElement = document.getElementById(
+  "algorithm-select",
+) as HTMLSelectElement;
+const compareSelect: HTMLSelectElement = document.getElementById(
+  "compare-select",
+) as HTMLSelectElement;
+const header: HTMLTableRowElement = document.getElementById(
+  "table-header",
+) as HTMLTableRowElement;
+const headerCompare: HTMLTableSectionElement = document.getElementById(
+  "table-header-compare",
+) as HTMLTableSectionElement;
+const results: HTMLTableRowElement = document.getElementById(
+  "results",
+) as HTMLTableRowElement;
 const setupControls = () => {
   if (defaultDifficulty == null) {
@@ -41,7 +55,12 @@ const benchmarkTrial = async (stats, difficulty, algorithm, signal) => {
     .join("");
   const t0 = performance.now();
-  const { hash, nonce } = await process({ basePrefix: "/", version: "devel" }, challenge, Number(difficulty), signal);
+  const { hash, nonce } = await process(
+    { basePrefix: "/", version: "devel" },
+    challenge,
+    Number(difficulty),
+    signal,
+  );
   const t1 = performance.now();
   console.log({ hash, nonce });


@@ -29,22 +29,25 @@ const getAvailableLanguages = async () => {
   }
   try {
-    const response = await fetch(`${basePrefix}/.within.website/x/cmd/anubis/static/locales/manifest.json`);
+    const response = await fetch(
+      `${basePrefix}/.within.website/x/cmd/anubis/static/locales/manifest.json`,
+    );
     if (response.ok) {
       const manifest = await response.json();
-      return manifest.supportedLanguages || ['en'];
+      return manifest.supportedLanguages || ["en"];
     }
   } catch (error) {
-    console.warn('Failed to load language manifest, falling back to default languages');
+    console.warn(
+      "Failed to load language manifest, falling back to default languages",
+    );
   }
   // Fallback to default languages if manifest loading fails
-  return ['en'];
+  return ["en"];
 };
 // Use the browser language from the HTML lang attribute which is set by the server settings or request headers
-const getBrowserLanguage = async () =>
-  document.documentElement.lang;
+const getBrowserLanguage = async () => document.documentElement.lang;
 // Load translations from JSON files
 const loadTranslations = async (lang) => {
@@ -54,12 +57,16 @@ const loadTranslations = async (lang) => {
   }
   try {
-    const response = await fetch(`${basePrefix}/.within.website/x/cmd/anubis/static/locales/${lang}.json`);
+    const response = await fetch(
+      `${basePrefix}/.within.website/x/cmd/anubis/static/locales/${lang}.json`,
+    );
     return await response.json();
   } catch (error) {
-    console.warn(`Failed to load translations for ${lang}, falling back to English`);
-    if (lang !== 'en') {
-      return await loadTranslations('en');
+    console.warn(
+      `Failed to load translations for ${lang}, falling back to English`,
+    );
+    if (lang !== "en") {
+      return await loadTranslations("en");
     }
     throw error;
   }
@@ -72,10 +79,10 @@ const getRedirectUrl = () => {
   }
   if (publicUrl && window.location.href.startsWith(publicUrl)) {
     const urlParams = new URLSearchParams(window.location.search);
-    return urlParams.get('redir');
+    return urlParams.get("redir");
   }
   return window.location.href;
-}
+};
 let translations = {};
 let currentLang;
@@ -95,20 +102,28 @@ const t = (key) => translations[`js_${key}`] || translations[key] || key;
 const dependencies = [
   {
     name: "Web Workers",
-    msg: t('web_workers_error'),
+    msg: t("web_workers_error"),
     value: window.Worker,
   },
   {
     name: "Cookies",
-    msg: t('cookies_error'),
+    msg: t("cookies_error"),
     value: navigator.cookieEnabled,
   },
 ];
-const status: HTMLParagraphElement = document.getElementById("status") as HTMLParagraphElement;
-const image: HTMLImageElement = document.getElementById("image") as HTMLImageElement;
-const title: HTMLHeadingElement = document.getElementById("title") as HTMLHeadingElement;
-const progress: HTMLDivElement = document.getElementById("progress") as HTMLDivElement;
+const status: HTMLParagraphElement = document.getElementById(
+  "status",
+) as HTMLParagraphElement;
+const image: HTMLImageElement = document.getElementById(
+  "image",
+) as HTMLImageElement;
+const title: HTMLHeadingElement = document.getElementById(
+  "title",
+) as HTMLHeadingElement;
+const progress: HTMLDivElement = document.getElementById(
+  "progress",
+) as HTMLDivElement;
 const anubisVersion = j("anubis_version");
 const basePrefix = j("anubis_base_prefix");
@@ -130,12 +145,12 @@ const t = (key) => translations[`js_${key}`] || translations[key] || key;
     progress.style.display = "none";
   };
-  status.innerHTML = t('calculating');
+  status.innerHTML = t("calculating");
   for (const { value, name, msg } of dependencies) {
     if (!value) {
       ohNoes({
-        titleMsg: `${t('missing_feature')} ${name}`,
+        titleMsg: `${t("missing_feature")} ${name}`,
         statusMsg: msg,
         imageSrc: imageURL("reject", anubisVersion, basePrefix),
       });
@@ -148,20 +163,20 @@ const t = (key) => translations[`js_${key}`] || translations[key] || key;
   const process = algorithms[rules.algorithm];
   if (!process) {
     ohNoes({
-      titleMsg: t('challenge_error'),
-      statusMsg: t('challenge_error_msg'),
+      titleMsg: t("challenge_error"),
+      statusMsg: t("challenge_error_msg"),
       imageSrc: imageURL("reject", anubisVersion, basePrefix),
     });
     return;
   }
-  status.innerHTML = `${t('calculating_difficulty')} ${rules.difficulty}, `;
+  status.innerHTML = `${t("calculating_difficulty")} ${rules.difficulty}, `;
   progress.style.display = "inline-block";
   // the whole text, including "Speed:", as a single node, because some browsers
   // (Firefox mobile) present screen readers with each node as a separate piece
   // of text.
-  const rateText = document.createTextNode(`${t('speed')} 0kH/s`);
+  const rateText = document.createTextNode(`${t("speed")} 0kH/s`);
   status.appendChild(rateText);
   let lastSpeedUpdate = 0;
@@ -180,7 +195,7 @@ const t = (key) => translations[`js_${key}`] || translations[key] || key;
       // only update the speed every second so it's less visually distracting
       if (delta - lastSpeedUpdate > 1000) {
         lastSpeedUpdate = delta;
-        rateText.data = `${t('speed')} ${(iters / delta).toFixed(3)}kH/s`;
+        rateText.data = `${t("speed")} ${(iters / delta).toFixed(3)}kH/s`;
       }
       // the probability of still being on the page is (1 - likelihood) ^ iters.
       // by definition, half of the time the progress bar only gets to half, so
@@ -192,13 +207,14 @@ const t = (key) => translations[`js_${key}`] || translations[key] || key;
       const distance = (1 - Math.pow(probability, 2)) * 100;
       progress["aria-valuenow"] = distance;
       if (progress.firstElementChild !== null) {
-        (progress.firstElementChild as HTMLElement).style.width = `${distance}%`;
+        (progress.firstElementChild as HTMLElement).style.width =
+          `${distance}%`;
       }
       if (probability < 0.1 && !showingApology) {
        status.append(
          document.createElement("br"),
-          document.createTextNode(t('verification_longer')),
+          document.createTextNode(t("verification_longer")),
        );
        showingApology = true;
      }
@@ -208,7 +224,9 @@ const t = (key) => translations[`js_${key}`] || translations[key] || key;
     console.log({ hash, nonce });
     if (userReadDetails) {
-      const container: HTMLDivElement = document.getElementById("progress") as HTMLDivElement;
+      const container: HTMLDivElement = document.getElementById(
+        "progress",
+      ) as HTMLDivElement;
       // Style progress bar as a continue button
       container.style.display = "flex";
@@ -224,7 +242,7 @@ const t = (key) => translations[`js_${key}`] || translations[key] || key;
       container.style.outlineOffset = "2px";
       container.style.width = "min(20rem, 90%)";
       container.style.margin = "1rem auto 2rem";
-      container.innerHTML = t('finished_reading');
+      container.innerHTML = t("finished_reading");
       function onDetailsExpand() {
         const redir = getRedirectUrl();
@@ -255,8 +273,8 @@ const t = (key) => translations[`js_${key}`] || translations[key] || key;
     }
   } catch (err) {
     ohNoes({
-      titleMsg: t('calculation_error'),
-      statusMsg: `${t('calculation_error_msg')} ${err.message}`,
+      titleMsg: t("calculation_error"),
+      statusMsg: `${t("calculation_error_msg")} ${err.message}`,
       imageSrc: imageURL("reject", anubisVersion, basePrefix),
     });
   }
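The progress math in the hunk above is terse: the comments state that the probability of still being on the page is `(1 - likelihood) ^ iters`, and that this probability is squared so the median solve does not leave the bar stuck at 50%. A minimal sketch, assuming `likelihood = 16 ** -difficulty` (one hex nibble per difficulty unit, matching the workers' nibble check; the actual definition of `likelihood` is outside this diff):

```javascript
// Sketch of the progress estimate described in the comments above.
// ASSUMPTION: likelihood = 16 ** -difficulty; the real value comes from
// code not shown in this diff.
function progressPercent(difficulty, iters) {
  const likelihood = Math.pow(16, -difficulty);
  // chance the proof-of-work search is still running after `iters` attempts
  const probability = Math.pow(1 - likelihood, iters);
  // square so the bar passes 50% well before the median solve time
  return (1 - Math.pow(probability, 2)) * 100;
}

console.log(progressPercent(4, 0)); // 0 (no work done yet)
```

At difficulty 4 the expected solve takes about 16^4 = 65536 hashes, at which point this estimate already reports roughly 86%.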


@@ -1,4 +1,4 @@
-import { Sha256 } from '@aws-crypto/sha256-js';
+import { Sha256 } from "@aws-crypto/sha256-js";
 const calculateSHA256 = (text) => {
   const hash = new Sha256();
@@ -12,7 +12,7 @@ function toHexString(arr: Uint8Array): string {
     .join("");
 }
-addEventListener('message', async ({ data: eventData }) => {
+addEventListener("message", async ({ data: eventData }) => {
   const { data, difficulty, threads } = eventData;
   let nonce = eventData.nonce;
   const isMainThread = nonce === 0;
@@ -34,7 +34,7 @@ addEventListener('message', async ({ data: eventData }) => {
   }
   if (isValid && isDifficultyOdd) {
-    if ((hashArray[requiredZeroBytes] >> 4) !== 0) {
+    if (hashArray[requiredZeroBytes] >> 4 !== 0) {
       isValid = false;
     }
   }
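Both workers apply the same validity rule: difficulty counts leading zero hex nibbles of the digest, so a byte-wise loop covers `floor(difficulty / 2)` full bytes, and for odd difficulties the extra check `hashArray[requiredZeroBytes] >> 4 !== 0` tests the high nibble of the next byte. A standalone sketch of that rule (the helper name is hypothetical; the workers inline this logic):

```javascript
// Standalone sketch of the leading-zero-nibble check the workers perform.
// `hashArray` is the SHA-256 digest as bytes; `difficulty` counts hex nibbles.
function meetsDifficulty(hashArray, difficulty) {
  const requiredZeroBytes = Math.floor(difficulty / 2);
  for (let i = 0; i < requiredZeroBytes; i++) {
    if (hashArray[i] !== 0) return false;
  }
  if (difficulty % 2 === 1) {
    // odd difficulty: the next byte's high nibble must also be zero
    if (hashArray[requiredZeroBytes] >> 4 !== 0) return false;
  }
  return true;
}

console.log(meetsDifficulty([0x00, 0x0f, 0xaa], 3)); // true  (nibbles 0,0,0,f)
console.log(meetsDifficulty([0x00, 0x1f, 0xaa], 3)); // false (third nibble is 1)
```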


@@ -6,7 +6,10 @@ const calculateSHA256 = async (input: string) => {
 };
 const toHexString = (byteArray: Uint8Array) => {
-  return byteArray.reduce((str, byte) => str + byte.toString(16).padStart(2, "0"), "");
+  return byteArray.reduce(
+    (str, byte) => str + byte.toString(16).padStart(2, "0"),
+    "",
+  );
 };
 addEventListener("message", async ({ data: eventData }) => {
@@ -31,7 +34,7 @@ addEventListener("message", async ({ data: eventData }) => {
   }
   if (isValid && isDifficultyOdd) {
-    if ((hashArray[requiredZeroBytes] >> 4) !== 0) {
+    if (hashArray[requiredZeroBytes] >> 4 !== 0) {
       isValid = false;
     }
   }


@@ -1,14 +1,11 @@
 $`npm run assets`;
-[
-  "amd64",
-  "arm64",
-  "ppc64le",
-  "riscv64",
-].forEach(goarch => {
-  [deb, rpm, tarball].forEach(method => method.build({
+["amd64", "arm64", "ppc64le", "riscv64"].forEach((goarch) => {
+  [deb, rpm, tarball].forEach((method) =>
+    method.build({
       name: "anubis",
-      description: "Anubis weighs the souls of incoming HTTP requests and uses a sha256 proof-of-work challenge in order to protect upstream resources from scraper bots.",
+      description:
+        "Anubis weighs the souls of incoming HTTP requests and uses a sha256 proof-of-work challenge in order to protect upstream resources from scraper bots.",
       homepage: "https://anubis.techaro.lol",
       license: "MIT",
       goarch,
@@ -26,7 +23,7 @@ $`npm run assets`;
         file.install("./run/anubis@.service", `${systemd}/anubis@.service`);
         file.install("./run/default.env", `${etc}/default.env`);
-        $`mkdir -p ${doc}/docs`
+        $`mkdir -p ${doc}/docs`;
         $`cp -a docs/docs ${doc}`;
         $`find ${doc} -name _category_.json -delete`;
         $`mkdir -p ${doc}/data`;
@@ -37,7 +34,8 @@ $`npm run assets`;
         $`cp -a data/crawlers ${doc}/data/crawlers`;
         $`cp -a data/meta ${doc}/data/meta`;
       },
-    }));
+    }),
+  );
 });
 // NOTE(Xe): Fixes #217. This is a "half baked" tarball that includes the harder
@@ -77,7 +75,7 @@ tarball.build({
   // vendor Go dependencies
   $`cd ${out} && go mod vendor`;
   // build NPM-bound dependencies
-  $`cd ${out} && npm ci && npm run assets && rm -rf node_modules`
+  $`cd ${out} && npm ci && npm run assets && rm -rf node_modules`;
   // write VERSION file
   $`echo ${git.tag()} > ${out}/VERSION`;
 },