Mirror of https://github.com/TecharoHQ/anubis.git
Synced 2026-04-05 16:28:17 +00:00

Compare commits: Xe/adjust-... to Xe/osiris (10 commits)
| Author | SHA1 | Date |
|---|---|---|
| | 4092180626 | |
| | 153da4f5ac | |
| | 89b6af05a3 | |
| | 9a711f1635 | |
| | dabbe63bb6 | |
| | 0aed7d3688 | |
| | 2af731033c | |
| | d9c4e37978 | |
| | 1eafebedbc | |
| | 115ee97d1d | |
@@ -22,13 +22,9 @@
        "unifiedjs.vscode-mdx",
        "a-h.templ",
        "redhat.vscode-yaml",
        "streetsidesoftware.code-spell-checker"
      ],
      "settings": {
        "chat.instructionsFilesLocations": {
          ".github/copilot-instructions.md": true
        }
      }
        "hashicorp.hcl",
        "fredwangwang.vscode-hcl-format"
      ]
    }
  }
}
.dockerignore, new file, 25 lines
@@ -0,0 +1,25 @@
.env
*.deb
*.rpm

# Additional package locks
pnpm-lock.yaml
yarn.lock

# Go binaries and test artifacts
main
*.test

node_modules

# MacOS
.DS_store

# Intellij
.idea

# how does this get here
doc/VERSION

web/static/js/*
!web/static/js/.gitignore
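The last two `.dockerignore` entries rely on rule ordering: a later `!pattern` re-includes files that an earlier pattern excluded, so everything under `web/static/js/` is dropped from the build context except its `.gitignore`. A toy evaluator (hypothetical, not Docker's actual matcher) sketches that ordering rule:

```shell
# Toy .dockerignore-style evaluator: rules apply in order, last match wins.
# The two patterns below mirror the tail of the file above.
matches() {
  path=$1 verdict=included
  while read -r pat; do
    case $pat in
      ''|'#'*) continue ;;                                  # blank lines and comments
      '!'*) case $path in ${pat#?}) verdict=included ;; esac ;;  # negation re-includes
      *)    case $path in $pat)     verdict=excluded ;; esac ;;
    esac
  done <<'EOF'
web/static/js/*
!web/static/js/.gitignore
EOF
  echo "$verdict"
}

matches web/static/js/app.js      # excluded
matches web/static/js/.gitignore  # included
```

Because the negation comes after the wildcard, the generated JS stays out of the image while the placeholder `.gitignore` survives.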
.github/FUNDING.yml (vendored), 3 lines changed
@@ -1,3 +1,2 @@
patreon: cadey
github: xe
liberapay: Xe
github: xe
.github/ISSUE_TEMPLATE/bug_report.yaml (vendored), deleted, 61 lines
@@ -1,61 +0,0 @@
name: Bug report
description: Create a report to help us improve

body:
  - type: textarea
    id: description-of-bug
    attributes:
      label: Describe the bug
      description: A clear and concise description of what the bug is.
      placeholder: I can reliably get an error when...
    validations:
      required: true

  - type: textarea
    id: steps-to-reproduce
    attributes:
      label: Steps to reproduce
      description: |
        Steps to reproduce the behavior.
      placeholder: |
        1. Go to the following url...
        2. Click on...
        3. You get the following error: ...
    validations:
      required: true

  - type: textarea
    id: expected-behavior
    attributes:
      label: Expected behavior
      description: |
        A clear and concise description of what you expected to happen.
        Ideally also describe *why* you expect it to happen.
      placeholder: Instead of displaying an error, it would...
    validations:
      required: true

  - type: input
    id: version-os
    attributes:
      label: Your operating system and its version.
      description: Unsure? Visit https://whatsmyos.com/
      placeholder: Android 13
    validations:
      required: true

  - type: input
    id: version-browser
    attributes:
      label: Your browser and its version.
      description: Unsure? Visit https://www.whatsmybrowser.org/
      placeholder: Firefox 142
    validations:
      required: true

  - type: textarea
    id: additional-context
    attributes:
      label: Additional context
      description: Add any other context about the problem here.
.github/ISSUE_TEMPLATE/config.yml (vendored), deleted, 5 lines
@@ -1,5 +0,0 @@
blank_issues_enabled: false
contact_links:
  - name: Security
    url: https://techaro.lol/contact
    about: Do not file security reports here. Email security@techaro.lol.
.github/ISSUE_TEMPLATE/feature_request.yaml (vendored), deleted, 39 lines
@@ -1,39 +0,0 @@
name: Feature request
description: Suggest an idea for this project
title: '[Feature request] '

body:
  - type: textarea
    id: description-of-bug
    attributes:
      label: Is your feature request related to a problem? Please describe.
      description: A clear and concise description of what the problem is that made you submit this report.
      placeholder: I am always frustrated, when...
    validations:
      required: true

  - type: textarea
    id: description-of-solution
    attributes:
      label: Solution you would like.
      description: A clear and concise description of what you want to happen.
      placeholder: Instead of behaving like this, there should be...
    validations:
      required: true

  - type: textarea
    id: alternatives
    attributes:
      label: Describe alternatives you have considered.
      description: A clear and concise description of any alternative solutions or features you have considered.
      placeholder: Another workaround that would work, is...
    validations:
      required: false

  - type: textarea
    id: additional-context
    attributes:
      label: Additional context
      description: Add any other context (such as mock-ups, proof of concepts or screenshots) about the feature request here.
    validations:
      required: false
.github/PULL_REQUEST_TEMPLATE.md (vendored), 1 line changed
@@ -9,4 +9,3 @@ Checklist:
- [ ] Added a description of the changes to the `[Unreleased]` section of docs/docs/CHANGELOG.md
- [ ] Added test cases to [the relevant parts of the codebase](https://anubis.techaro.lol/docs/developer/code-quality)
- [ ] Ran integration tests `npm run test:integration` (unsupported on Windows, please use WSL)
- [ ] All of my commits have [verified signatures](https://anubis.techaro.lol/docs/developer/signed-commits)
.github/actions/spelling/allow.txt (vendored), 16 lines changed
@@ -3,18 +3,4 @@ https
ssh
ubuntu
workarounds
rjack
msgbox
xeact
ABee
tencent
maintnotifications
azurediamond
cooldown
verifyfcrdns
Spintax
spintax
clampip
pseudoprofound
reimagining
iocaine
rjack
.github/actions/spelling/excludes.txt (vendored), 1 line changed
@@ -88,7 +88,6 @@
^docs/manifest/.*$
^docs/static/\.nojekyll$
^lib/policy/config/testdata/bad/unparseable\.json$
^internal/glob/glob_test.go$
ignore$
robots.txt
^lib/localization/locales/.*\.json$
.github/actions/spelling/expect.txt (vendored), 752 lines changed
@@ -1,409 +1,343 @@
Old list (409 entries, one per line in the file; condensed here):
acs Actorified actorifiedstore actorify Aibrew alibaba alrest amazonbot anthro anubis
anubistest apnic APNICRANDNETAU Applebot archlinux arpa asnc asnchecker asns aspirational
atuin azuretools badregexes bbolt bdba berr bezier bingbot Bitcoin bitrate
Bluesky blueskybot boi Bokm botnet botstopper BPort Brightbot broked buildah
byteslice Bytespider cachebuster cachediptoasn Caddyfile caninetools Cardyb celchecker celphase cerr
certresolver cespare CGNAT cgr chainguard chall challengemozilla challengetest checkpath checkresult
chibi cidranger ckie cloudflare Codespaces confd connnection containerbuild containerregistry coreutils
Cotoyogi Cromite crt Cscript daemonizing dayjob DDOS Debian debrpm decaymap
devcontainers Diffbot discordapp discordbot distros dnf dnsbl dnserr DNSTTL domainhere
dracula dronebl droneblresponse dropin dsilence duckduckbot eerror ellenjoe emacs enbyware
etld everyones evilbot evilsite expressionorlist externalagent externalfetcher extldflags facebookgo Factset
fahedouch fastcgi FCr fcrdns fediverse ffprobe financials finfos Firecrawl flagenv
Fordola forgejo forwardauth fsys fullchain gaissmai Galvus geoip geoipchecker gha
GHSA Ghz gipc gitea godotenv goland gomod goodbot googlebot gopsutil
govulncheck goyaml GPG GPT gptbot Graphene grpcprom grw gzw Hashcash
hashrate headermap healthcheck healthz hec helpdesk Hetzner hmc homelab hostable
htmlc htmx httpdebug huawei hypertext iaskspider iaso iat ifm Imagesift
imgproxy impressum inbox ingressed inp internets IPTo iptoasn isp iss
isset ivh Jenomis JGit jhjj joho journalctl jshelter JWTs kagi
kagibot Keyfunc keypair KHTML kinda KUBECONFIG lcj ldflags letsencrypt Lexentale
lfc lgbt licend licstart lightpanda limsa Linting listor LLU loadbalancer
lol lominsa maintainership malware mcr memes metarefresh metrix mimi Minfilia
mistralai mnt Mojeek mojeekbot mozilla myclient mymaster mypass myuser nbf
nepeat netsurf nginx nicksnyder nobots NONINFRINGEMENT nosleep nullglob oci OCOB
ogtag oklch omgili omgilibot openai opendns opengraph openrc oswald pag
palemoon Pangu parseable passthrough Patreon pgrep phrik pidfile pids pipefail
pki podkova podman Postgre poststart prebaked privkey promauto promhttp proofofwork
publicsuffix purejs pwcmd pwuser qualys qwant qwantbot rac rawler rcvar
redhat redir redirectscheme refactors remoteip reputational risc ruleset runlevels RUnlock
runtimedir runtimedirectory Ryzen sas sasl screenshots searchbot searx sebest secretplans
Semrush Seo setsebool shellcheck shirou shopt Sidetrade simprint sitemap sls
sni snipster Spambot sparkline spyderbot srv stackoverflow startprecmd stoppostcmd storetest
subgrid subr subrequest SVCNAME tagline tarballs tarrif taviso tbn tbr
techaro techarohq telegrambot templ templruntime testarea Thancred thoth thothmock Tik
Timpibot TLog traefik trunc uberspace Unbreak unbreakdocker unifiedjs unmarshal unparseable
uvx UXP valkey Varis Velen vendored verify vhosts vkbot VKE
vnd VPS Vultr weblate webmaster webpage websecure websites Webzio whois
wildbase withthothmock wolfbeast wordpress workaround workdir wpbot XCircle xeiaso xeserv
xesite xess xff XForwarded XNG XOB XOriginal XReal yae YAMLTo
Yda yeet yeetfile yourdomain yyz Zenos zizmor zombocom zos GLM
iocaine nikandfor pagegen pseudoprofound reimagining Rhul shoneypot spammer Y'shtola

New list (343 entries, one per line in the file; condensed here):
acs Aibrew alrest amazonbot anthro anubis anubistest Applebot archlinux asnc
asnchecker asns aspirational atuin azuretools badregexes bbolt bdba berr bingbot
Bitcoin bitrate Bluesky blueskybot boi botnet botstopper BPort Brightbot broked
byteslice Bytespider cachebuster cachediptoasn Caddyfile caninetools Cardyb celchecker celphase cerr
certresolver cespare CGNAT cgr chainguard chall challengemozilla challengetest checkpath checkresult
chibi cidranger ckie ckies cloudflare Codespaces confd connnection containerbuild coreutils
Cotoyogi Cromite crt Cscript daemonizing DDOS Debian debrpm decaymap devcontainers
Diffbot discordapp discordbot distros dnf dnsbl dnserr domainhere dracula dronebl
droneblresponse dropin duckduckbot eerror ellenjoe emacs enbyware etld everyones evilbot
evilsite expressionorlist externalagent externalfetcher extldflags facebookgo Factset fastcgi fediverse ffprobe
finfos Firecrawl flagenv Fordola forgejo fsys fullchain gaissmai Galvus geoip
geoipchecker gha gipc gitea godotenv goland gomod goodbot googlebot gopsutil
govulncheck goyaml GPG GPT gptbot grpcprom grw Hashcash hashrate headermap
healthcheck healthz hec hmc hostable htmlc htmx httpdebug Huawei hypertext
iaskspider iat ifm Imagesift imgproxy impressum inp internets IPTo iptoasn
iss isset ivh Jenomis JGit joho journalctl jshelter JWTs kagi
kagibot Keyfunc keypair KHTML kinda KUBECONFIG lcj ldflags letsencrypt Lexentale
lgbt licend licstart lightpanda limsa Linting linuxbrew LLU loadbalancer lol
lominsa maintainership malware mcr memes metarefresh metrix mimi Minfilia mistralai
Mojeek mojeekbot mozilla nbf nepeat netsurf nginx nicksnyder nobots NONINFRINGEMENT
nosleep OCOB ogtags omgili omgilibot openai opengraph openrc oswald pag
palemoon Pangu parseable passthrough Patreon pgrep phrik pidfile pids pipefail
pki podkova podman poststart prebaked privkey promauto promhttp proofofwork publicsuffix
pwcmd pwuser qualys qwant qwantbot rac rawler rcvar rdb redhat
redir redirectscheme refactors reputational risc ruleset runlevels RUnlock runtimedir sas
sasl searchbot searx sebest secretplans Semrush Seo setsebool shellcheck shirou
Sidetrade simprint sitemap sls sni Spambot sparkline spyderbot srv stackoverflow
startprecmd stoppostcmd storetest subgrid subr subrequest SVCNAME tagline tarballs tarrif
tbn tbr techaro techarohq templ templruntime testarea Thancred thoth thothmock
Tik Timpibot traefik uberspace Unbreak unbreakdocker unifiedjs unmarshal unparseable uvx
UXP valkey Varis Velen vendored vhosts videotest VKE Vultr waitloop
weblate webmaster webpage websecure websites Webzio wildbase withthothmock wordpress Workaround
workdir wpbot Xeact xeiaso xeserv xesite xess xff XForwarded XNG
XOB XReal yae YAMLTo yeet yeetfile yourdomain yoursite yyz Zenos
zizmor zombocom zos
.github/actions/spelling/patterns.txt (vendored), 4 lines changed
@@ -132,7 +132,3 @@ go install(?:\s+[a-z]+\.[-@\w/.]+)+
# hit-count: 1 file-count: 1
# microsoft
\b(?:https?://|)(?:(?:(?:blogs|download\.visualstudio|docs|msdn2?|research)\.|)microsoft|blogs\.msdn)\.co(?:m|\.\w\w)/[-_a-zA-Z0-9()=./%]*

# hit-count: 1 file-count: 1
# data url
\bdata:[-a-zA-Z=;:/0-9+]*,\S*
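The data-url pattern above tells the spell checker to skip inline `data:` URIs, whose base64 payloads would otherwise flood the unknown-word report. A quick sanity check (assuming GNU grep with PCRE support via `-P`; the sample URI is made up):

```shell
# The regex is copied verbatim from patterns.txt; -o prints only the match.
printf '%s\n' 'data:image/png;base64,iVBORw0KGgo=' \
  | grep -Po '\bdata:[-a-zA-Z=;:/0-9+]*,\S*'
# → data:image/png;base64,iVBORw0KGgo=
```

The character class covers the media type and `;base64` prefix, and `\S*` swallows the payload after the comma.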
.github/dependabot.yml (vendored), 6 lines changed
@@ -8,8 +8,6 @@ updates:
      github-actions:
        patterns:
          - "*"
    cooldown:
      default-days: 7

  - package-ecosystem: gomod
    directory: /
@@ -19,8 +17,6 @@ updates:
      gomod:
        patterns:
          - "*"
    cooldown:
      default-days: 7

  - package-ecosystem: npm
    directory: /
@@ -30,5 +26,3 @@ updates:
      npm:
        patterns:
          - "*"
    cooldown:
      default-days: 7
.github/workflows/asset-verification.yml (vendored), deleted, 72 lines
@@ -1,72 +0,0 @@
name: Asset Build Verification

on:
  push:
    branches: ["main"]
  pull_request:
    branches: ["main"]

permissions:
  contents: read

jobs:
  asset_verification:
    runs-on: ubuntu-24.04
    steps:
      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        with:
          persist-credentials: false

      - name: build essential
        run: |
          sudo apt-get update
          sudo apt-get install -y build-essential

      - uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
        with:
          node-version: '24.11.0'
      - uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
        with:
          go-version: '1.25.4'

      - name: install node deps
        run: |
          npm ci

      - name: Check for uncommitted changes before asset build
        id: check-changes-before
        run: |
          if [[ -n $(git status --porcelain) ]]; then
            echo "has_changes=true" >> $GITHUB_OUTPUT
          else
            echo "has_changes=false" >> $GITHUB_OUTPUT
          fi

      - name: Fail if there are uncommitted changes before build
        if: steps.check-changes-before.outputs.has_changes == 'true'
        run: |
          echo "There are uncommitted changes before running npm run assets"
          git status
          exit 1

      - name: Run asset build
        run: |
          npm run assets

      - name: Check for uncommitted changes after asset build
        id: check-changes-after
        run: |
          if [[ -n $(git status --porcelain) ]]; then
            echo "has_changes=true" >> $GITHUB_OUTPUT
          else
            echo "has_changes=false" >> $GITHUB_OUTPUT
          fi

      - name: Fail if assets generated changes
        if: steps.check-changes-after.outputs.has_changes == 'true'
        run: |
          echo "npm run assets generated uncommitted changes. This indicates the repository has outdated generated files."
          echo "Please run 'npm run assets' locally and commit the changes."
          git status
          git diff
          exit 1
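The guard this workflow runs before and after `npm run assets` boils down to one check: any output from `git status --porcelain` means the work tree is dirty. A minimal sketch of that guard, assuming it runs inside a git work tree:

```shell
# Fail (return 1) when the work tree has uncommitted or untracked changes,
# mirroring the check-changes-before / check-changes-after steps above.
fail_if_dirty() {
  if [ -n "$(git status --porcelain)" ]; then
    echo "uncommitted changes detected:" >&2
    git status --short >&2
    return 1
  fi
}
```

Running it before the build catches a dirty checkout; running it after catches generated files that were never committed.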
.github/workflows/docker-pr.yml (vendored), 65 lines changed
@@ -11,33 +11,68 @@ permissions:
  contents: read

jobs:
  build:
  buildx-bake:
    runs-on: ubuntu-24.04
    steps:
      - name: Checkout code
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          fetch-tags: true
          fetch-depth: 0
          persist-credentials: false

      - name: build essential
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0

      - name: Build and push
        id: build
        uses: docker/bake-action@76f9fa3a758507623da19f6092dc4089a7e61592 # v6.6.0
        with:
          source: .
          push: true
          sbom: true
          cache-from: type=gha
          cache-to: type=gha,mode=max
          set: |
            osiris.tags=ttl.sh/techaro/pr-${{ github.event.number }}/osiris:24h

  containerbuild:
    runs-on: ubuntu-24.04
    steps:
      - name: Checkout code
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          fetch-tags: true
          fetch-depth: 0
          persist-credentials: false

      - name: Set up Homebrew
        uses: Homebrew/actions/setup-homebrew@main

      - name: Setup Homebrew cellar cache
        uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
        with:
          path: |
            /home/linuxbrew/.linuxbrew/Cellar
            /home/linuxbrew/.linuxbrew/bin
            /home/linuxbrew/.linuxbrew/etc
            /home/linuxbrew/.linuxbrew/include
            /home/linuxbrew/.linuxbrew/lib
            /home/linuxbrew/.linuxbrew/opt
            /home/linuxbrew/.linuxbrew/sbin
            /home/linuxbrew/.linuxbrew/share
            /home/linuxbrew/.linuxbrew/var
          key: ${{ runner.os }}-go-homebrew-cellar-${{ hashFiles('go.sum') }}
          restore-keys: |
            ${{ runner.os }}-go-homebrew-cellar-

      - name: Install Brew dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y build-essential

      - uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
        with:
          node-version: '24.11.0'
      - uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
        with:
          go-version: '1.25.4'

      - uses: ko-build/setup-ko@d006021bd0c28d1ce33a07e7943d48b079944c8d # v0.9
          brew bundle

      - name: Docker meta
        id: meta
        uses: docker/metadata-action@c299e40c65443455700f0fdfc63efafe5b349051 # v5.10.0
        uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
        with:
          images: ghcr.io/${{ github.repository }}
.github/workflows/docker.yml (vendored), 73 lines changed
@@ -17,36 +17,77 @@ permissions:
  pull-requests: write

jobs:
  build:
  buildx-bake:
    runs-on: ubuntu-24.04
    steps:
      - name: Checkout code
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          fetch-tags: true
          fetch-depth: 0
          persist-credentials: false

      - name: build essential
        run: |
          sudo apt-get update
          sudo apt-get install -y build-essential
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0

      - name: Log into registry
        uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push
        id: build
        uses: docker/bake-action@76f9fa3a758507623da19f6092dc4089a7e61592 # v6.6.0
        with:
          source: .
          push: true
          sbom: true
          cache-from: type=gha
          cache-to: type=gha,mode=max
          set: ""

  containerbuild:
    runs-on: ubuntu-24.04
    steps:
      - name: Checkout code
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          fetch-tags: true
          fetch-depth: 0
          persist-credentials: false

      - name: Set lowercase image name
        run: |
          echo "IMAGE=ghcr.io/${GITHUB_REPOSITORY,,}" >> $GITHUB_ENV

      - uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
        with:
          node-version: '24.11.0'
      - uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
        with:
          go-version: '1.25.4'
      - name: Set up Homebrew
        uses: Homebrew/actions/setup-homebrew@main

      - uses: ko-build/setup-ko@d006021bd0c28d1ce33a07e7943d48b079944c8d # v0.9
      - name: Setup Homebrew cellar cache
        uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
        with:
          path: |
            /home/linuxbrew/.linuxbrew/Cellar
            /home/linuxbrew/.linuxbrew/bin
            /home/linuxbrew/.linuxbrew/etc
            /home/linuxbrew/.linuxbrew/include
            /home/linuxbrew/.linuxbrew/lib
            /home/linuxbrew/.linuxbrew/opt
            /home/linuxbrew/.linuxbrew/sbin
            /home/linuxbrew/.linuxbrew/share
            /home/linuxbrew/.linuxbrew/var
          key: ${{ runner.os }}-go-homebrew-cellar-${{ hashFiles('go.sum') }}
          restore-keys: |
            ${{ runner.os }}-go-homebrew-cellar-

      - name: Install Brew dependencies
        run: |
          brew bundle

      - name: Log into registry
        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
        uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
@@ -54,7 +95,7 @@ jobs:

      - name: Docker meta
        id: meta
        uses: docker/metadata-action@c299e40c65443455700f0fdfc63efafe5b349051 # v5.10.0
        uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
        with:
          images: ${{ env.IMAGE }}

@@ -68,7 +109,7 @@ jobs:
          SLOG_LEVEL: debug

      - name: Generate artifact attestation
        uses: actions/attest-build-provenance@977bb373ede98d70efdf65b84cb5f73e068dcc2a # v3.0.0
        uses: actions/attest-build-provenance@e8998f949152b193b063cb0ec769d69d929409be # v2.4.0
        with:
          subject-name: ${{ env.IMAGE }}
          subject-digest: ${{ steps.build.outputs.digest }}
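The "Set lowercase image name" step in the workflow above uses the bash `${VAR,,}` expansion, which lower-cases the whole value (bash 4+); this matters because ghcr.io image paths must be lower-case while `GITHUB_REPOSITORY` can contain mixed case. A minimal sketch with a sample value:

```shell
# ${GITHUB_REPOSITORY,,} lower-cases the repository slug before it is used
# as a ghcr.io image name (bash 4+ syntax; sample value for illustration).
GITHUB_REPOSITORY="TecharoHQ/anubis"
IMAGE="ghcr.io/${GITHUB_REPOSITORY,,}"
echo "$IMAGE"   # → ghcr.io/techarohq/anubis
```

In the workflow the result is appended to `$GITHUB_ENV` so later steps can read it as `${{ env.IMAGE }}`.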
.github/workflows/docs-deploy.yml (vendored), 10 lines changed
@@ -17,7 +17,7 @@ jobs:
    runs-on: ubuntu-24.04

    steps:
      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          persist-credentials: false

@@ -25,7 +25,7 @@ jobs:
        uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1

      - name: Log into registry
        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
        uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
        with:
          registry: ghcr.io
          username: techarohq
@@ -33,7 +33,7 @@ jobs:

      - name: Docker meta
        id: meta
        uses: docker/metadata-action@c299e40c65443455700f0fdfc63efafe5b349051 # v5.10.0
        uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
        with:
          images: ghcr.io/techarohq/anubis/docs
          tags: |
@@ -53,14 +53,14 @@ jobs:
          push: true

      - name: Apply k8s manifests to limsa lominsa
        uses: actions-hub/kubectl@2639090a038d46a3b9b98b220ae0837676ded8b7 # v1.34.3
        uses: actions-hub/kubectl@b5b19eeb6a0ffde16637e398f8b96ef01eb8fdb7 # v1.33.3
        env:
          KUBE_CONFIG: ${{ secrets.LIMSA_LOMINSA_KUBECONFIG }}
        with:
          args: apply -k docs/manifest

      - name: Apply k8s manifests to limsa lominsa
        uses: actions-hub/kubectl@2639090a038d46a3b9b98b220ae0837676ded8b7 # v1.34.3
        uses: actions-hub/kubectl@b5b19eeb6a0ffde16637e398f8b96ef01eb8fdb7 # v1.33.3
        env:
          KUBE_CONFIG: ${{ secrets.LIMSA_LOMINSA_KUBECONFIG }}
        with:
.github/workflows/docs-test.yml (vendored), 4 lines changed
@@ -13,7 +13,7 @@ jobs:
    runs-on: ubuntu-24.04

    steps:
      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          persist-credentials: false

@@ -22,7 +22,7 @@ jobs:

      - name: Docker meta
        id: meta
        uses: docker/metadata-action@c299e40c65443455700f0fdfc63efafe5b349051 # v5.10.0
        uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
        with:
          images: ghcr.io/techarohq/anubis/docs
          tags: |
76
.github/workflows/go-mod-tidy-check.yml
vendored
76
.github/workflows/go-mod-tidy-check.yml
vendored
@@ -1,76 +0,0 @@
name: Go Mod Tidy Check

on:
push:
branches: ["main"]
pull_request:
branches: ["main"]

permissions:
contents: read

jobs:
go_mod_tidy_check:
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false

- uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: '1.25.4'

- name: Check go.mod and go.sum in main directory
run: |
# Store original file state
cp go.mod go.mod.orig
cp go.sum go.sum.orig

# Run go mod tidy
go mod tidy

# Check if files changed
if ! diff -q go.mod.orig go.mod > /dev/null 2>&1; then
echo "ERROR: go.mod in main directory has changed after running 'go mod tidy'"
echo "Please run 'go mod tidy' locally and commit the changes"
diff go.mod.orig go.mod
exit 1
fi

if ! diff -q go.sum.orig go.sum > /dev/null 2>&1; then
echo "ERROR: go.sum in main directory has changed after running 'go mod tidy'"
echo "Please run 'go mod tidy' locally and commit the changes"
diff go.sum.orig go.sum
exit 1
fi

echo "SUCCESS: go.mod and go.sum in main directory are tidy"

- name: Check go.mod and go.sum in test directory
run: |
cd test

# Store original file state
cp go.mod go.mod.orig
cp go.sum go.sum.orig

# Run go mod tidy
go mod tidy

# Check if files changed
if ! diff -q go.mod.orig go.mod > /dev/null 2>&1; then
echo "ERROR: go.mod in test directory has changed after running 'go mod tidy'"
echo "Please run 'go mod tidy' locally and commit the changes"
diff go.mod.orig go.mod
exit 1
fi

if ! diff -q go.sum.orig go.sum > /dev/null 2>&1; then
echo "ERROR: go.sum in test directory has changed after running 'go mod tidy'"
echo "Please run 'go mod tidy' locally and commit the changes"
diff go.sum.orig go.sum
exit 1
fi

echo "SUCCESS: go.mod and go.sum in test directory are tidy"
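The deleted workflow above is a snapshot-and-diff pattern: copy the files, run the tidy step, and fail if anything changed. A minimal standalone sketch of that pattern (the file contents here are hypothetical stand-ins, and the real `go mod tidy` step is replaced with a no-op so the sketch runs anywhere):

```shell
#!/bin/sh
# Sketch of the tidy-check pattern: snapshot a file, run the (stand-in)
# tidy step, and fail if the snapshot differs afterwards.
set -eu

workdir=$(mktemp -d)
printf 'module example\n' > "$workdir/go.mod"
cp "$workdir/go.mod" "$workdir/go.mod.orig"

# In the real workflow this step is `go mod tidy`; a no-op keeps the
# sketch self-contained.
true

if ! diff -q "$workdir/go.mod.orig" "$workdir/go.mod" > /dev/null 2>&1; then
  echo "ERROR: go.mod has changed after running 'go mod tidy'"
  diff "$workdir/go.mod.orig" "$workdir/go.mod"
  exit 1
fi
echo "SUCCESS: go.mod is tidy"
```

Because the check only compares before/after snapshots, it also catches `go.sum` drift when applied to that file the same way.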
107
.github/workflows/go.yml
vendored
@@ -2,9 +2,9 @@ name: Go

on:
push:
branches: ["main"]
branches: [ "main" ]
pull_request:
branches: ["main"]
branches: [ "main" ]

permissions:
contents: read
@@ -15,50 +15,77 @@ jobs:
#runs-on: alrest-techarohq
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false

- name: build essential
run: |
sudo apt-get update
sudo apt-get install -y build-essential
- name: build essential
run: |
sudo apt-get update
sudo apt-get install -y build-essential

- uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
with:
node-version: '24.11.0'
- uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: '1.25.4'
- name: Set up Homebrew
uses: Homebrew/actions/setup-homebrew@main

- name: Cache playwright binaries
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
id: playwright-cache
with:
path: |
~/.cache/ms-playwright
key: ${{ runner.os }}-playwright-${{ hashFiles('**/go.sum') }}
- name: Setup Homebrew cellar cache
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: |
/home/linuxbrew/.linuxbrew/Cellar
/home/linuxbrew/.linuxbrew/bin
/home/linuxbrew/.linuxbrew/etc
/home/linuxbrew/.linuxbrew/include
/home/linuxbrew/.linuxbrew/lib
/home/linuxbrew/.linuxbrew/opt
/home/linuxbrew/.linuxbrew/sbin
/home/linuxbrew/.linuxbrew/share
/home/linuxbrew/.linuxbrew/var
key: ${{ runner.os }}-go-homebrew-cellar-${{ hashFiles('go.sum') }}
restore-keys: |
${{ runner.os }}-go-homebrew-cellar-

- name: install node deps
run: |
npm ci
- name: Install Brew dependencies
run: |
brew bundle

- name: install playwright browsers
run: |
npx --no-install playwright@1.52.0 install --with-deps
npx --no-install playwright@1.52.0 run-server --port 9001 &
- name: Setup Golang caches
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-golang-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-golang-

- name: Build
run: npm run build
- name: Cache playwright binaries
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
id: playwright-cache
with:
path: |
~/.cache/ms-playwright
key: ${{ runner.os }}-playwright-${{ hashFiles('**/go.sum') }}

- name: Test
run: npm run test
- name: install node deps
run: |
npm ci

- name: Lint with staticcheck
uses: dominikh/staticcheck-action@024238d2898c874f26d723e7d0ff4308c35589a2 # v1.4.0
with:
version: "latest"
- name: install playwright browsers
run: |
npx --no-install playwright@1.52.0 install --with-deps
npx --no-install playwright@1.52.0 run-server --port 9001 &

- name: Govulncheck
run: |
go tool govulncheck ./...
- name: Build
run: npm run build

- name: Test
run: npm run test

- name: Lint with staticcheck
uses: dominikh/staticcheck-action@024238d2898c874f26d723e7d0ff4308c35589a2 # v1.4.0
with:
version: "latest"

- name: Govulncheck
run: |
go tool govulncheck ./...
37
.github/workflows/package-builds-stable.yml
vendored
@@ -14,7 +14,7 @@ jobs:
#runs-on: alrest-techarohq
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
fetch-tags: true
@@ -25,12 +25,39 @@ jobs:
sudo apt-get update
sudo apt-get install -y build-essential

- uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
- name: Set up Homebrew
uses: Homebrew/actions/setup-homebrew@main

- name: Setup Homebrew cellar cache
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
node-version: '24.11.0'
- uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
path: |
/home/linuxbrew/.linuxbrew/Cellar
/home/linuxbrew/.linuxbrew/bin
/home/linuxbrew/.linuxbrew/etc
/home/linuxbrew/.linuxbrew/include
/home/linuxbrew/.linuxbrew/lib
/home/linuxbrew/.linuxbrew/opt
/home/linuxbrew/.linuxbrew/sbin
/home/linuxbrew/.linuxbrew/share
/home/linuxbrew/.linuxbrew/var
key: ${{ runner.os }}-go-homebrew-cellar-${{ hashFiles('go.sum') }}
restore-keys: |
${{ runner.os }}-go-homebrew-cellar-

- name: Install Brew dependencies
run: |
brew bundle

- name: Setup Golang caches
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
go-version: '1.25.4'
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-golang-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-golang-

- name: install node deps
run: |
81
.github/workflows/package-builds-unstable.yml
vendored
@@ -2,9 +2,9 @@ name: Package builds (unstable)

on:
push:
branches: ["main"]
branches: [ "main" ]
pull_request:
branches: ["main"]
branches: [ "main" ]

permissions:
contents: read
@@ -15,33 +15,60 @@ jobs:
#runs-on: alrest-techarohq
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
fetch-tags: true
fetch-depth: 0
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
fetch-tags: true
fetch-depth: 0

- name: build essential
run: |
sudo apt-get update
sudo apt-get install -y build-essential
- name: build essential
run: |
sudo apt-get update
sudo apt-get install -y build-essential

- uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
with:
node-version: '24.11.0'
- uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
go-version: '1.25.4'
- name: Set up Homebrew
uses: Homebrew/actions/setup-homebrew@main

- name: install node deps
run: |
npm ci
- name: Setup Homebrew cellar cache
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: |
/home/linuxbrew/.linuxbrew/Cellar
/home/linuxbrew/.linuxbrew/bin
/home/linuxbrew/.linuxbrew/etc
/home/linuxbrew/.linuxbrew/include
/home/linuxbrew/.linuxbrew/lib
/home/linuxbrew/.linuxbrew/opt
/home/linuxbrew/.linuxbrew/sbin
/home/linuxbrew/.linuxbrew/share
/home/linuxbrew/.linuxbrew/var
key: ${{ runner.os }}-go-homebrew-cellar-${{ hashFiles('go.sum') }}
restore-keys: |
${{ runner.os }}-go-homebrew-cellar-

- name: Build Packages
run: |
go tool yeet
- name: Install Brew dependencies
run: |
brew bundle

- uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: packages
path: var/*
- name: Setup Golang caches
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-golang-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-golang-

- name: install node deps
run: |
npm ci

- name: Build Packages
run: |
go tool yeet

- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: packages
path: var/*
32
.github/workflows/smoke-tests.yml
vendored
@@ -14,31 +14,24 @@ jobs:
strategy:
matrix:
test:
- default-config-macro
- docker-registry
- double_slash
- forced-language
- git-clone
- git-push
- healthcheck
- i18n
- log-file
- palemoon/amd64
#- palemoon/i386
- robots_txt
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
steps:
- name: Checkout code
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false

- uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
- uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
with:
node-version: '24.11.0'
- uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
node-version: latest

- uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
with:
go-version: '1.25.4'
go-version: stable

- uses: ko-build/setup-ko@d006021bd0c28d1ce33a07e7943d48b079944c8d # v0.9

@@ -50,14 +43,3 @@ jobs:
run: |
cd test/${{ matrix.test }}
backoff-retry --try-count 10 ./test.sh

- name: Sanitize artifact name
if: always()
run: echo "ARTIFACT_NAME=${{ matrix.test }}" | sed 's|/|-|g' >> $GITHUB_ENV

- name: Upload artifact
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
if: always()
with:
name: ${{ env.ARTIFACT_NAME }}
path: test/${{ matrix.test }}/var
4
.github/workflows/ssh-ci-runner-cron.yml
vendored
@@ -18,13 +18,13 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-tags: true
fetch-depth: 0
persist-credentials: false
- name: Log into registry
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
14
.github/workflows/ssh-ci.yml
vendored
@@ -12,17 +12,15 @@ permissions:
jobs:
ssh:
if: github.repository == 'TecharoHQ/anubis'
runs-on: alrest-techarohq
runs-on: ubuntu-24.04
strategy:
matrix:
host:
- riscv64
- ppc64le
- aarch64-4k
- aarch64-16k
- ubuntu@riscv64.techaro.lol
- ci@ppc64le.techaro.lol
steps:
- name: Checkout code
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-tags: true
fetch-depth: 0
@@ -35,9 +33,9 @@ jobs:
name: id_rsa
known_hosts: ${{ secrets.CI_SSH_KNOWN_HOSTS }}

- uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
- uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
with:
go-version: '1.25.4'
go-version: stable

- name: Run CI
run: go run ./utils/cmd/backoff-retry bash test/ssh-ci/rigging.sh ${{ matrix.host }}
6
.github/workflows/zizmor.yml
vendored
@@ -16,12 +16,12 @@ jobs:
security-events: write
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false

- name: Install the latest version of uv
uses: astral-sh/setup-uv@1e862dfacbd1d6d858c55d9b792c756523627244 # v7.1.4
uses: astral-sh/setup-uv@7edac99f961f18b581bbd960d59d049f04c0002f # v6.4.1

- name: Run zizmor 🌈
run: uvx zizmor --format sarif . > results.sarif
@@ -29,7 +29,7 @@ jobs:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

- name: Upload SARIF file
uses: github/codeql-action/upload-sarif@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
uses: github/codeql-action/upload-sarif@181d5eefc20863364f96762470ba6f862bdef56b # v3.29.2
with:
sarif_file: results.sarif
category: zizmor
3
.vscode/extensions.json
vendored
@@ -6,6 +6,7 @@
"unifiedjs.vscode-mdx",
"a-h.templ",
"redhat.vscode-yaml",
"streetsidesoftware.code-spell-checker"
"hashicorp.hcl",
"fredwangwang.vscode-hcl-format"
]
}
@@ -66,7 +66,7 @@ Anubis is a bit of a nuclear response. This will result in your website being bl

In most cases, you should not need this and can probably get by using Cloudflare to protect a given origin. However, for circumstances where you can't or won't use Cloudflare, Anubis is there for you.

If you want to try this out, visit the Anubis documentation site at [anubis.techaro.lol](https://anubis.techaro.lol).
If you want to try this out, connect to [anubis.techaro.lol](https://anubis.techaro.lol).

## Support
13
SECURITY.md
@@ -1,13 +0,0 @@
# Security Policy

Techaro follows the [Semver 2.0 scheme](https://semver.org/).

## Supported Versions

Techaro strives to support the two most recent minor versions of Anubis. Patches to those versions will be published as patch releases.

## Reporting a Vulnerability

Email security@techaro.lol with details on the vulnerability and reproduction steps. You will get a response as soon as possible.

Please take care to send your email as a mixed plaintext and HTML message. Messages with GPG signatures or that are plaintext only may be blocked by the spam filter.
@@ -11,7 +11,7 @@ var Version = "devel"

// CookieName is the name of the cookie that Anubis uses in order to validate
// access.
var CookieName = "techaro.lol-anubis"
var CookieName = "techaro.lol-anubis-auth"

// TestCookieName is the name of the cookie that Anubis uses in order to check
// if cookies are enabled on the client's browser.
@@ -23,9 +23,6 @@ const CookieDefaultExpirationTime = 7 * 24 * time.Hour
// BasePrefix is a global prefix for all Anubis endpoints. Can be emptied to remove the prefix entirely.
var BasePrefix = ""

// PublicUrl is the externally accessible URL for this Anubis instance.
var PublicUrl = ""

// StaticPath is the location where all static Anubis assets are located.
const StaticPath = "/.within.website/x/cmd/anubis/"

@@ -39,6 +36,3 @@ const DefaultDifficulty = 4
// ForcedLanguage is the language being used instead of the one of the request's Accept-Language header
// if being set.
var ForcedLanguage = ""

// UseSimplifiedExplanation can be set to true for using the simplified explanation
var UseSimplifiedExplanation = false
@@ -30,10 +30,10 @@ import (
"github.com/TecharoHQ/anubis"
"github.com/TecharoHQ/anubis/data"
"github.com/TecharoHQ/anubis/internal"
"github.com/TecharoHQ/anubis/internal/thoth"
libanubis "github.com/TecharoHQ/anubis/lib"
"github.com/TecharoHQ/anubis/lib/config"
botPolicy "github.com/TecharoHQ/anubis/lib/policy"
"github.com/TecharoHQ/anubis/lib/thoth"
"github.com/TecharoHQ/anubis/lib/policy/config"
"github.com/TecharoHQ/anubis/web"
"github.com/facebookgo/flagenv"
_ "github.com/joho/godotenv/autoload"
@@ -49,14 +49,11 @@ var (
cookieDomain = flag.String("cookie-domain", "", "if set, the top-level domain that the Anubis cookie will be valid for")
cookieDynamicDomain = flag.Bool("cookie-dynamic-domain", false, "if set, automatically set the cookie Domain value based on the request domain")
cookieExpiration = flag.Duration("cookie-expiration-time", anubis.CookieDefaultExpirationTime, "The amount of time the authorization cookie is valid for")
cookiePrefix = flag.String("cookie-prefix", anubis.CookieName, "prefix for browser cookies created by Anubis")
cookiePrefix = flag.String("cookie-prefix", "techaro.lol-anubis", "prefix for browser cookies created by Anubis")
cookiePartitioned = flag.Bool("cookie-partitioned", false, "if true, sets the partitioned flag on Anubis cookies, enabling CHIPS support")
difficultyInJWT = flag.Bool("difficulty-in-jwt", false, "if true, adds a difficulty field in the JWT claims")
useSimplifiedExplanation = flag.Bool("use-simplified-explanation", false, "if true, replaces the text when clicking \"Why am I seeing this?\" with a more simplified text for a non-tech-savvy audience.")
forcedLanguage = flag.String("forced-language", "", "if set, this language is being used instead of the one from the request's Accept-Language header")
hs512Secret = flag.String("hs512-secret", "", "secret used to sign JWTs, uses ed25519 if not set")
cookieSecure = flag.Bool("cookie-secure", true, "if true, sets the secure flag on Anubis cookies")
cookieSameSite = flag.String("cookie-same-site", "None", "sets the same site option on Anubis cookies, will auto-downgrade None to Lax if cookie-secure is false. Valid values are None, Lax, Strict, and Default.")
ed25519PrivateKeyHex = flag.String("ed25519-private-key-hex", "", "private key used to sign JWTs, if not set a random one will be assigned")
ed25519PrivateKeyHexFile = flag.String("ed25519-private-key-hex-file", "", "file name containing value for ed25519-private-key-hex")
metricsBind = flag.String("metrics-bind", ":9090", "network address to bind metrics to")
@@ -68,10 +65,9 @@ var (
slogLevel = flag.String("slog-level", "INFO", "logging level (see https://pkg.go.dev/log/slog#hdr-Levels)")
stripBasePrefix = flag.Bool("strip-base-prefix", false, "if true, strips the base prefix from requests forwarded to the target server")
target = flag.String("target", "http://localhost:3923", "target to reverse proxy to, set to an empty string to disable proxying when only using auth request")
targetSNI = flag.String("target-sni", "", "if set, TLS handshake hostname when forwarding requests to the target, if set to auto, use Host header")
targetSNI = flag.String("target-sni", "", "if set, the value of the TLS handshake hostname when forwarding requests to the target")
targetHost = flag.String("target-host", "", "if set, the value of the Host header when forwarding requests to the target")
targetInsecureSkipVerify = flag.Bool("target-insecure-skip-verify", false, "if true, skips TLS validation for the backend")
targetDisableKeepAlive = flag.Bool("target-disable-keepalive", false, "if true, disables HTTP keep-alive for the backend")
healthcheck = flag.Bool("healthcheck", false, "run a health check against Anubis")
useRemoteAddress = flag.Bool("use-remote-address", false, "read the client's IP address from the network request, useful for debugging and running Anubis on bare metal")
debugBenchmarkJS = flag.Bool("debug-benchmark-js", false, "respond to every request with a challenge for benchmarking hashrate")
@@ -81,14 +77,11 @@ var (
extractResources = flag.String("extract-resources", "", "if set, extract the static resources to the specified folder")
webmasterEmail = flag.String("webmaster-email", "", "if set, displays webmaster's email on the reject page for appeals")
versionFlag = flag.Bool("version", false, "print Anubis version")
publicUrl = flag.String("public-url", "", "the externally accessible URL for this Anubis instance, used for constructing redirect URLs (e.g., for forwardAuth).")
xffStripPrivate = flag.Bool("xff-strip-private", true, "if set, strip private addresses from X-Forwarded-For")
customRealIPHeader = flag.String("custom-real-ip-header", "", "if set, read remote IP from header of this name (in case your environment doesn't set X-Real-IP header)")

thothInsecure = flag.Bool("thoth-insecure", false, "if set, connect to Thoth over plain HTTP/2, don't enable this unless support told you to")
thothURL = flag.String("thoth-url", "", "if set, URL for Thoth, the IP reputation database for Anubis")
thothToken = flag.String("thoth-token", "", "if set, API token for Thoth, the IP reputation database for Anubis")
jwtRestrictionHeader = flag.String("jwt-restriction-header", "X-Real-IP", "If set, the JWT is only valid if the current value of this header matched the value when the JWT was created")
thothInsecure = flag.Bool("thoth-insecure", false, "if set, connect to Thoth over plain HTTP/2, don't enable this unless support told you to")
thothURL = flag.String("thoth-url", "", "if set, URL for Thoth, the IP reputation database for Anubis")
thothToken = flag.String("thoth-token", "", "if set, API token for Thoth, the IP reputation database for Anubis")
)
func keyFromHex(value string) (ed25519.PrivateKey, error) {
@@ -145,22 +138,6 @@ func parseBindNetFromAddr(address string) (string, string) {
return "", address
}

func parseSameSite(s string) http.SameSite {
switch strings.ToLower(s) {
case "none":
return http.SameSiteNoneMode
case "lax":
return http.SameSiteLaxMode
case "strict":
return http.SameSiteStrictMode
case "default":
return http.SameSiteDefaultMode
default:
log.Fatalf("invalid cookie same-site mode: %s, valid values are None, Lax, Strict, and Default", s)
}
return http.SameSiteDefaultMode
}

func setupListener(network string, address string) (net.Listener, string) {
formattedAddress := ""

@@ -208,7 +185,7 @@ func setupListener(network string, address string) (net.Listener, string) {
return listener, formattedAddress
}

func makeReverseProxy(target string, targetSNI string, targetHost string, insecureSkipVerify bool, targetDisableKeepAlive bool) (http.Handler, error) {
func makeReverseProxy(target string, targetSNI string, targetHost string, insecureSkipVerify bool) (http.Handler, error) {
targetUri, err := url.Parse(target)
if err != nil {
return nil, fmt.Errorf("failed to parse target URL: %w", err)
@@ -216,10 +193,6 @@ func makeReverseProxy(target string, targetSNI string, targetHost string, insecu

transport := http.DefaultTransport.(*http.Transport).Clone()

if targetDisableKeepAlive {
transport.DisableKeepAlives = true
}

// https://github.com/oauth2-proxy/oauth2-proxy/blob/4e2100a2879ef06aea1411790327019c1a09217c/pkg/upstream/http.go#L124
if targetUri.Scheme == "unix" {
// clean path up so we don't use the socket path in proxied requests
@@ -236,28 +209,23 @@ func makeReverseProxy(target string, targetSNI string, targetHost string, insecu

if insecureSkipVerify || targetSNI != "" {
transport.TLSClientConfig = &tls.Config{}
}
if insecureSkipVerify {
slog.Warn("TARGET_INSECURE_SKIP_VERIFY is set to true, TLS certificate validation will not be performed", "target", target)
transport.TLSClientConfig.InsecureSkipVerify = true
}
if targetSNI != "" && targetSNI != "auto" {
transport.TLSClientConfig.ServerName = targetSNI
if insecureSkipVerify {
slog.Warn("TARGET_INSECURE_SKIP_VERIFY is set to true, TLS certificate validation will not be performed", "target", target)
transport.TLSClientConfig.InsecureSkipVerify = true
}
if targetSNI != "" {
transport.TLSClientConfig.ServerName = targetSNI
}
}

rp := httputil.NewSingleHostReverseProxy(targetUri)
rp.Transport = transport

if targetHost != "" || targetSNI == "auto" {
if targetHost != "" {
originalDirector := rp.Director
rp.Director = func(req *http.Request) {
originalDirector(req)
if targetHost != "" {
req.Host = targetHost
}
if targetSNI == "auto" {
transport.TLSClientConfig.ServerName = req.Host
}
req.Host = targetHost
}
}
@@ -273,11 +241,9 @@ func main() {
return
}

internal.InitSlog(*slogLevel)
internal.SetHealth("anubis", healthv1.HealthCheckResponse_NOT_SERVING)

lg := internal.InitSlog(*slogLevel, os.Stderr)
lg.Info("starting up Anubis")

if *healthcheck {
log.Println("running healthcheck")
if err := doHealthCheck(); err != nil {
@@ -305,14 +271,14 @@ func main() {

if *metricsBind != "" {
wg.Add(1)
go metricsServer(ctx, *lg.With("subsystem", "metrics"), wg.Done)
go metricsServer(ctx, wg.Done)
}

var rp http.Handler
// when using anubis via Systemd and environment variables, then it is not possible to set targe to an empty string but only to space
if strings.TrimSpace(*target) != "" {
var err error
rp, err = makeReverseProxy(*target, *targetSNI, *targetHost, *targetInsecureSkipVerify, *targetDisableKeepAlive)
rp, err = makeReverseProxy(*target, *targetSNI, *targetHost, *targetInsecureSkipVerify)
if err != nil {
log.Fatalf("can't make reverse proxy: %v", err)
}
@@ -325,11 +291,11 @@ func main() {
// Thoth configuration
switch {
case *thothURL != "" && *thothToken == "":
lg.Warn("THOTH_URL is set but no THOTH_TOKEN is set")
slog.Warn("THOTH_URL is set but no THOTH_TOKEN is set")
case *thothURL == "" && *thothToken != "":
lg.Warn("THOTH_TOKEN is set but no THOTH_URL is set")
slog.Warn("THOTH_TOKEN is set but no THOTH_URL is set")
case *thothURL != "" && *thothToken != "":
lg.Debug("connecting to Thoth")
slog.Debug("connecting to Thoth")
thothClient, err := thoth.New(ctx, *thothURL, *thothToken, *thothInsecure)
if err != nil {
log.Fatalf("can't dial thoth at %s: %v", *thothURL, err)
@@ -338,24 +304,10 @@ func main() {
ctx = thoth.With(ctx, thothClient)
}

lg.Info("loading policy file", "fname", *policyFname)
policy, err := libanubis.LoadPoliciesOrDefault(ctx, *policyFname, *challengeDifficulty, *slogLevel)
policy, err := libanubis.LoadPoliciesOrDefault(ctx, *policyFname, *challengeDifficulty)
if err != nil {
log.Fatalf("can't parse policy file: %v", err)
}
lg = policy.Logger
lg.Debug("swapped to new logger")
slog.SetDefault(lg)

// Warn if persistent storage is used without a configured signing key
if policy.Store.IsPersistent() {
if *hs512Secret == "" && *ed25519PrivateKeyHex == "" && *ed25519PrivateKeyHexFile == "" {
lg.Warn("[misconfiguration] persistent storage backend is configured, but no private key is set. " +
"Challenges will be invalidated when Anubis restarts. " +
"Set HS512_SECRET, ED25519_PRIVATE_KEY_HEX, or ED25519_PRIVATE_KEY_HEX_FILE to ensure challenges survive service restarts. " +
"See: https://anubis.techaro.lol/docs/admin/installation#key-generation")
}
}

ruleErrorIDs := make(map[string]string)
for _, rule := range policy.Bots {
@@ -413,7 +365,7 @@ func main() {
log.Fatalf("failed to generate ed25519 key: %v", err)
}

lg.Warn("generating random key, Anubis will have strange behavior when multiple instances are behind the same load balancer target, for more information: see https://anubis.techaro.lol/docs/admin/installation#key-generation")
slog.Warn("generating random key, Anubis will have strange behavior when multiple instances are behind the same load balancer target, for more information: see https://anubis.techaro.lol/docs/admin/installation#key-generation")
}

var redirectDomainsList []string
@@ -427,13 +379,12 @@ func main() {
redirectDomainsList = append(redirectDomainsList, strings.TrimSpace(domain))
}
} else {
lg.Warn("REDIRECT_DOMAINS is not set, Anubis will only redirect to the same domain a request is coming from, see https://anubis.techaro.lol/docs/admin/configuration/redirect-domains")
slog.Warn("REDIRECT_DOMAINS is not set, Anubis will only redirect to the same domain a request is coming from, see https://anubis.techaro.lol/docs/admin/configuration/redirect-domains")
}

anubis.CookieName = *cookiePrefix + "-auth"
anubis.TestCookieName = *cookiePrefix + "-cookie-verification"
anubis.ForcedLanguage = *forcedLanguage
anubis.UseSimplifiedExplanation = *useSimplifiedExplanation

// If OpenGraph configuration values are not set in the config file, use the
// values from flags / envvars.
@@ -445,30 +396,22 @@ func main() {
}

s, err := libanubis.New(libanubis.Options{
BasePrefix: *basePrefix,
StripBasePrefix: *stripBasePrefix,
Next: rp,
Policy: policy,
TargetHost: *targetHost,
TargetSNI: *targetSNI,
TargetInsecureSkipVerify: *targetInsecureSkipVerify,
ServeRobotsTXT: *robotsTxt,
ED25519PrivateKey: ed25519Priv,
HS512Secret: []byte(*hs512Secret),
CookieDomain: *cookieDomain,
CookieDynamicDomain: *cookieDynamicDomain,
CookieExpiration: *cookieExpiration,
CookiePartitioned: *cookiePartitioned,
RedirectDomains: redirectDomainsList,
Target: *target,
WebmasterEmail: *webmasterEmail,
OpenGraph: policy.OpenGraph,
CookieSecure: *cookieSecure,
CookieSameSite: parseSameSite(*cookieSameSite),
PublicUrl: *publicUrl,
JWTRestrictionHeader: *jwtRestrictionHeader,
Logger: policy.Logger.With("subsystem", "anubis"),
DifficultyInJWT: *difficultyInJWT,
BasePrefix: *basePrefix,
StripBasePrefix: *stripBasePrefix,
Next: rp,
Policy: policy,
ServeRobotsTXT: *robotsTxt,
ED25519PrivateKey: ed25519Priv,
HS512Secret: []byte(*hs512Secret),
CookieDomain: *cookieDomain,
CookieDynamicDomain: *cookieDynamicDomain,
CookieExpiration: *cookieExpiration,
CookiePartitioned: *cookiePartitioned,
RedirectDomains: redirectDomainsList,
Target: *target,
WebmasterEmail: *webmasterEmail,
OpenGraph: policy.OpenGraph,
CookieSecure: *cookieSecure,
})
if err != nil {
log.Fatalf("can't construct libanubis.Server: %v", err)
@@ -476,7 +419,6 @@ func main() {

var h http.Handler
h = s
h = internal.CustomRealIPHeader(*customRealIPHeader, h)
h = internal.RemoteXRealIP(*useRemoteAddress, *bindNetwork, h)
h = internal.XForwardedForToXRealIP(h)
|
||||
h = internal.XForwardedForUpdate(*xffStripPrivate, h)
|
||||
@@ -484,7 +426,7 @@ func main() {
|
||||
|
||||
srv := http.Server{Handler: h, ErrorLog: internal.GetFilteredHTTPLogger()}
|
||||
listener, listenerUrl := setupListener(*bindNetwork, *bind)
|
||||
lg.Info(
|
||||
slog.Info(
|
||||
"listening",
|
||||
"url", listenerUrl,
|
||||
"difficulty", *challengeDifficulty,
|
||||
@@ -498,7 +440,6 @@ func main() {
|
||||
"base-prefix", *basePrefix,
|
||||
"cookie-expiration-time", *cookieExpiration,
|
||||
"rule-error-ids", ruleErrorIDs,
|
||||
"public-url", *publicUrl,
|
||||
)
|
||||
|
||||
go func() {
|
||||
@@ -518,7 +459,7 @@ func main() {
|
||||
wg.Wait()
|
||||
}
|
||||
|
||||
func metricsServer(ctx context.Context, lg slog.Logger, done func()) {
|
||||
func metricsServer(ctx context.Context, done func()) {
|
||||
defer done()
|
||||
|
||||
mux := http.NewServeMux()
|
||||
@@ -544,7 +485,7 @@ func metricsServer(ctx context.Context, lg slog.Logger, done func()) {
|
||||
|
||||
srv := http.Server{Handler: mux, ErrorLog: internal.GetFilteredHTTPLogger()}
|
||||
listener, metricsUrl := setupListener(*metricsBindNetwork, *metricsBind)
|
||||
lg.Debug("listening for metrics", "url", metricsUrl)
|
||||
slog.Debug("listening for metrics", "url", metricsUrl)
|
||||
|
||||
go func() {
|
||||
<-ctx.Done()
|
||||
|
||||
@@ -28,7 +28,7 @@ func main() {
|
||||
flagenv.Parse()
|
||||
flag.Parse()
|
||||
|
||||
slog.SetDefault(internal.InitSlog(*slogLevel, os.Stderr))
|
||||
internal.InitSlog(*slogLevel)
|
||||
|
||||
koDockerRepo := strings.TrimSuffix(*dockerRepo, "/"+filepath.Base(*dockerRepo))
|
||||
|
||||
@@ -46,11 +46,6 @@ func main() {
|
||||
)
|
||||
}
|
||||
|
||||
if strings.Contains(*dockerTags, ",") {
|
||||
newTags := strings.Join(strings.Split(*dockerTags, ","), "\n")
|
||||
dockerTags = &newTags
|
||||
}
|
||||
|
||||
setOutput("docker_image", strings.SplitN(*dockerTags, "\n", 2)[0])
|
||||
|
||||
version, err := run("git describe --tags --always --dirty")
|
||||
|
||||
39  cmd/osiris/internal/config/bind.go  Normal file
@@ -0,0 +1,39 @@
package config

import (
	"errors"
	"fmt"
	"net"
)

var (
	ErrInvalidHostpost = errors.New("bind: invalid host:port")
)

type Bind struct {
	HTTP    string `hcl:"http"`
	HTTPS   string `hcl:"https"`
	Metrics string `hcl:"metrics"`
}

func (b *Bind) Valid() error {
	var errs []error

	if _, _, err := net.SplitHostPort(b.HTTP); err != nil {
		errs = append(errs, fmt.Errorf("%w %q: %w", ErrInvalidHostpost, b.HTTP, err))
	}

	if _, _, err := net.SplitHostPort(b.HTTPS); err != nil {
		errs = append(errs, fmt.Errorf("%w %q: %w", ErrInvalidHostpost, b.HTTPS, err))
	}

	if _, _, err := net.SplitHostPort(b.Metrics); err != nil {
		errs = append(errs, fmt.Errorf("%w %q: %w", ErrInvalidHostpost, b.Metrics, err))
	}

	if len(errs) != 0 {
		return errors.Join(errs...)
	}

	return nil
}
55  cmd/osiris/internal/config/bind_test.go  Normal file
@@ -0,0 +1,55 @@
package config

import (
	"errors"
	"net"
	"testing"
)

func TestBindValid(t *testing.T) {
	for _, tt := range []struct {
		name         string
		precondition func(t *testing.T)
		bind         Bind
		err          error
	}{
		{
			name:         "basic",
			precondition: nil,
			bind: Bind{
				HTTP:    ":8081",
				HTTPS:   ":8082",
				Metrics: ":8083",
			},
			err: nil,
		},
		{
			name: "invalid ports",
			precondition: func(t *testing.T) {
				ln, err := net.Listen("tcp", ":8081")
				if err != nil {
					t.Fatal(err)
				}
				t.Cleanup(func() { ln.Close() })
			},
			bind: Bind{
				HTTP:    "",
				HTTPS:   "",
				Metrics: "",
			},
			err: ErrInvalidHostpost,
		},
	} {
		t.Run(tt.name, func(t *testing.T) {
			if tt.precondition != nil {
				tt.precondition(t)
			}

			if err := tt.bind.Valid(); !errors.Is(err, tt.err) {
				t.Logf("want: %v", tt.err)
				t.Logf("got: %v", err)
				t.Error("got wrong error from validation function")
			}
		})
	}
}
31  cmd/osiris/internal/config/config.go  Normal file
@@ -0,0 +1,31 @@
package config

import (
	"errors"
	"fmt"
)

type Toplevel struct {
	Bind    Bind     `hcl:"bind,block"`
	Domains []Domain `hcl:"domain,block"`
}

func (t *Toplevel) Valid() error {
	var errs []error

	if err := t.Bind.Valid(); err != nil {
		errs = append(errs, fmt.Errorf("invalid bind block:\n%w", err))
	}

	for _, d := range t.Domains {
		if err := d.Valid(); err != nil {
			errs = append(errs, fmt.Errorf("when parsing domain %s: %w", d.Name, err))
		}
	}

	if len(errs) != 0 {
		return fmt.Errorf("invalid configuration file:\n%w", errors.Join(errs...))
	}

	return nil
}
66  cmd/osiris/internal/config/domain.go  Normal file
@@ -0,0 +1,66 @@
package config

import (
	"errors"
	"fmt"
	"net/url"

	"golang.org/x/net/idna"
)

var (
	ErrInvalidDomainName      = errors.New("domain: name is invalid")
	ErrInvalidDomainTLSConfig = errors.New("domain: TLS config is invalid")
	ErrInvalidURL             = errors.New("invalid URL")
	ErrInvalidURLScheme       = errors.New("URL has invalid scheme")
)

type Domain struct {
	Name               string `hcl:"name,label"`
	TLS                TLS    `hcl:"tls,block"`
	Target             string `hcl:"target"`
	InsecureSkipVerify bool   `hcl:"insecure_skip_verify,optional"`
	HealthTarget       string `hcl:"health_target"`
}

func (d Domain) Valid() error {
	var errs []error

	if _, err := idna.Lookup.ToASCII(d.Name); err != nil {
		errs = append(errs, fmt.Errorf("%w %q: %w", ErrInvalidDomainName, d.Name, err))
	}

	if err := d.TLS.Valid(); err != nil {
		errs = append(errs, fmt.Errorf("%w: %w", ErrInvalidDomainTLSConfig, err))
	}

	if err := isURLValid(d.Target); err != nil {
		errs = append(errs, fmt.Errorf("target has %w %q: %w", ErrInvalidURL, d.Target, err))
	}

	if err := isURLValid(d.HealthTarget); err != nil {
		errs = append(errs, fmt.Errorf("health_target has %w %q: %w", ErrInvalidURL, d.HealthTarget, err))
	}

	if len(errs) != 0 {
		return errors.Join(errs...)
	}

	return nil
}

func isURLValid(input string) error {
	u, err := url.Parse(input)
	if err != nil {
		return err
	}

	switch u.Scheme {
	case "http", "https", "h2c", "unix":
		// do nothing
	default:
		return fmt.Errorf("%w %s has scheme %s (want http, https, h2c, unix)", ErrInvalidURLScheme, input, u.Scheme)
	}

	return nil
}
89  cmd/osiris/internal/config/domain_test.go  Normal file
@@ -0,0 +1,89 @@
package config

import (
	"errors"
	"testing"
)

func TestDomainValid(t *testing.T) {
	for _, tt := range []struct {
		name  string
		input Domain
		err   error
	}{
		{
			name: "simple happy path",
			input: Domain{
				Name: "anubis.techaro.lol",
				TLS: TLS{
					Cert: "./testdata/tls/selfsigned.crt",
					Key:  "./testdata/tls/selfsigned.key",
				},
				Target:       "http://localhost:3000",
				HealthTarget: "http://localhost:9091/healthz",
			},
		},
		{
			name: "invalid domain name",
			input: Domain{
				Name: "\uFFFD.techaro.lol",
				TLS: TLS{
					Cert: "./testdata/tls/selfsigned.crt",
					Key:  "./testdata/tls/selfsigned.key",
				},
				Target:       "http://localhost:3000",
				HealthTarget: "http://localhost:9091/healthz",
			},
			err: ErrInvalidDomainName,
		},
		{
			name: "invalid tls config",
			input: Domain{
				Name: "anubis.techaro.lol",
				TLS: TLS{
					Cert: "./testdata/tls/invalid.crt",
					Key:  "./testdata/tls/invalid.key",
				},
				Target:       "http://localhost:3000",
				HealthTarget: "http://localhost:9091/healthz",
			},
			err: ErrInvalidDomainTLSConfig,
		},
		{
			name: "invalid URL",
			input: Domain{
				Name: "anubis.techaro.lol",
				TLS: TLS{
					Cert: "./testdata/tls/selfsigned.crt",
					Key:  "./testdata/tls/selfsigned.key",
				},
				Target:       "file://[::1:3000",
				HealthTarget: "file://[::1:9091/healthz",
			},
			err: ErrInvalidURL,
		},
		{
			name: "wrong URL scheme",
			input: Domain{
				Name: "anubis.techaro.lol",
				TLS: TLS{
					Cert: "./testdata/tls/selfsigned.crt",
					Key:  "./testdata/tls/selfsigned.key",
				},
				Target:       "file://localhost:3000",
				HealthTarget: "file://localhost:9091/healthz",
			},
			err: ErrInvalidURLScheme,
		},
	} {
		t.Run(tt.name, func(t *testing.T) {
			if err := tt.input.Valid(); !errors.Is(err, tt.err) {
				t.Logf("want: %v", tt.err)
				t.Logf("got: %v", err)
				t.Error("got wrong error from validation function")
			} else {
				t.Log(err)
			}
		})
	}
}
1  cmd/osiris/internal/config/testdata/tls/invalid.crt  vendored  Normal file
@@ -0,0 +1 @@
aorsentaeiorsntoiearnstoieanrsoietnaioresntoeiar
1  cmd/osiris/internal/config/testdata/tls/invalid.key  vendored  Normal file
@@ -0,0 +1 @@
aorsentaeiorsntoiearnstoieanrsoietnaioresntoeiar
11  cmd/osiris/internal/config/testdata/tls/selfsigned.crt  vendored  Normal file
@@ -0,0 +1,11 @@
-----BEGIN CERTIFICATE-----
MIIBnzCCAVGgAwIBAgIUAw8funCpiB3ZAAPoWdSCWnzbsFIwBQYDK2VwMEUxCzAJ
BgNVBAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5l
dCBXaWRnaXRzIFB0eSBMdGQwHhcNMjUwNzE4MTkwMjM1WhcNMjUwODE3MTkwMjM1
WjBFMQswCQYDVQQGEwJBVTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwY
SW50ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMCowBQYDK2VwAyEAcXDHXV3vgpvjtTaz
s0Oj/73rMr06bhyGGhleYS1MNoWjUzBRMB0GA1UdDgQWBBQwmfKPthucFHB6Wfgz
2Nj5nkMQOjAfBgNVHSMEGDAWgBQwmfKPthucFHB6Wfgz2Nj5nkMQOjAPBgNVHRMB
Af8EBTADAQH/MAUGAytlcANBALBYbULlGwB7Ro0UTgUoQDNxEvayn3qzVFHIt7lC
/2/NzNBkk4yPT+a4mbRuydxLkv+JIvmQbarZxpksYnWlCAM=
-----END CERTIFICATE-----
3  cmd/osiris/internal/config/testdata/tls/selfsigned.key  vendored  Normal file
@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIOHKoX22Mha6SnnpLm34fSSfTUDbRiDCi6N1nOgTOlds
-----END PRIVATE KEY-----
40  cmd/osiris/internal/config/tls.go  Normal file
@@ -0,0 +1,40 @@
package config

import (
	"crypto/tls"
	"errors"
	"fmt"
	"os"
)

var (
	ErrCantReadTLS       = errors.New("tls: can't read TLS")
	ErrInvalidTLSKeypair = errors.New("tls: can't parse TLS keypair")
)

type TLS struct {
	Cert string `hcl:"cert"`
	Key  string `hcl:"key"`
}

func (t TLS) Valid() error {
	var errs []error

	if _, err := os.Stat(t.Cert); err != nil {
		errs = append(errs, fmt.Errorf("%w certificate %s: %w", ErrCantReadTLS, t.Cert, err))
	}

	if _, err := os.Stat(t.Key); err != nil {
		errs = append(errs, fmt.Errorf("%w key %s: %w", ErrCantReadTLS, t.Key, err))
	}

	if _, err := tls.LoadX509KeyPair(t.Cert, t.Key); err != nil {
		errs = append(errs, fmt.Errorf("%w (%s, %s): %w", ErrInvalidTLSKeypair, t.Cert, t.Key, err))
	}

	if len(errs) != 0 {
		return errors.Join(errs...)
	}

	return nil
}
48  cmd/osiris/internal/config/tls_test.go  Normal file
@@ -0,0 +1,48 @@
package config

import (
	"errors"
	"testing"
)

func TestTLSValid(t *testing.T) {
	for _, tt := range []struct {
		name  string
		input TLS
		err   error
	}{
		{
			name: "simple selfsigned",
			input: TLS{
				Cert: "./testdata/tls/selfsigned.crt",
				Key:  "./testdata/tls/selfsigned.key",
			},
		},
		{
			name: "files don't exist",
			input: TLS{
				Cert: "./testdata/tls/nonexistent.crt",
				Key:  "./testdata/tls/nonexistent.key",
			},
			err: ErrCantReadTLS,
		},
		{
			name: "invalid keypair",
			input: TLS{
				Cert: "./testdata/tls/invalid.crt",
				Key:  "./testdata/tls/invalid.key",
			},
			err: ErrInvalidTLSKeypair,
		},
	} {
		t.Run(tt.name, func(t *testing.T) {
			if err := tt.input.Valid(); !errors.Is(err, tt.err) {
				t.Logf("want: %v", tt.err)
				t.Logf("got: %v", err)
				t.Error("got wrong error from validation function")
			} else {
				t.Log(err)
			}
		})
	}
}
85  cmd/osiris/internal/entrypoint/entrypoint.go  Normal file
@@ -0,0 +1,85 @@
package entrypoint

import (
	"context"
	"fmt"
	"log/slog"
	"net"

	"github.com/TecharoHQ/anubis/cmd/osiris/internal/config"
	"github.com/TecharoHQ/anubis/internal"
	"github.com/hashicorp/hcl/v2/hclsimple"
	"golang.org/x/sync/errgroup"
	healthv1 "google.golang.org/grpc/health/grpc_health_v1"
)

type Options struct {
	ConfigFname string
}

func Main(ctx context.Context, opts Options) error {
	internal.SetHealth("osiris", healthv1.HealthCheckResponse_NOT_SERVING)

	var cfg config.Toplevel
	if err := hclsimple.DecodeFile(opts.ConfigFname, nil, &cfg); err != nil {
		return fmt.Errorf("can't read configuration file %s:\n\n%w", opts.ConfigFname, err)
	}

	if err := cfg.Valid(); err != nil {
		return fmt.Errorf("configuration file %s is invalid:\n\n%w", opts.ConfigFname, err)
	}

	rtr, err := NewRouter(cfg)
	if err != nil {
		return err
	}
	rtr.opts = opts
	go rtr.backgroundReloadConfig(ctx)

	g, gCtx := errgroup.WithContext(ctx)

	// HTTP
	g.Go(func() error {
		ln, err := net.Listen("tcp", cfg.Bind.HTTP)
		if err != nil {
			return fmt.Errorf("(HTTP) can't bind to tcp %s: %w", cfg.Bind.HTTP, err)
		}
		defer ln.Close()

		go func(ctx context.Context) {
			<-ctx.Done()
			ln.Close()
		}(ctx)

		slog.Info("listening", "for", "http", "bind", cfg.Bind.HTTP)

		return rtr.HandleHTTP(gCtx, ln)
	})

	// HTTPS
	g.Go(func() error {
		ln, err := net.Listen("tcp", cfg.Bind.HTTPS)
		if err != nil {
			return fmt.Errorf("(https) can't bind to tcp %s: %w", cfg.Bind.HTTPS, err)
		}
		defer ln.Close()

		go func(ctx context.Context) {
			<-ctx.Done()
			ln.Close()
		}(ctx)

		slog.Info("listening", "for", "https", "bind", cfg.Bind.HTTPS)

		return rtr.HandleHTTPS(gCtx, ln)
	})

	// Metrics
	g.Go(func() error {
		return rtr.ListenAndServeMetrics(gCtx, cfg.Bind.Metrics)
	})

	internal.SetHealth("osiris", healthv1.HealthCheckResponse_SERVING)

	return g.Wait()
}
93  cmd/osiris/internal/entrypoint/entrypoint_test.go  Normal file
@@ -0,0 +1,93 @@
package entrypoint

import (
	"context"
	"errors"
	"net"
	"net/http"
	"os"
	"path/filepath"
	"testing"
	"time"
)

func TestMainGoodConfig(t *testing.T) {
	files, err := os.ReadDir("./testdata/good")
	if err != nil {
		t.Fatal(err)
	}

	for _, st := range files {
		t.Run(st.Name(), func(t *testing.T) {
			ctx, cancel := context.WithCancel(t.Context())
			cfg := loadConfig(t, filepath.Join("testdata", "good", st.Name()))

			go func(ctx context.Context) {
				if err := Main(ctx, Options{
					ConfigFname: filepath.Join("testdata", "good", st.Name()),
				}); err != nil {
					var netOpErr *net.OpError
					switch {
					case errors.Is(err, context.Canceled):
						// Context was canceled, this is expected
						return
					case errors.As(err, &netOpErr):
						// Network operation error occurred
						t.Logf("Network operation error: %v", netOpErr)
						return
					case errors.Is(err, http.ErrServerClosed):
						// Server was closed, this is expected
						return
					default:
						// Other unexpected error
						panic(err)
					}
				}
			}(ctx)

			wait := 5 * time.Millisecond

			for i := range make([]struct{}, 10) {
				if i != 0 {
					time.Sleep(wait)
					wait = wait * 2
				}

				t.Logf("try %d (wait=%s)", i+1, wait)

				resp, err := http.Get("http://localhost" + cfg.Bind.Metrics + "/readyz")
				if err != nil {
					continue
				}

				if resp.StatusCode != http.StatusOK {
					continue
				}

				cancel()
				return
			}

			t.Fatal("router initialization did not work")
		})
	}
}

func TestMainBadConfig(t *testing.T) {
	files, err := os.ReadDir("./testdata/bad")
	if err != nil {
		t.Fatal(err)
	}

	for _, st := range files {
		t.Run(st.Name(), func(t *testing.T) {
			if err := Main(t.Context(), Options{
				ConfigFname: filepath.Join("testdata", "bad", st.Name()),
			}); err == nil {
				t.Error("wanted an error but got none")
			} else {
				t.Log(err)
			}
		})
	}
}
35  cmd/osiris/internal/entrypoint/h2c.go  Normal file
@@ -0,0 +1,35 @@
package entrypoint

import (
	"crypto/tls"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"

	"golang.org/x/net/http2"
)

func newH2CReverseProxy(target *url.URL) *httputil.ReverseProxy {
	target.Scheme = "http"

	director := func(req *http.Request) {
		req.URL.Scheme = target.Scheme
		req.URL.Host = target.Host
		req.Host = target.Host
	}

	// Use h2c transport
	transport := &http2.Transport{
		AllowHTTP: true,
		DialTLS: func(network, addr string, cfg *tls.Config) (net.Conn, error) {
			// Just do plain TCP (h2c)
			return net.Dial(network, addr)
		},
	}

	return &httputil.ReverseProxy{
		Director:  director,
		Transport: transport,
	}
}
51  cmd/osiris/internal/entrypoint/h2c_test.go  Normal file
@@ -0,0 +1,51 @@
package entrypoint

import (
	"net/http"
	"net/http/httptest"
	"net/url"
	"testing"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

func newH2cServer(t *testing.T, h http.Handler) *httptest.Server {
	t.Helper()

	h2s := &http2.Server{}

	srv := httptest.NewServer(h2c.NewHandler(h, h2s))
	t.Cleanup(func() {
		srv.Close()
	})

	return srv
}

func TestH2CReverseProxy(t *testing.T) {
	h := &ackHandler{}

	srv := newH2cServer(t, h)

	u, err := url.Parse(srv.URL)
	if err != nil {
		t.Fatal(err)
	}

	rp := httptest.NewServer(newH2CReverseProxy(u))
	defer rp.Close()

	resp, err := rp.Client().Get(rp.URL)
	if err != nil {
		t.Fatal(err)
	}

	if resp.StatusCode != http.StatusOK {
		t.Errorf("wrong status code from reverse proxy: %d", resp.StatusCode)
	}

	if !h.ack {
		t.Error("h2c handler was not executed")
	}
}
72  cmd/osiris/internal/entrypoint/metrics.go  Normal file
@@ -0,0 +1,72 @@
package entrypoint

import (
	"bytes"
	"fmt"
	"log/slog"
	"net/http"
	"sort"

	"github.com/TecharoHQ/anubis/internal"
	healthv1 "google.golang.org/grpc/health/grpc_health_v1"
)

func healthz(w http.ResponseWriter, r *http.Request) {
	services, err := internal.HealthSrv.List(r.Context(), nil)
	if err != nil {
		slog.Error("can't get list of services", "err", err)
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	var keys []string
	for k := range services.Statuses {
		if k == "" {
			continue
		}
		keys = append(keys, k)
	}

	sort.Strings(keys)

	var msg bytes.Buffer

	var healthy bool = true

	for _, k := range keys {
		st := services.Statuses[k].GetStatus()
		fmt.Fprintf(&msg, "%s: %s\n", k, st)
		switch st {
		case healthv1.HealthCheckResponse_SERVING:
			// do nothing
		default:
			healthy = false
		}
	}

	if !healthy {
		w.WriteHeader(http.StatusInternalServerError)
	}

	w.Write(msg.Bytes())
}

func readyz(w http.ResponseWriter, r *http.Request) {
	st, ok := internal.GetHealth("osiris")
	if !ok {
		slog.Error("health service osiris does not exist, file a bug")
		http.Error(w, "health service osiris does not exist", http.StatusExpectationFailed)
	}

	switch st {
	case healthv1.HealthCheckResponse_NOT_SERVING:
		http.Error(w, "NOT OK", http.StatusInternalServerError)
		return
	case healthv1.HealthCheckResponse_SERVING:
		fmt.Fprintln(w, "OK")
		return
	default:
		http.Error(w, "UNKNOWN", http.StatusFailedDependency)
		return
	}
}
66  cmd/osiris/internal/entrypoint/metrics_test.go  Normal file
@@ -0,0 +1,66 @@
package entrypoint

import (
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/TecharoHQ/anubis/internal"
	healthv1 "google.golang.org/grpc/health/grpc_health_v1"
)

func TestHealthz(t *testing.T) {
	srv := httptest.NewServer(http.HandlerFunc(healthz))

	internal.SetHealth("osiris", healthv1.HealthCheckResponse_NOT_SERVING)

	resp, err := srv.Client().Get(srv.URL)
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusOK {
		t.Errorf("wanted not ready but got %d", resp.StatusCode)
	}

	internal.SetHealth("osiris", healthv1.HealthCheckResponse_SERVING)

	resp, err = srv.Client().Get(srv.URL)
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		t.Errorf("wanted ready but got %d", resp.StatusCode)
	}
}

func TestReadyz(t *testing.T) {
	srv := httptest.NewServer(http.HandlerFunc(readyz))

	internal.SetHealth("osiris", healthv1.HealthCheckResponse_NOT_SERVING)

	resp, err := srv.Client().Get(srv.URL)
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusOK {
		t.Errorf("wanted not ready but got %d", resp.StatusCode)
	}

	internal.SetHealth("osiris", healthv1.HealthCheckResponse_SERVING)

	resp, err = srv.Client().Get(srv.URL)
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		t.Errorf("wanted ready but got %d", resp.StatusCode)
	}
}
320  cmd/osiris/internal/entrypoint/router.go  Normal file
@@ -0,0 +1,320 @@
package entrypoint

import (
	"context"
	"crypto/tls"
	"errors"
	"fmt"
	"log/slog"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
	"os"
	"os/signal"
	"strings"
	"sync"
	"syscall"
	"time"

	"github.com/TecharoHQ/anubis/cmd/osiris/internal/config"
	"github.com/TecharoHQ/anubis/internal"
	"github.com/TecharoHQ/anubis/internal/fingerprint"
	"github.com/felixge/httpsnoop"
	"github.com/hashicorp/hcl/v2/hclsimple"
	"github.com/lum8rjack/go-ja4h"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
	ErrTargetInvalid     = errors.New("[unexpected] target invalid")
	ErrNoHandler         = errors.New("[unexpected] no handler for domain")
	ErrInvalidTLSKeypair = errors.New("[unexpected] invalid TLS keypair")
	ErrNoCert            = errors.New("this server does not have a certificate for that domain")

	requestsPerDomain = promauto.NewGaugeVec(prometheus.GaugeOpts{
		Namespace: "techaro",
		Subsystem: "osiris",
		Name:      "request_count",
	}, []string{"domain", "method", "response_code"})

	responseTime = promauto.NewHistogramVec(prometheus.HistogramOpts{
		Namespace: "techaro",
		Subsystem: "osiris",
		Name:      "response_time",
	}, []string{"domain"})

	unresolvedRequests = promauto.NewGauge(prometheus.GaugeOpts{
		Namespace: "techaro",
		Subsystem: "osiris",
		Name:      "unresolved_requests",
	})
)

type Router struct {
	lock     sync.RWMutex
	routes   map[string]http.Handler
	tlsCerts map[string]*tls.Certificate
	opts     Options
}

func (rtr *Router) setConfig(c config.Toplevel) error {
	var errs []error
	newMap := map[string]http.Handler{}
	newCerts := map[string]*tls.Certificate{}

	for _, d := range c.Domains {
		var domainErrs []error

		u, err := url.Parse(d.Target)
		if err != nil {
			domainErrs = append(domainErrs, fmt.Errorf("%w %q: %v", ErrTargetInvalid, d.Target, err))
		}

		var h http.Handler

		if u != nil {
			switch u.Scheme {
			case "http", "https":
				rp := httputil.NewSingleHostReverseProxy(u)

				if d.InsecureSkipVerify {
					rp.Transport = &http.Transport{
						TLSClientConfig: &tls.Config{
							InsecureSkipVerify: true,
						},
					}
				}

				h = rp
			case "h2c":
				h = newH2CReverseProxy(u)
			case "unix":
				h = &httputil.ReverseProxy{
					Director: func(r *http.Request) {
						r.URL.Scheme = "http"
						r.URL.Host = d.Name
						r.Host = d.Name
					},
					Transport: &http.Transport{
						DialContext: func(_ context.Context, _, _ string) (net.Conn, error) {
							return net.Dial("unix", strings.TrimPrefix(d.Target, "unix://"))
						},
					},
				}
			}
		}

		if h == nil {
			domainErrs = append(domainErrs, ErrNoHandler)
		}

		newMap[d.Name] = h

		cert, err := tls.LoadX509KeyPair(d.TLS.Cert, d.TLS.Key)
		if err != nil {
			domainErrs = append(domainErrs, fmt.Errorf("%w: %w", ErrInvalidTLSKeypair, err))
		}

		newCerts[d.Name] = &cert

		if len(domainErrs) != 0 {
			errs = append(errs, fmt.Errorf("invalid domain %s: %w", d.Name, errors.Join(domainErrs...)))
		}
	}

	if len(errs) != 0 {
		return fmt.Errorf("can't compile config to routing map: %w", errors.Join(errs...))
	}

	rtr.lock.Lock()
	rtr.routes = newMap
	rtr.tlsCerts = newCerts
	rtr.lock.Unlock()

	return nil
}

func (rtr *Router) GetCertificate(hello *tls.ClientHelloInfo) (*tls.Certificate, error) {
	rtr.lock.RLock()
	cert, ok := rtr.tlsCerts[hello.ServerName]
	rtr.lock.RUnlock()

	if !ok {
		return nil, ErrNoCert
	}

	return cert, nil
}

func (rtr *Router) loadConfig() error {
	slog.Info("reloading config", "fname", rtr.opts.ConfigFname)
	var cfg config.Toplevel
	if err := hclsimple.DecodeFile(rtr.opts.ConfigFname, nil, &cfg); err != nil {
		return err
	}

	if err := cfg.Valid(); err != nil {
		return err
	}

	if err := rtr.setConfig(cfg); err != nil {
		return err
	}

	slog.Info("done!")

	return nil
}

func (rtr *Router) backgroundReloadConfig(ctx context.Context) {
	t := time.NewTicker(time.Hour)
	defer t.Stop()
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGHUP)

	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			if err := rtr.loadConfig(); err != nil {
				slog.Error("can't reload config", "fname", rtr.opts.ConfigFname, "err", err)
			}
		case <-ch:
			if err := rtr.loadConfig(); err != nil {
				slog.Error("can't reload config", "fname", rtr.opts.ConfigFname, "err", err)
			}
		}
	}
}

func NewRouter(c config.Toplevel) (*Router, error) {
	result := &Router{
		routes: map[string]http.Handler{},
	}

	if err := result.setConfig(c); err != nil {
		return nil, err
	}

	return result, nil
}

func (rtr *Router) HandleHTTP(ctx context.Context, ln net.Listener) error {
	srv := http.Server{
		Handler:  rtr,
		ErrorLog: internal.GetFilteredHTTPLogger(),
	}

	go func(ctx context.Context) {
		<-ctx.Done()
		srv.Close()
	}(ctx)

	return srv.Serve(ln)
}

func (rtr *Router) HandleHTTPS(ctx context.Context, ln net.Listener) error {
	tc := &tls.Config{
		GetCertificate: rtr.GetCertificate,
	}

	srv := &http.Server{
		Handler:   rtr,
		ErrorLog:  internal.GetFilteredHTTPLogger(),
		TLSConfig: tc,
	}

	go func(ctx context.Context) {
		<-ctx.Done()
		srv.Close()
	}(ctx)

	fingerprint.ApplyTLSFingerprinter(srv)

	return srv.ServeTLS(ln, "", "")
}

func (rtr *Router) ListenAndServeMetrics(ctx context.Context, addr string) error {
	ln, err := net.Listen("tcp", addr)
|
||||
if err != nil {
|
||||
return fmt.Errorf("(metrics) can't bind to tcp %s: %w", addr, err)
|
||||
}
|
||||
defer ln.Close()
|
||||
|
||||
go func(ctx context.Context) {
|
||||
<-ctx.Done()
|
||||
ln.Close()
|
||||
}(ctx)
|
||||
|
||||
mux := http.NewServeMux()
|
||||
|
||||
mux.Handle("/metrics", promhttp.Handler())
|
||||
mux.HandleFunc("/readyz", readyz)
|
||||
mux.HandleFunc("/healthz", healthz)
|
||||
|
||||
slog.Info("listening", "for", "metrics", "bind", addr)
|
||||
|
||||
srv := http.Server{
|
||||
Addr: addr,
|
||||
Handler: mux,
|
||||
ErrorLog: internal.GetFilteredHTTPLogger(),
|
||||
}
|
||||
|
||||
go func(ctx context.Context) {
|
||||
<-ctx.Done()
|
||||
srv.Close()
|
||||
}(ctx)
|
||||
|
||||
return srv.Serve(ln)
|
||||
}
|
||||
|
||||
func (rtr *Router) ServeHTTP(w http.ResponseWriter, r *http.Request) {
|
||||
var host = r.Host
|
||||
|
||||
if strings.Contains(host, ":") {
|
||||
host, _, _ = net.SplitHostPort(host)
|
||||
}
|
||||
|
||||
var h http.Handler
|
||||
var ok bool
|
||||
|
||||
ja4hFP := ja4h.JA4H(r)
|
||||
|
||||
slog.Info("got request", "method", r.Method, "host", host, "path", r.URL.Path)
|
||||
|
||||
rtr.lock.RLock()
|
||||
h, ok = rtr.routes[host]
|
||||
rtr.lock.RUnlock()
|
||||
|
||||
if !ok {
|
||||
unresolvedRequests.Inc()
|
||||
http.NotFound(w, r) // TODO(Xe): brand this
|
||||
return
|
||||
}
|
||||
|
||||
r.Header.Set("X-Http-Ja4h-Fingerprint", ja4hFP)
|
||||
|
||||
if fp := fingerprint.GetTLSFingerprint(r); fp != nil {
|
||||
if ja3n := fp.JA3N(); ja3n != nil {
|
||||
r.Header.Set("X-Tls-Ja3n-Fingerprint", ja3n.String())
|
||||
}
|
||||
if ja4 := fp.JA4(); ja4 != nil {
|
||||
r.Header.Set("X-Tls-Ja4-Fingerprint", ja4.String())
|
||||
}
|
||||
}
|
||||
|
||||
if tcpFP := fingerprint.GetTCPFingerprint(r); tcpFP != nil {
|
||||
r.Header.Set("X-Tcp-Ja4t-Fingerprint", tcpFP.String())
|
||||
}
|
||||
|
||||
m := httpsnoop.CaptureMetrics(h, w, r)
|
||||
|
||||
requestsPerDomain.WithLabelValues(host, r.Method, fmt.Sprint(m.Code)).Inc()
|
||||
responseTime.WithLabelValues(host).Observe(float64(m.Duration.Milliseconds()))
|
||||
|
||||
slog.Info("request completed", "host", host, "method", r.Method, "response_code", m.Code, "duration_ms", m.Duration.Milliseconds())
|
||||
}
|
||||
cmd/osiris/internal/entrypoint/router_test.go (new file, 319 lines)
@@ -0,0 +1,319 @@
package entrypoint

import (
	"context"
	"crypto/tls"
	"errors"
	"fmt"
	"net"
	"net/http"
	"net/http/httptest"
	"os"
	"path/filepath"
	"strings"
	"testing"

	"github.com/TecharoHQ/anubis/cmd/osiris/internal/config"
	"github.com/hashicorp/hcl/v2/hclsimple"
)

func loadConfig(t *testing.T, fname string) config.Toplevel {
	t.Helper()

	var cfg config.Toplevel
	if err := hclsimple.DecodeFile(fname, nil, &cfg); err != nil {
		t.Fatalf("can't read configuration file %s: %v", fname, err)
	}

	if err := cfg.Valid(); err != nil {
		t.Errorf("configuration file %s is invalid: %v", fname, err)
	}

	return cfg
}

func newRouter(t *testing.T, cfg config.Toplevel) *Router {
	t.Helper()

	rtr, err := NewRouter(cfg)
	if err != nil {
		t.Fatal(err)
	}

	return rtr
}

func TestNewRouter(t *testing.T) {
	cfg := loadConfig(t, "./testdata/good/selfsigned.hcl")
	rtr := newRouter(t, cfg)

	srv := httptest.NewServer(rtr)
	defer srv.Close()
}

func TestNewRouterFails(t *testing.T) {
	cfg := loadConfig(t, "./testdata/good/selfsigned.hcl")

	cfg.Domains = append(cfg.Domains, config.Domain{
		Name: "test1.internal",
		TLS: config.TLS{
			Cert: "./testdata/tls/invalid.crt",
			Key:  "./testdata/tls/invalid.key",
		},
		Target:       cfg.Domains[0].Target,
		HealthTarget: cfg.Domains[0].HealthTarget,
	})

	rtr, err := NewRouter(cfg)
	if err == nil {
		t.Fatal("wanted an error but got none")
	}

	srv := httptest.NewServer(rtr)
	defer srv.Close()
}

func TestRouterSetConfig(t *testing.T) {
	for _, tt := range []struct {
		name        string
		configFname string
		mutation    func(cfg config.Toplevel) config.Toplevel
		err         error
	}{
		{
			name:        "basic",
			configFname: "./testdata/good/selfsigned.hcl",
			mutation: func(cfg config.Toplevel) config.Toplevel {
				return cfg
			},
		},
		{
			name:        "all schemes",
			configFname: "./testdata/good/selfsigned.hcl",
			mutation: func(cfg config.Toplevel) config.Toplevel {
				cfg.Domains = append(cfg.Domains, config.Domain{
					Name:         "http.internal",
					TLS:          cfg.Domains[0].TLS,
					Target:       "http://[::1]:3000",
					HealthTarget: cfg.Domains[0].HealthTarget,
				})
				cfg.Domains = append(cfg.Domains, config.Domain{
					Name:         "https.internal",
					TLS:          cfg.Domains[0].TLS,
					Target:       "https://[::1]:3000",
					HealthTarget: cfg.Domains[0].HealthTarget,
				})
				cfg.Domains = append(cfg.Domains, config.Domain{
					Name:         "h2c.internal",
					TLS:          cfg.Domains[0].TLS,
					Target:       "h2c://[::1]:3000",
					HealthTarget: cfg.Domains[0].HealthTarget,
				})
				cfg.Domains = append(cfg.Domains, config.Domain{
					Name:         "unix.internal",
					TLS:          cfg.Domains[0].TLS,
					Target:       "unix://foo.sock",
					HealthTarget: cfg.Domains[0].HealthTarget,
				})

				return cfg
			},
		},
		{
			name:        "invalid TLS",
			configFname: "./testdata/good/selfsigned.hcl",
			mutation: func(cfg config.Toplevel) config.Toplevel {
				cfg.Domains = append(cfg.Domains, config.Domain{
					Name: "test1.internal",
					TLS: config.TLS{
						Cert: "./testdata/tls/invalid.crt",
						Key:  "./testdata/tls/invalid.key",
					},
					Target:       cfg.Domains[0].Target,
					HealthTarget: cfg.Domains[0].HealthTarget,
				})

				return cfg
			},
			err: ErrInvalidTLSKeypair,
		},
		{
			name:        "target is not a valid URL",
			configFname: "./testdata/good/selfsigned.hcl",
			mutation: func(cfg config.Toplevel) config.Toplevel {
				cfg.Domains = append(cfg.Domains, config.Domain{
					Name:         "test1.internal",
					TLS:          cfg.Domains[0].TLS,
					Target:       "http://[::1:443",
					HealthTarget: cfg.Domains[0].HealthTarget,
				})

				return cfg
			},
			err: ErrTargetInvalid,
		},
		{
			name:        "invalid target scheme",
			configFname: "./testdata/good/selfsigned.hcl",
			mutation: func(cfg config.Toplevel) config.Toplevel {
				cfg.Domains = append(cfg.Domains, config.Domain{
					Name:         "test1.internal",
					TLS:          cfg.Domains[0].TLS,
					Target:       "foo://",
					HealthTarget: cfg.Domains[0].HealthTarget,
				})

				return cfg
			},
			err: ErrNoHandler,
		},
	} {
		t.Run(tt.name, func(t *testing.T) {
			cfg := loadConfig(t, tt.configFname)
			rtr := newRouter(t, cfg)

			cfg = tt.mutation(cfg)

			if err := rtr.setConfig(cfg); !errors.Is(err, tt.err) {
				t.Logf("want: %v", tt.err)
				t.Logf("got: %v", err)
				t.Error("got wrong error from rtr.setConfig function")
			}
		})
	}
}

type ackHandler struct {
	ack bool
}

func (ah *ackHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	ah.ack = true
	fmt.Fprintln(w, "OK")
}

func (ah *ackHandler) Reset() {
	ah.ack = false
}

func newUnixServer(t *testing.T, h http.Handler) string {
	sockName := filepath.Join(t.TempDir(), "s")
	ln, err := net.Listen("unix", sockName)
	if err != nil {
		t.Fatalf("can't listen on %s: %v", sockName, err)
	}
	t.Cleanup(func() {
		ln.Close()
		os.Remove(sockName)
	})

	go func(ctx context.Context) {
		srv := &http.Server{
			Handler: h,
		}

		go func() {
			<-ctx.Done()
			srv.Close()
		}()

		srv.Serve(ln)
	}(t.Context())

	return "unix://" + sockName
}

func TestRouterGetCertificate(t *testing.T) {
	cfg := loadConfig(t, "./testdata/good/selfsigned.hcl")
	rtr := newRouter(t, cfg)

	for _, tt := range []struct {
		domainName string
		err        error
	}{
		{
			domainName: "osiris.local.cetacean.club",
		},
		{
			domainName: "whacky-fun.local",
			err:        ErrNoCert,
		},
	} {
		t.Run(tt.domainName, func(t *testing.T) {
			if _, err := rtr.GetCertificate(&tls.ClientHelloInfo{ServerName: tt.domainName}); !errors.Is(err, tt.err) {
				t.Logf("want: %v", tt.err)
				t.Logf("got: %v", err)
				t.Error("got wrong error from rtr.GetCertificate")
			}
		})
	}
}

func TestRouterServeAllProtocols(t *testing.T) {
	cfg := loadConfig(t, "./testdata/good/all_protocols.hcl")

	httpAckHandler := &ackHandler{}
	httpsAckHandler := &ackHandler{}
	h2cAckHandler := &ackHandler{}
	unixAckHandler := &ackHandler{}

	httpSrv := httptest.NewServer(httpAckHandler)
	httpsSrv := httptest.NewTLSServer(httpsAckHandler)
	h2cSrv := newH2cServer(t, h2cAckHandler)
	unixPath := newUnixServer(t, unixAckHandler)

	cfg.Domains[0].Target = httpSrv.URL
	cfg.Domains[1].Target = httpsSrv.URL
	cfg.Domains[2].Target = strings.ReplaceAll(h2cSrv.URL, "http:", "h2c:")
	cfg.Domains[3].Target = unixPath

	// enc := json.NewEncoder(os.Stderr)
	// enc.SetIndent("", "  ")
	// enc.Encode(cfg)

	rtr := newRouter(t, cfg)

	cli := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				InsecureSkipVerify: true,
			},
		},
	}

	t.Run("plain http", func(t *testing.T) {
		ln, err := net.Listen("tcp", ":0")
		if err != nil {
			t.Fatal(err)
		}
		t.Cleanup(func() {
			ln.Close()
		})

		go rtr.HandleHTTP(t.Context(), ln)

		serverURL := "http://" + ln.Addr().String()
		t.Log(serverURL)

		for _, d := range cfg.Domains {
			t.Run(d.Name, func(t *testing.T) {
				req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, serverURL, nil)
				if err != nil {
					t.Fatal(err)
				}

				req.Host = d.Name

				resp, err := cli.Do(req)
				if err != nil {
					t.Fatal(err)
				}
				defer resp.Body.Close()

				if resp.StatusCode != http.StatusOK {
					t.Fatalf("wrong status code %d", resp.StatusCode)
				}
			})
		}
	})
}
cmd/osiris/internal/entrypoint/testdata/bad/empty.hcl (vendored, new file, 0 lines)
cmd/osiris/internal/entrypoint/testdata/bad/invalid.hcl (vendored, new file, 15 lines)
@@ -0,0 +1,15 @@
bind {
  http    = ":65530"
  https   = ":65531"
  metrics = ":65532"
}

domain "osiris.local.cetacean.club" {
  tls {
    cert = "./testdata/invalid.crt"
    key  = "./testdata/invalid.key"
  }

  target        = "http://localhost:3000"
  health_target = "http://localhost:9091/healthz"
}
cmd/osiris/internal/entrypoint/testdata/good/all_protocols.hcl (vendored, new file, 46 lines)
@@ -0,0 +1,46 @@
bind {
  http    = ":65520"
  https   = ":65521"
  metrics = ":65522"
}

domain "http.internal" {
  tls {
    cert = "./testdata/selfsigned.crt"
    key  = "./testdata/selfsigned.key"
  }

  target        = "http://localhost:65510" # XXX(Xe) this is overwritten
  health_target = "http://localhost:9091/healthz"
}

domain "https.internal" {
  tls {
    cert = "./testdata/selfsigned.crt"
    key  = "./testdata/selfsigned.key"
  }

  target               = "https://localhost:65511" # XXX(Xe) this is overwritten
  insecure_skip_verify = true
  health_target        = "http://localhost:9091/healthz"
}

domain "h2c.internal" {
  tls {
    cert = "./testdata/selfsigned.crt"
    key  = "./testdata/selfsigned.key"
  }

  target        = "h2c://localhost:65511" # XXX(Xe) this is overwritten
  health_target = "http://localhost:9091/healthz"
}

domain "unix.internal" {
  tls {
    cert = "./testdata/selfsigned.crt"
    key  = "./testdata/selfsigned.key"
  }

  target        = "http://localhost:65511" # XXX(Xe) this is overwritten
  health_target = "http://localhost:9091/healthz"
}
cmd/osiris/internal/entrypoint/testdata/good/selfsigned.hcl (vendored, new file, 15 lines)
@@ -0,0 +1,15 @@
bind {
  http    = ":65530"
  https   = ":65531"
  metrics = ":65532"
}

domain "osiris.local.cetacean.club" {
  tls {
    cert = "./testdata/selfsigned.crt"
    key  = "./testdata/selfsigned.key"
  }

  target        = "http://localhost:3000"
  health_target = "http://localhost:9091/healthz"
}
cmd/osiris/internal/entrypoint/testdata/selfsigned.crt (vendored, new file, 11 lines)
@@ -0,0 +1,11 @@
-----BEGIN CERTIFICATE-----
MIIBnzCCAVGgAwIBAgIUOLTjSYOjFk00IemtFTC4oEZs988wBQYDK2VwMEUxCzAJ
BgNVBAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5l
dCBXaWRnaXRzIFB0eSBMdGQwHhcNMjUwNzE4MjEyNDIzWhcNMjUwODE3MjEyNDIz
WjBFMQswCQYDVQQGEwJBVTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwY
SW50ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMCowBQYDK2VwAyEAPHphABS15+4VV6R1
vYzBQYIycQmOmlbA8QcfwzuB2VajUzBRMB0GA1UdDgQWBBT2s+MQ4AR6cbK4V0+d
XZnok1orhDAfBgNVHSMEGDAWgBT2s+MQ4AR6cbK4V0+dXZnok1orhDAPBgNVHRMB
Af8EBTADAQH/MAUGAytlcANBAOdoJbRMnHmkEETzVtXP+jkAI9yQNRXujnglApGP
8I5pvIYVgYCgoQrnb4haVWFldHM1T9H698n19e/egfFb+w4=
-----END CERTIFICATE-----
cmd/osiris/internal/entrypoint/testdata/selfsigned.key (vendored, new file, 3 lines)
@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIBop42tiZ0yzhaKo9NAc0PlAyBsE8NAE0i9Z7s2lgZuR
-----END PRIVATE KEY-----
cmd/osiris/main.go (new file, 43 lines)
@@ -0,0 +1,43 @@
package main

import (
	"context"
	"flag"
	"fmt"
	"os"
	"os/signal"
	"syscall"

	"github.com/TecharoHQ/anubis"
	"github.com/TecharoHQ/anubis/cmd/osiris/internal/entrypoint"
	"github.com/TecharoHQ/anubis/internal"
	"github.com/facebookgo/flagenv"
)

var (
	configFname = flag.String("config", "./osiris.hcl", "Configuration file (HCL), see docs")
	slogLevel   = flag.String("slog-level", "INFO", "logging level (see https://pkg.go.dev/log/slog#hdr-Levels)")
	versionFlag = flag.Bool("version", false, "if true, show version information then quit")
)

func main() {
	flagenv.Parse()
	flag.Parse()

	if *versionFlag {
		fmt.Println("Osiris", anubis.Version)
		return
	}

	internal.InitSlog(*slogLevel)

	ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer cancel()

	if err := entrypoint.Main(ctx, entrypoint.Options{
		ConfigFname: *configFname,
	}); err != nil {
		fmt.Fprintf(os.Stderr, "%v\n", err)
		os.Exit(1)
	}
}
cmd/osiris/osiris.hcl (new file, 15 lines)
@@ -0,0 +1,15 @@
bind {
  http    = ":3004"
  https   = ":3005"
  metrics = ":9091"
}

domain "osiris.local.cetacean.club" {
  tls {
    cert = "./internal/config/testdata/tls/selfsigned.crt"
    key  = "./internal/config/testdata/tls/selfsigned.key"
  }

  target        = "http://localhost:3000"
  health_target = "http://localhost:9091/healthz"
}
@@ -12,7 +12,7 @@ import (
 	"regexp"
 	"strings"
 
-	"github.com/TecharoHQ/anubis/lib/config"
+	"github.com/TecharoHQ/anubis/lib/policy/config"
 
 	"sigs.k8s.io/yaml"
 )
@@ -29,7 +29,7 @@ var (
 )
 
 type RobotsRule struct {
-	UserAgents []string
+	UserAgent  string
 	Disallows  []string
 	Allows     []string
 	CrawlDelay int
@@ -130,26 +130,10 @@ func main() {
 	}
 }
 
-func createRuleFromAccumulated(userAgents, disallows, allows []string, crawlDelay int) RobotsRule {
-	rule := RobotsRule{
-		UserAgents: make([]string, len(userAgents)),
-		Disallows:  make([]string, len(disallows)),
-		Allows:     make([]string, len(allows)),
-		CrawlDelay: crawlDelay,
-	}
-	copy(rule.UserAgents, userAgents)
-	copy(rule.Disallows, disallows)
-	copy(rule.Allows, allows)
-	return rule
-}
-
 func parseRobotsTxt(input io.Reader) ([]RobotsRule, error) {
 	scanner := bufio.NewScanner(input)
 	var rules []RobotsRule
-	var currentUserAgents []string
-	var currentDisallows []string
-	var currentAllows []string
-	var currentCrawlDelay int
+	var currentRule *RobotsRule
 
 	for scanner.Scan() {
 		line := strings.TrimSpace(scanner.Text())
@@ -170,42 +154,38 @@ func parseRobotsTxt(input io.Reader) ([]RobotsRule, error) {
 
 		switch directive {
 		case "user-agent":
-			// If we have accumulated rules with directives and encounter a new user-agent,
-			// flush the current rules
-			if len(currentUserAgents) > 0 && (len(currentDisallows) > 0 || len(currentAllows) > 0 || currentCrawlDelay > 0) {
-				rule := createRuleFromAccumulated(currentUserAgents, currentDisallows, currentAllows, currentCrawlDelay)
-				rules = append(rules, rule)
-				// Reset for next group
-				currentUserAgents = nil
-				currentDisallows = nil
-				currentAllows = nil
-				currentCrawlDelay = 0
+			// Start a new rule section
+			if currentRule != nil {
+				rules = append(rules, *currentRule)
 			}
-			currentUserAgents = append(currentUserAgents, value)
+			currentRule = &RobotsRule{
+				UserAgent: value,
+				Disallows: make([]string, 0),
+				Allows:    make([]string, 0),
+			}
 
 		case "disallow":
-			if len(currentUserAgents) > 0 && value != "" {
-				currentDisallows = append(currentDisallows, value)
+			if currentRule != nil && value != "" {
+				currentRule.Disallows = append(currentRule.Disallows, value)
 			}
 
 		case "allow":
-			if len(currentUserAgents) > 0 && value != "" {
-				currentAllows = append(currentAllows, value)
+			if currentRule != nil && value != "" {
+				currentRule.Allows = append(currentRule.Allows, value)
 			}
 
 		case "crawl-delay":
-			if len(currentUserAgents) > 0 {
+			if currentRule != nil {
 				if delay, err := parseIntSafe(value); err == nil {
-					currentCrawlDelay = delay
+					currentRule.CrawlDelay = delay
 				}
 			}
 		}
 	}
 
-	// Don't forget the last group of rules
-	if len(currentUserAgents) > 0 {
-		rule := createRuleFromAccumulated(currentUserAgents, currentDisallows, currentAllows, currentCrawlDelay)
-		rules = append(rules, rule)
+	// Don't forget the last rule
+	if currentRule != nil {
+		rules = append(rules, *currentRule)
 	}
 
 	// Mark blacklisted user agents (those with "Disallow: /")
@@ -231,11 +211,10 @@ func convertToAnubisRules(robotsRules []RobotsRule) []AnubisRule {
 	var anubisRules []AnubisRule
 	ruleCounter := 0
 
 	// Process each robots rule individually
 	for _, robotsRule := range robotsRules {
-		userAgents := robotsRule.UserAgents
+		userAgent := robotsRule.UserAgent
 
-		// Handle crawl delay
+		// Handle crawl delay as weight adjustment (do this first before any continues)
 		if robotsRule.CrawlDelay > 0 && *crawlDelay > 0 {
 			ruleCounter++
 			rule := AnubisRule{
@@ -244,32 +223,20 @@ func convertToAnubisRules(robotsRules []RobotsRule) []AnubisRule {
 				Weight: &config.Weight{Adjust: *crawlDelay},
 			}
 
-			if len(userAgents) == 1 && userAgents[0] == "*" {
+			if userAgent == "*" {
 				rule.Expression = &config.ExpressionOrList{
 					All: []string{"true"}, // Always applies
 				}
-			} else if len(userAgents) == 1 {
-				rule.Expression = &config.ExpressionOrList{
-					All: []string{fmt.Sprintf("userAgent.contains(%q)", userAgents[0])},
-				}
 			} else {
-				// Multiple user agents - use any block
-				var expressions []string
-				for _, ua := range userAgents {
-					if ua == "*" {
-						expressions = append(expressions, "true")
-					} else {
-						expressions = append(expressions, fmt.Sprintf("userAgent.contains(%q)", ua))
-					}
-				}
 				rule.Expression = &config.ExpressionOrList{
-					Any: expressions,
+					All: []string{fmt.Sprintf("userAgent.contains(%q)", userAgent)},
 				}
 			}
 
 			anubisRules = append(anubisRules, rule)
 		}
 
-		// Handle blacklisted user agents
+		// Handle blacklisted user agents (complete deny/challenge)
 		if robotsRule.IsBlacklist {
 			ruleCounter++
 			rule := AnubisRule{
@@ -277,36 +244,21 @@ func convertToAnubisRules(robotsRules []RobotsRule) []AnubisRule {
 				Action: *userAgentDeny,
 			}
 
-			if len(userAgents) == 1 {
-				userAgent := userAgents[0]
-				if userAgent == "*" {
-					// This would block everything - convert to a weight adjustment instead
-					rule.Name = fmt.Sprintf("%s-global-restriction-%d", *policyName, ruleCounter)
-					rule.Action = "WEIGH"
-					rule.Weight = &config.Weight{Adjust: 20} // Increase difficulty significantly
-					rule.Expression = &config.ExpressionOrList{
-						All: []string{"true"}, // Always applies
-					}
-				} else {
-					rule.Expression = &config.ExpressionOrList{
-						All: []string{fmt.Sprintf("userAgent.contains(%q)", userAgent)},
-					}
-				}
+			if userAgent == "*" {
+				// This would block everything - convert to a weight adjustment instead
+				rule.Name = fmt.Sprintf("%s-global-restriction-%d", *policyName, ruleCounter)
+				rule.Action = "WEIGH"
+				rule.Weight = &config.Weight{Adjust: 20} // Increase difficulty significantly
+				rule.Expression = &config.ExpressionOrList{
+					All: []string{"true"}, // Always applies
+				}
 			} else {
-				// Multiple user agents - use any block
-				var expressions []string
-				for _, ua := range userAgents {
-					if ua == "*" {
-						expressions = append(expressions, "true")
-					} else {
-						expressions = append(expressions, fmt.Sprintf("userAgent.contains(%q)", ua))
-					}
-				}
 				rule.Expression = &config.ExpressionOrList{
-					Any: expressions,
+					All: []string{fmt.Sprintf("userAgent.contains(%q)", userAgent)},
 				}
 			}
 			anubisRules = append(anubisRules, rule)
 			continue
 		}
 
 		// Handle specific disallow rules
@@ -324,33 +276,9 @@ func convertToAnubisRules(robotsRules []RobotsRule) []AnubisRule {
 			// Build CEL expression
 			var conditions []string
 
-			// Add user agent conditions
-			if len(userAgents) == 1 && userAgents[0] == "*" {
-				// Wildcard user agent - no user agent condition needed
-			} else if len(userAgents) == 1 {
-				conditions = append(conditions, fmt.Sprintf("userAgent.contains(%q)", userAgents[0]))
-			} else {
-				// For multiple user agents, we need to use a more complex expression
-				// This is a limitation - we can't easily combine any for user agents with all for path
-				// So we'll create separate rules for each user agent
-				for _, ua := range userAgents {
-					if ua == "*" {
-						continue // Skip wildcard as it's handled separately
-					}
-					ruleCounter++
-					subRule := AnubisRule{
-						Name:   fmt.Sprintf("%s-disallow-%d", *policyName, ruleCounter),
-						Action: *baseAction,
-						Expression: &config.ExpressionOrList{
-							All: []string{
-								fmt.Sprintf("userAgent.contains(%q)", ua),
-								buildPathCondition(disallow),
-							},
-						},
-					}
-					anubisRules = append(anubisRules, subRule)
-				}
-				continue
+			// Add user agent condition if not wildcard
+			if userAgent != "*" {
+				conditions = append(conditions, fmt.Sprintf("userAgent.contains(%q)", userAgent))
 			}
 
 			// Add path condition
@@ -363,6 +291,7 @@ func convertToAnubisRules(robotsRules []RobotsRule) []AnubisRule {
 
 			anubisRules = append(anubisRules, rule)
 		}
+
 	}
 
 	return anubisRules
@@ -22,9 +22,9 @@ type TestCase struct {
 type TestOptions struct {
 	format           string
 	action           string
-	crawlDelayWeight int
 	policyName       string
 	deniedAction     string
+	crawlDelayWeight int
 }
 
 func TestDataFileConversion(t *testing.T) {
@@ -78,12 +78,6 @@ func TestDataFileConversion(t *testing.T) {
 			expectedFile: "complex.yaml",
 			options:      TestOptions{format: "yaml", crawlDelayWeight: 5},
 		},
-		{
-			name:         "consecutive_user_agents",
-			robotsFile:   "consecutive.robots.txt",
-			expectedFile: "consecutive.yaml",
-			options:      TestOptions{format: "yaml", crawlDelayWeight: 3},
-		},
 	}
 
 	for _, tc := range testCases {
cmd/robots2policy/testdata/blacklist.yaml (vendored, 6 changed lines)
@@ -25,6 +25,6 @@
 - action: CHALLENGE
   expression:
     all:
-    - userAgent.contains("Googlebot")
-    - path.startsWith("/search")
-  name: robots-txt-policy-disallow-7
+      - userAgent.contains("Googlebot")
+      - path.startsWith("/search")
+  name: robots-txt-policy-disallow-7
cmd/robots2policy/testdata/complex.yaml (vendored, 24 changed lines)
@@ -20,8 +20,8 @@
 - action: CHALLENGE
   expression:
     all:
-    - userAgent.contains("Googlebot")
-    - path.startsWith("/search/")
+      - userAgent.contains("Googlebot")
+      - path.startsWith("/search/")
   name: robots-txt-policy-disallow-6
 - action: WEIGH
   expression: userAgent.contains("Bingbot")
@@ -31,14 +31,14 @@
 - action: CHALLENGE
   expression:
     all:
-    - userAgent.contains("Bingbot")
-    - path.startsWith("/search/")
+      - userAgent.contains("Bingbot")
+      - path.startsWith("/search/")
   name: robots-txt-policy-disallow-8
 - action: CHALLENGE
   expression:
     all:
-    - userAgent.contains("Bingbot")
-    - path.startsWith("/admin/")
+      - userAgent.contains("Bingbot")
+      - path.startsWith("/admin/")
   name: robots-txt-policy-disallow-9
 - action: DENY
   expression: userAgent.contains("BadBot")
@@ -54,18 +54,18 @@
 - action: CHALLENGE
   expression:
     all:
-    - userAgent.contains("TestBot")
-    - path.matches("^/.*/admin")
+      - userAgent.contains("TestBot")
+      - path.matches("^/.*/admin")
   name: robots-txt-policy-disallow-13
 - action: CHALLENGE
   expression:
     all:
-    - userAgent.contains("TestBot")
-    - path.matches("^/temp.*\\.html")
+      - userAgent.contains("TestBot")
+      - path.matches("^/temp.*\\.html")
   name: robots-txt-policy-disallow-14
 - action: CHALLENGE
   expression:
     all:
-    - userAgent.contains("TestBot")
-    - path.matches("^/file.\\.log")
+      - userAgent.contains("TestBot")
+      - path.matches("^/file.\\.log")
   name: robots-txt-policy-disallow-15
@@ -1,25 +0,0 @@
# Test consecutive user agents that should be grouped into any: blocks
User-agent: *
Disallow: /admin
Crawl-delay: 10

# Multiple consecutive user agents - should be grouped
User-agent: BadBot
User-agent: SpamBot
User-agent: EvilBot
Disallow: /

# Single user agent - should be separate
User-agent: GoodBot
Disallow: /private

# Multiple consecutive user agents with crawl delay
User-agent: SlowBot1
User-agent: SlowBot2
Crawl-delay: 5

# Multiple consecutive user agents with specific path
User-agent: SearchBot1
User-agent: SearchBot2
User-agent: SearchBot3
Disallow: /search
cmd/robots2policy/testdata/consecutive.yaml (vendored) | 47
@@ -1,47 +0,0 @@
- action: WEIGH
expression: "true"
name: robots-txt-policy-crawl-delay-1
weight:
adjust: 3
- action: CHALLENGE
expression: path.startsWith("/admin")
name: robots-txt-policy-disallow-2
- action: DENY
expression:
any:
- userAgent.contains("BadBot")
- userAgent.contains("SpamBot")
- userAgent.contains("EvilBot")
name: robots-txt-policy-blacklist-3
- action: CHALLENGE
expression:
all:
- userAgent.contains("GoodBot")
- path.startsWith("/private")
name: robots-txt-policy-disallow-4
- action: WEIGH
expression:
any:
- userAgent.contains("SlowBot1")
- userAgent.contains("SlowBot2")
name: robots-txt-policy-crawl-delay-5
weight:
adjust: 3
- action: CHALLENGE
expression:
all:
- userAgent.contains("SearchBot1")
- path.startsWith("/search")
name: robots-txt-policy-disallow-7
- action: CHALLENGE
expression:
all:
- userAgent.contains("SearchBot2")
- path.startsWith("/search")
name: robots-txt-policy-disallow-8
- action: CHALLENGE
expression:
all:
- userAgent.contains("SearchBot3")
- path.startsWith("/search")
name: robots-txt-policy-disallow-9
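The test fixtures above encode the grouping rule named in their comments: consecutive `User-agent` lines in robots.txt share the directives that follow them, so a converter can OR them together into one `any:` block (e.g. BadBot/SpamBot/EvilBot become a single blacklist rule). A minimal sketch of that grouping step, independent of the actual robots2policy implementation:

```python
# Illustrative sketch only -- not the real robots2policy code.
# Consecutive User-agent lines are collected until a directive appears;
# the next User-agent line after directives starts a new group.

def group_user_agents(robots_txt: str):
    """Return a list of (agents, directives) groups from robots.txt text."""
    groups = []
    agents, directives = [], []
    for raw in robots_txt.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip().lower(), value.strip()
        if key == "user-agent":
            if directives:  # a new agent after directives starts a new group
                groups.append((agents, directives))
                agents, directives = [], []
            agents.append(value)
        else:
            directives.append((key, value))
    if agents or directives:
        groups.append((agents, directives))
    return groups

example = """\
User-agent: BadBot
User-agent: SpamBot
Disallow: /

User-agent: GoodBot
Disallow: /private
"""

for agents, directives in group_user_agents(example):
    print(agents, directives)
```

Each group with more than one agent maps naturally to an `any:` expression over `userAgent.contains(...)` checks, as in the expected YAML above.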
cmd/robots2policy/testdata/simple.json (vendored) | 8
@@ -1,12 +1,12 @@
[
{
"action": "CHALLENGE",
"expression": "path.startsWith(\"/admin/\")",
"name": "robots-txt-policy-disallow-1",
"action": "CHALLENGE"
"name": "robots-txt-policy-disallow-1"
},
{
"action": "CHALLENGE",
"expression": "path.startsWith(\"/private\")",
"name": "robots-txt-policy-disallow-2",
"action": "CHALLENGE"
"name": "robots-txt-policy-disallow-2"
}
]
data/botPolicies.json (Normal file) | 29
@@ -0,0 +1,29 @@
{
"bots": [
{
"import": "(data)/bots/_deny-pathological.yaml"
},
{
"import": "(data)/meta/ai-block-aggressive.yaml"
},
{
"import": "(data)/crawlers/_allow-good.yaml"
},
{
"import": "(data)/bots/aggressive-brazilian-scrapers.yaml"
},
{
"import": "(data)/common/keep-internet-working.yaml"
},
{
"name": "generic-browser",
"user_agent_regex": "Mozilla|Opera",
"action": "CHALLENGE"
}
],
"dnsbl": false,
"status_codes": {
"CHALLENGE": 200,
"DENY": 200
}
}
@@ -11,12 +11,9 @@
## /usr/share/docs/anubis/data or in the tarball you extracted Anubis from.

bots:
# You can import the entire default config with this macro:
# - import: (data)/meta/default-config.yaml

# Pathological bots to deny
- # This correlates to data/bots/_deny-pathological.yaml in the source tree
# https://github.com/TecharoHQ/anubis/blob/main/data/bots/_deny-pathological.yaml
- # This correlates to data/bots/deny-pathological.yaml in the source tree
# https://github.com/TecharoHQ/anubis/blob/main/data/bots/deny-pathological.yaml
import: (data)/bots/_deny-pathological.yaml
- import: (data)/bots/aggressive-brazilian-scrapers.yaml
@@ -50,7 +47,8 @@ bots:
# user_agent_regex: (?i:bot|crawler)
# action: CHALLENGE
# challenge:
# difficulty: 16 # impossible
# difficulty: 16 # impossible
# report_as: 4 # lie to the operator
# algorithm: slow # intentionally waste CPU cycles and time

# Requires a subscription to Thoth to use, see
@@ -95,53 +93,6 @@ bots:
# weight:
# adjust: -10

# Assert behaviour that only genuine browsers display. This ensures that Chrome
# or Firefox versions
- name: realistic-browser-catchall
expression:
all:
- '"User-Agent" in headers'
- '( userAgent.contains("Firefox") ) || ( userAgent.contains("Chrome") ) || ( userAgent.contains("Safari") )'
- '"Accept" in headers'
- '"Sec-Fetch-Dest" in headers'
- '"Sec-Fetch-Mode" in headers'
- '"Sec-Fetch-Site" in headers'
- '"Accept-Encoding" in headers'
- '( headers["Accept-Encoding"].contains("zstd") || headers["Accept-Encoding"].contains("br") )'
- '"Accept-Language" in headers'
action: WEIGH
weight:
adjust: -10

# The Upgrade-Insecure-Requests header is typically sent by browsers, but not always
- name: upgrade-insecure-requests
expression: '"Upgrade-Insecure-Requests" in headers'
action: WEIGH
weight:
adjust: -2

# Chrome should behave like Chrome
- name: chrome-is-proper
expression:
all:
- userAgent.contains("Chrome")
- '"Sec-Ch-Ua" in headers'
- 'headers["Sec-Ch-Ua"].contains("Chromium")'
- '"Sec-Ch-Ua-Mobile" in headers'
- '"Sec-Ch-Ua-Platform" in headers'
action: WEIGH
weight:
adjust: -5

- name: should-have-accept
expression:
all:
- userAgent.contains("Mozilla")
- '!("Accept" in headers)'
action: WEIGH
weight:
adjust: 5

# Generic catchall rule
- name: generic-browser
user_agent_regex: >-
@@ -251,6 +202,7 @@ thresholds:
# https://anubis.techaro.lol/docs/admin/configuration/challenges/metarefresh
algorithm: metarefresh
difficulty: 1
report_as: 1
# For clients that are browser-like but have either gained points from custom rules or
# report as a standard browser.
- name: moderate-suspicion
@@ -263,21 +215,13 @@ thresholds:
# https://anubis.techaro.lol/docs/admin/configuration/challenges/proof-of-work
algorithm: fast
difficulty: 2 # two leading zeros, very fast for most clients
- name: mild-proof-of-work
expression:
all:
- weight >= 20
- weight < 30
report_as: 2
# For clients that are browser like and have gained many points from custom rules
- name: extreme-suspicion
expression: weight >= 20
action: CHALLENGE
challenge:
# https://anubis.techaro.lol/docs/admin/configuration/challenges/proof-of-work
algorithm: fast
difficulty: 4
# For clients that are browser like and have gained many points from custom rules
- name: extreme-suspicion
expression: weight >= 30
action: CHALLENGE
challenge:
# https://anubis.techaro.lol/docs/admin/configuration/challenges/proof-of-work
algorithm: fast
difficulty: 6
report_as: 4
@@ -1,6 +1,3 @@
- import: (data)/bots/cloudflare-workers.yaml
- import: (data)/bots/headless-browsers.yaml
- import: (data)/bots/us-ai-scraper.yaml
- import: (data)/bots/custom-async-http-client.yaml
- import: (data)/crawlers/alibaba-cloud.yaml
- import: (data)/crawlers/huawei-cloud.yaml
- import: (data)/bots/us-ai-scraper.yaml
@@ -4,5 +4,5 @@
# CCBot is allowed because if Common Crawl is allowed, then scrapers don't need to scrape to get the data.
- name: "ai-robots-txt"
user_agent_regex: >-
AddSearchBot|AI2Bot|Ai2Bot-Dolma|aiHitBot|Amazonbot|Andibot|anthropic-ai|Applebot|Applebot-Extended|Awario|bedrockbot|bigsur.ai|Brightbot 1.0|Bytespider|CCBot|ChatGPT Agent|ChatGPT-User|Claude-SearchBot|Claude-User|Claude-Web|ClaudeBot|CloudVertexBot|cohere-ai|cohere-training-data-crawler|Cotoyogi|Crawlspace|Datenbank Crawler|Devin|Diffbot|DuckAssistBot|Echobot Bot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Gemini-Deep-Research|Google-CloudVertexBot|Google-Extended|GoogleAgent-Mariner|GoogleOther|GoogleOther-Image|GoogleOther-Video|GPTBot|iaskspider/2.0|ICC-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo Bot|LinerBot|meta-externalagent|Meta-ExternalAgent|meta-externalfetcher|Meta-ExternalFetcher|MistralAI-User|MistralAI-User/1.0|MyCentralAIScraperBot|netEstate Imprint Crawler|NovaAct|OAI-SearchBot|omgili|omgilibot|OpenAI|Operator|PanguBot|Panscient|panscient.com|Perplexity-User|PerplexityBot|PetalBot|PhindBot|Poseidon Research Crawler|QualifiedBot|QuillBot|quillbot.com|SBIntuitionsBot|Scrapy|SemrushBot-OCOB|SemrushBot-SWA|Sidetrade indexer bot|Thinkbot|TikTokSpider|Timpibot|VelenPublicWebCrawler|WARDBot|Webzio-Extended|wpbot|YaK|YandexAdditional|YandexAdditionalBot|YouBot
AI2Bot|Ai2Bot-Dolma|aiHitBot|Amazonbot|Andibot|anthropic-ai|Applebot|Applebot-Extended|bedrockbot|Brightbot 1.0|Bytespider|ChatGPT-User|Claude-SearchBot|Claude-User|Claude-Web|ClaudeBot|cohere-ai|cohere-training-data-crawler|Cotoyogi|Crawlspace|Diffbot|DuckAssistBot|EchoboxBot|FacebookBot|facebookexternalhit|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Google-CloudVertexBot|Google-Extended|GoogleOther|GoogleOther-Image|GoogleOther-Video|GPTBot|iaskspider/2.0|ICC-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo Bot|meta-externalagent|Meta-ExternalAgent|meta-externalfetcher|Meta-ExternalFetcher|MistralAI-User/1.0|MyCentralAIScraperBot|NovaAct|OAI-SearchBot|omgili|omgilibot|Operator|PanguBot|Panscient|panscient.com|Perplexity-User|PerplexityBot|PetalBot|PhindBot|Poseidon Research Crawler|QualifiedBot|QuillBot|quillbot.com|SBIntuitionsBot|Scrapy|SemrushBot|SemrushBot-BA|SemrushBot-CT|SemrushBot-OCOB|SemrushBot-SI|SemrushBot-SWA|Sidetrade indexer bot|TikTokSpider|Timpibot|VelenPublicWebCrawler|Webzio-Extended|wpbot|YandexAdditional|YandexAdditionalBot|YouBot
action: DENY
@@ -1,5 +0,0 @@
- name: "custom-async-http-client"
user_agent_regex: "Custom-AsyncHttpClient"
action: WEIGH
weight:
adjust: 10
@@ -1,60 +0,0 @@
- name: allow-docker-client
action: ALLOW
expression:
all:
- path.startsWith("/v2/")
- userAgent.contains("docker/")
- userAgent.contains("git-commit/")
- '"Accept" in headers'
- headers["Accept"].contains("vnd.docker.distribution")
- '"Baggage" in headers'
- headers["Baggage"].contains("trigger")

- name: allow-crane-client
action: ALLOW
expression:
all:
- userAgent.contains("crane/")
- userAgent.contains("go-containerregistry/")

- name: allow-docker-distribution-api-client
action: ALLOW
expression:
all:
- '"Docker-Distribution-Api-Version" in headers'
- '!(userAgent.contains("Mozilla"))'

- name: allow-go-containerregistry-client
action: ALLOW
expression:
all:
- path.startsWith("/v2/")
- userAgent.contains("go-containerregistry/")

- name: allow-buildah
action: ALLOW
expression:
all:
- path.startsWith("/v2/")
- userAgent.contains("Buildah/")

- name: allow-podman
action: ALLOW
expression:
all:
- path.startsWith("/v2/")
- userAgent.contains("containers/")

- name: allow-containerd
action: ALLOW
expression:
all:
- path.startsWith("/v2/")
- userAgent.contains("containerd/")

- name: allow-renovate
action: ALLOW
expression:
all:
- path.startsWith("/v2/")
- userAgent.contains("Renovate/")
@@ -2,19 +2,13 @@
action: ALLOW
expression:
all:
- >
(
userAgent.startsWith("git/") ||
userAgent.contains("libgit") ||
userAgent.startsWith("go-git") ||
userAgent.startsWith("JGit/") ||
userAgent.startsWith("JGit-")
)
- '"Accept" in headers'
- headers["Accept"] == "*/*"
- '"Cache-Control" in headers'
- headers["Cache-Control"] == "no-cache"
- '"Pragma" in headers'
- headers["Pragma"] == "no-cache"
- '"Accept-Encoding" in headers'
- headers["Accept-Encoding"].contains("gzip")
- >
(
userAgent.startsWith("git/") ||
userAgent.contains("libgit") ||
userAgent.startsWith("go-git") ||
userAgent.startsWith("JGit/") ||
userAgent.startsWith("JGit-")
)
- '"Git-Protocol" in headers'
- headers["Git-Protocol"] == "version=2"
@@ -1,6 +0,0 @@
- name: telegrambot
action: ALLOW
expression:
all:
- userAgent.matches("TelegramBot")
- verifyFCrDNS(remoteAddress, "ptr\\.telegram\\.org$")
@@ -1,6 +0,0 @@
- name: vkbot
action: ALLOW
expression:
all:
- userAgent.matches("vkShare[^+]+\\+http\\://vk\\.com/dev/Share")
- verifyFCrDNS(remoteAddress, "^snipster\\d+\\.go\\.mail\\.ru$")
@@ -1,13 +1,13 @@
# Common "keeping the internet working" routes
- name: well-known
path_regex: ^/\.well-known/.*$
path_regex: ^/.well-known/.*$
action: ALLOW
- name: favicon
path_regex: ^/favicon\.(?:ico|png|gif|jpg|jpeg|svg)$
path_regex: ^/favicon.ico$
action: ALLOW
- name: robots-txt
path_regex: ^/robots\.txt$
path_regex: ^/robots.txt$
action: ALLOW
- name: sitemap
path_regex: ^/sitemap\.xml$
action: ALLOW
path_regex: ^/sitemap.xml$
action: ALLOW
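The hunk above tightens these path regexes by escaping the literal dots (e.g. `^/robots.txt$` becomes `^/robots\.txt$`). The difference matters because an unescaped `.` is a regex wildcard; a short demonstration:

```python
import re

# Unescaped dot: "." matches any character, so the loose pattern
# also matches paths it should not. Escaping pins it to a literal ".".
loose = re.compile(r"^/robots.txt$")
strict = re.compile(r"^/robots\.txt$")

print(bool(loose.match("/robots.txt")))    # both match the real path
print(bool(strict.match("/robots.txt")))
print(bool(loose.match("/robotsXtxt")))    # unintended match slips through
print(bool(strict.match("/robotsXtxt")))   # escaped version rejects it
```

The same reasoning applies to the `well-known`, `favicon`, and `sitemap` rules in this hunk.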
@@ -8,4 +8,3 @@
- import: (data)/crawlers/marginalia.yaml
- import: (data)/crawlers/mojeekbot.yaml
- import: (data)/crawlers/commoncrawl.yaml
- import: (data)/crawlers/yandexbot.yaml
@@ -1,881 +0,0 @@
- name: alibaba-cloud
action: DENY
# Updated 2025-08-20 from IP addresses for AS45102
remote_addresses:
- 103.81.186.0/23
|
||||
- 110.76.21.0/24
|
||||
- 110.76.23.0/24
|
||||
- 116.251.64.0/18
|
||||
- 139.95.0.0/23
|
||||
- 139.95.10.0/23
|
||||
- 139.95.12.0/23
|
||||
- 139.95.14.0/23
|
||||
- 139.95.16.0/23
|
||||
- 139.95.18.0/23
|
||||
- 139.95.2.0/23
|
||||
- 139.95.4.0/23
|
||||
- 139.95.6.0/23
|
||||
- 139.95.64.0/24
|
||||
- 139.95.8.0/23
|
||||
- 14.1.112.0/22
|
||||
- 14.1.115.0/24
|
||||
- 140.205.1.0/24
|
||||
- 140.205.122.0/24
|
||||
- 147.139.0.0/17
|
||||
- 147.139.0.0/18
|
||||
- 147.139.128.0/17
|
||||
- 147.139.128.0/18
|
||||
- 147.139.155.0/24
|
||||
- 147.139.192.0/18
|
||||
- 147.139.64.0/18
|
||||
- 149.129.0.0/20
|
||||
- 149.129.0.0/21
|
||||
- 149.129.16.0/23
|
||||
- 149.129.192.0/18
|
||||
- 149.129.192.0/19
|
||||
- 149.129.224.0/19
|
||||
- 149.129.32.0/19
|
||||
- 149.129.64.0/18
|
||||
- 149.129.64.0/19
|
||||
- 149.129.8.0/21
|
||||
- 149.129.96.0/19
|
||||
- 156.227.20.0/24
|
||||
- 156.236.12.0/24
|
||||
- 156.236.17.0/24
|
||||
- 156.240.76.0/23
|
||||
- 156.245.1.0/24
|
||||
- 161.117.0.0/16
|
||||
- 161.117.0.0/17
|
||||
- 161.117.126.0/24
|
||||
- 161.117.127.0/24
|
||||
- 161.117.128.0/17
|
||||
- 161.117.128.0/24
|
||||
- 161.117.129.0/24
|
||||
- 161.117.138.0/24
|
||||
- 161.117.143.0/24
|
||||
- 170.33.104.0/24
|
||||
- 170.33.105.0/24
|
||||
- 170.33.106.0/24
|
||||
- 170.33.107.0/24
|
||||
- 170.33.136.0/24
|
||||
- 170.33.137.0/24
|
||||
- 170.33.138.0/24
|
||||
- 170.33.20.0/24
|
||||
- 170.33.21.0/24
|
||||
- 170.33.22.0/24
|
||||
- 170.33.23.0/24
|
||||
- 170.33.24.0/24
|
||||
- 170.33.29.0/24
|
||||
- 170.33.30.0/24
|
||||
- 170.33.31.0/24
|
||||
- 170.33.32.0/24
|
||||
- 170.33.33.0/24
|
||||
- 170.33.34.0/24
|
||||
- 170.33.35.0/24
|
||||
- 170.33.64.0/24
|
||||
- 170.33.65.0/24
|
||||
- 170.33.66.0/24
|
||||
- 170.33.68.0/24
|
||||
- 170.33.69.0/24
|
||||
- 170.33.72.0/24
|
||||
- 170.33.73.0/24
|
||||
- 170.33.76.0/24
|
||||
- 170.33.77.0/24
|
||||
- 170.33.78.0/24
|
||||
- 170.33.79.0/24
|
||||
- 170.33.80.0/24
|
||||
- 170.33.81.0/24
|
||||
- 170.33.82.0/24
|
||||
- 170.33.83.0/24
|
||||
- 170.33.84.0/24
|
||||
- 170.33.85.0/24
|
||||
- 170.33.86.0/24
|
||||
- 170.33.88.0/24
|
||||
- 170.33.90.0/24
|
||||
- 170.33.92.0/24
|
||||
- 170.33.93.0/24
|
||||
- 185.78.106.0/23
|
||||
- 198.11.128.0/18
|
||||
- 198.11.137.0/24
|
||||
- 198.11.184.0/21
|
||||
- 202.144.199.0/24
|
||||
- 203.107.64.0/24
|
||||
- 203.107.65.0/24
|
||||
- 203.107.66.0/24
|
||||
- 203.107.67.0/24
|
||||
- 203.107.68.0/24
|
||||
- 205.204.102.0/23
|
||||
- 205.204.111.0/24
|
||||
- 205.204.117.0/24
|
||||
- 205.204.125.0/24
|
||||
- 205.204.96.0/19
|
||||
- 223.5.5.0/24
|
||||
- 223.6.6.0/24
|
||||
- 2400:3200::/48
|
||||
- 2400:3200:baba::/48
|
||||
- 2400:b200:4100::/48
|
||||
- 2400:b200:4101::/48
|
||||
- 2400:b200:4102::/48
|
||||
- 2400:b200:4103::/48
|
||||
- 2401:8680:4100::/48
|
||||
- 2401:b180:4100::/48
|
||||
- 2404:2280:1000::/36
|
||||
- 2404:2280:1000::/37
|
||||
- 2404:2280:1800::/37
|
||||
- 2404:2280:2000::/36
|
||||
- 2404:2280:2000::/37
|
||||
- 2404:2280:2800::/37
|
||||
- 2404:2280:3000::/36
|
||||
- 2404:2280:3000::/37
|
||||
- 2404:2280:3800::/37
|
||||
- 2404:2280:4000::/36
|
||||
- 2404:2280:4000::/37
|
||||
- 2404:2280:4800::/37
|
||||
- 2408:4000:1000::/48
|
||||
- 2408:4009:500::/48
|
||||
- 240b:4000::/32
|
||||
- 240b:4000::/33
|
||||
- 240b:4000:8000::/33
|
||||
- 240b:4000:fffe::/48
|
||||
- 240b:4001::/32
|
||||
- 240b:4001::/33
|
||||
- 240b:4001:8000::/33
|
||||
- 240b:4002::/32
|
||||
- 240b:4002::/33
|
||||
- 240b:4002:8000::/33
|
||||
- 240b:4004::/32
|
||||
- 240b:4004::/33
|
||||
- 240b:4004:8000::/33
|
||||
- 240b:4005::/32
|
||||
- 240b:4005::/33
|
||||
- 240b:4005:8000::/33
|
||||
- 240b:4006::/48
|
||||
- 240b:4006:1000::/44
|
||||
- 240b:4006:1000::/45
|
||||
- 240b:4006:1000::/47
|
||||
- 240b:4006:1002::/47
|
||||
- 240b:4006:1008::/45
|
||||
- 240b:4006:1010::/44
|
||||
- 240b:4006:1010::/45
|
||||
- 240b:4006:1018::/45
|
||||
- 240b:4006:1020::/44
|
||||
- 240b:4006:1020::/45
|
||||
- 240b:4006:1028::/45
|
||||
- 240b:4007::/32
|
||||
- 240b:4007::/33
|
||||
- 240b:4007:8000::/33
|
||||
- 240b:4009::/32
|
||||
- 240b:4009::/33
|
||||
- 240b:4009:8000::/33
|
||||
- 240b:400b::/32
|
||||
- 240b:400b::/33
|
||||
- 240b:400b:8000::/33
|
||||
- 240b:400c::/32
|
||||
- 240b:400c::/33
|
||||
- 240b:400c::/40
|
||||
- 240b:400c::/41
|
||||
- 240b:400c:100::/40
|
||||
- 240b:400c:100::/41
|
||||
- 240b:400c:180::/41
|
||||
- 240b:400c:80::/41
|
||||
- 240b:400c:8000::/33
|
||||
- 240b:400c:f00::/48
|
||||
- 240b:400c:f01::/48
|
||||
- 240b:400c:ffff::/48
|
||||
- 240b:400d::/32
|
||||
- 240b:400d::/33
|
||||
- 240b:400d:8000::/33
|
||||
- 240b:400e::/32
|
||||
- 240b:400e::/33
|
||||
- 240b:400e:8000::/33
|
||||
- 240b:400f::/32
|
||||
- 240b:400f::/33
|
||||
- 240b:400f:8000::/33
|
||||
- 240b:4011::/32
|
||||
- 240b:4011::/33
|
||||
- 240b:4011:8000::/33
|
||||
- 240b:4012::/48
|
||||
- 240b:4013::/32
|
||||
- 240b:4013::/33
|
||||
- 240b:4013:8000::/33
|
||||
- 240b:4014::/32
|
||||
- 240b:4014::/33
|
||||
- 240b:4014:8000::/33
|
||||
- 43.100.0.0/15
|
||||
- 43.100.0.0/16
|
||||
- 43.101.0.0/16
|
||||
- 43.102.0.0/20
|
||||
- 43.102.112.0/20
|
||||
- 43.102.16.0/20
|
||||
- 43.102.32.0/20
|
||||
- 43.102.48.0/20
|
||||
- 43.102.64.0/20
|
||||
- 43.102.80.0/20
|
||||
- 43.102.96.0/20
|
||||
- 43.103.0.0/17
|
||||
- 43.103.0.0/18
|
||||
- 43.103.64.0/18
|
||||
- 43.104.0.0/15
|
||||
- 43.104.0.0/16
|
||||
- 43.105.0.0/16
|
||||
- 43.108.0.0/17
|
||||
- 43.108.0.0/18
|
||||
- 43.108.64.0/18
|
||||
- 43.91.0.0/16
|
||||
- 43.91.0.0/17
|
||||
- 43.91.128.0/17
|
||||
- 43.96.10.0/24
|
||||
- 43.96.100.0/24
|
||||
- 43.96.101.0/24
|
||||
- 43.96.102.0/24
|
||||
- 43.96.104.0/24
|
||||
- 43.96.11.0/24
|
||||
- 43.96.20.0/24
|
||||
- 43.96.21.0/24
|
||||
- 43.96.23.0/24
|
||||
- 43.96.24.0/24
|
||||
- 43.96.25.0/24
|
||||
- 43.96.3.0/24
|
||||
- 43.96.32.0/24
|
||||
- 43.96.33.0/24
|
||||
- 43.96.34.0/24
|
||||
- 43.96.35.0/24
|
||||
- 43.96.4.0/24
|
||||
- 43.96.40.0/24
|
||||
- 43.96.5.0/24
|
||||
- 43.96.52.0/24
|
||||
- 43.96.6.0/24
|
||||
- 43.96.66.0/24
|
||||
- 43.96.67.0/24
|
||||
- 43.96.68.0/24
|
||||
- 43.96.69.0/24
|
||||
- 43.96.7.0/24
|
||||
- 43.96.70.0/24
|
||||
- 43.96.71.0/24
|
||||
- 43.96.72.0/24
|
||||
- 43.96.73.0/24
|
||||
- 43.96.74.0/24
|
||||
- 43.96.75.0/24
|
||||
- 43.96.8.0/24
|
||||
- 43.96.80.0/24
|
||||
- 43.96.81.0/24
|
||||
- 43.96.84.0/24
|
||||
- 43.96.85.0/24
|
||||
- 43.96.86.0/24
|
||||
- 43.96.88.0/24
|
||||
- 43.96.9.0/24
|
||||
- 43.96.96.0/24
|
||||
- 43.98.0.0/16
|
||||
- 43.98.0.0/17
|
||||
- 43.98.128.0/17
|
||||
- 43.99.0.0/16
|
||||
- 43.99.0.0/17
|
||||
- 43.99.128.0/17
|
||||
- 45.199.179.0/24
|
||||
- 47.235.0.0/22
|
||||
- 47.235.0.0/23
|
||||
- 47.235.1.0/24
|
||||
- 47.235.10.0/23
|
||||
- 47.235.10.0/24
|
||||
- 47.235.11.0/24
|
||||
- 47.235.12.0/23
|
||||
- 47.235.12.0/24
|
||||
- 47.235.13.0/24
|
||||
- 47.235.16.0/23
|
||||
- 47.235.16.0/24
|
||||
- 47.235.18.0/23
|
||||
- 47.235.18.0/24
|
||||
- 47.235.19.0/24
|
||||
- 47.235.2.0/23
|
||||
- 47.235.20.0/24
|
||||
- 47.235.21.0/24
|
||||
- 47.235.22.0/24
|
||||
- 47.235.23.0/24
|
||||
- 47.235.24.0/22
|
||||
- 47.235.24.0/23
|
||||
- 47.235.26.0/23
|
||||
- 47.235.28.0/23
|
||||
- 47.235.28.0/24
|
||||
- 47.235.29.0/24
|
||||
- 47.235.30.0/24
|
||||
- 47.235.31.0/24
|
||||
- 47.235.4.0/24
|
||||
- 47.235.5.0/24
|
||||
- 47.235.6.0/23
|
||||
- 47.235.6.0/24
|
||||
- 47.235.7.0/24
|
||||
- 47.235.8.0/24
|
||||
- 47.235.9.0/24
|
||||
- 47.236.0.0/15
|
||||
- 47.236.0.0/16
|
||||
- 47.237.0.0/16
|
||||
- 47.237.32.0/20
|
||||
- 47.237.34.0/24
|
||||
- 47.238.0.0/15
|
||||
- 47.238.0.0/16
|
||||
- 47.239.0.0/16
|
||||
- 47.240.0.0/16
|
||||
- 47.240.0.0/17
|
||||
- 47.240.128.0/17
|
||||
- 47.241.0.0/16
|
||||
- 47.241.0.0/17
|
||||
- 47.241.128.0/17
|
||||
- 47.242.0.0/15
|
||||
- 47.242.0.0/16
|
||||
- 47.243.0.0/16
|
||||
- 47.244.0.0/16
|
||||
- 47.244.0.0/17
|
||||
- 47.244.128.0/17
|
||||
- 47.244.73.0/24
|
||||
- 47.245.0.0/18
|
||||
- 47.245.0.0/19
|
||||
- 47.245.128.0/17
|
||||
- 47.245.128.0/18
|
||||
- 47.245.192.0/18
|
||||
- 47.245.32.0/19
|
||||
- 47.245.64.0/18
|
||||
- 47.245.64.0/19
|
||||
- 47.245.96.0/19
|
||||
- 47.246.100.0/22
|
||||
- 47.246.104.0/21
|
||||
- 47.246.104.0/22
|
||||
- 47.246.108.0/22
|
||||
- 47.246.120.0/24
|
||||
- 47.246.122.0/24
|
||||
- 47.246.123.0/24
|
||||
- 47.246.124.0/24
|
||||
- 47.246.125.0/24
|
||||
- 47.246.128.0/22
|
||||
- 47.246.128.0/23
|
||||
- 47.246.130.0/23
|
||||
- 47.246.132.0/22
|
||||
- 47.246.132.0/23
|
||||
- 47.246.134.0/23
|
||||
- 47.246.136.0/21
|
||||
- 47.246.136.0/22
|
||||
- 47.246.140.0/22
|
||||
- 47.246.144.0/23
|
||||
- 47.246.144.0/24
|
||||
- 47.246.145.0/24
|
||||
- 47.246.146.0/23
|
||||
- 47.246.146.0/24
|
||||
- 47.246.147.0/24
|
||||
- 47.246.150.0/23
|
||||
- 47.246.150.0/24
|
||||
- 47.246.151.0/24
|
||||
- 47.246.152.0/23
|
||||
- 47.246.152.0/24
|
||||
- 47.246.153.0/24
|
||||
- 47.246.154.0/24
|
||||
- 47.246.155.0/24
|
||||
- 47.246.156.0/22
|
||||
- 47.246.156.0/23
|
||||
- 47.246.158.0/23
|
||||
- 47.246.160.0/20
|
||||
- 47.246.160.0/21
|
||||
- 47.246.168.0/21
|
||||
- 47.246.176.0/20
|
||||
- 47.246.176.0/21
|
||||
- 47.246.184.0/21
|
||||
- 47.246.192.0/22
|
||||
- 47.246.192.0/23
|
||||
- 47.246.194.0/23
|
||||
- 47.246.196.0/22
|
||||
- 47.246.196.0/23
|
||||
- 47.246.198.0/23
|
||||
- 47.246.32.0/22
|
||||
- 47.246.66.0/24
|
||||
- 47.246.67.0/24
|
||||
- 47.246.68.0/23
|
||||
- 47.246.68.0/24
|
||||
- 47.246.69.0/24
|
||||
- 47.246.72.0/21
|
||||
- 47.246.72.0/22
|
||||
- 47.246.76.0/22
|
||||
- 47.246.80.0/24
|
||||
- 47.246.82.0/23
|
||||
- 47.246.82.0/24
|
||||
- 47.246.83.0/24
|
||||
- 47.246.84.0/22
|
||||
- 47.246.84.0/23
|
||||
- 47.246.86.0/23
|
||||
- 47.246.88.0/22
|
||||
- 47.246.88.0/23
|
||||
- 47.246.90.0/23
|
||||
- 47.246.92.0/23
|
||||
- 47.246.92.0/24
|
||||
- 47.246.93.0/24
|
||||
- 47.246.96.0/21
|
||||
- 47.246.96.0/22
|
||||
- 47.250.0.0/17
|
||||
- 47.250.0.0/18
|
||||
- 47.250.128.0/17
|
||||
- 47.250.128.0/18
|
||||
- 47.250.192.0/18
|
||||
- 47.250.64.0/18
|
||||
- 47.250.99.0/24
|
||||
- 47.251.0.0/16
|
||||
- 47.251.0.0/17
|
||||
- 47.251.128.0/17
|
||||
- 47.251.224.0/22
|
||||
- 47.252.0.0/17
|
||||
- 47.252.0.0/18
|
||||
- 47.252.128.0/17
|
||||
- 47.252.128.0/18
|
||||
- 47.252.192.0/18
|
||||
- 47.252.64.0/18
|
||||
- 47.252.67.0/24
|
||||
- 47.253.0.0/16
|
||||
- 47.253.0.0/17
|
||||
- 47.253.128.0/17
|
||||
- 47.254.0.0/17
|
||||
- 47.254.0.0/18
|
||||
- 47.254.113.0/24
|
||||
- 47.254.128.0/18
|
||||
- 47.254.128.0/19
|
||||
- 47.254.160.0/19
|
||||
- 47.254.192.0/18
|
||||
- 47.254.192.0/19
|
||||
- 47.254.224.0/19
|
||||
- 47.254.64.0/18
|
||||
- 47.52.0.0/16
|
||||
- 47.52.0.0/17
|
||||
- 47.52.128.0/17
|
||||
- 47.56.0.0/15
|
||||
- 47.56.0.0/16
|
||||
- 47.57.0.0/16
|
||||
- 47.74.0.0/18
|
||||
- 47.74.0.0/19
|
||||
- 47.74.0.0/21
|
||||
- 47.74.128.0/17
|
||||
- 47.74.128.0/18
|
||||
- 47.74.192.0/18
|
||||
- 47.74.32.0/19
|
||||
- 47.74.64.0/18
|
||||
- 47.74.64.0/19
|
||||
- 47.74.96.0/19
|
||||
- 47.75.0.0/16
|
||||
- 47.75.0.0/17
|
||||
- 47.75.128.0/17
|
||||
- 47.76.0.0/16
|
||||
- 47.76.0.0/17
|
||||
- 47.76.128.0/17
|
||||
- 47.77.0.0/22
|
||||
- 47.77.0.0/23
|
||||
- 47.77.104.0/21
|
||||
- 47.77.12.0/22
|
||||
- 47.77.128.0/17
|
||||
- 47.77.128.0/18
|
||||
- 47.77.128.0/21
|
||||
- 47.77.136.0/21
|
||||
- 47.77.144.0/21
|
||||
- 47.77.152.0/21
|
||||
- 47.77.16.0/21
|
||||
- 47.77.16.0/22
|
||||
- 47.77.192.0/18
|
||||
- 47.77.2.0/23
|
||||
- 47.77.20.0/22
|
||||
- 47.77.24.0/22
|
||||
- 47.77.24.0/23
|
||||
- 47.77.26.0/23
|
||||
- 47.77.32.0/19
|
||||
- 47.77.32.0/20
|
||||
- 47.77.4.0/22
|
||||
- 47.77.4.0/23
|
||||
- 47.77.48.0/20
|
||||
- 47.77.6.0/23
|
||||
- 47.77.64.0/19
|
||||
- 47.77.64.0/20
|
||||
- 47.77.8.0/21
|
||||
- 47.77.8.0/22
|
||||
- 47.77.80.0/20
|
||||
- 47.77.96.0/20
|
||||
- 47.77.96.0/21
|
||||
- 47.78.0.0/17
|
||||
- 47.78.128.0/17
|
||||
- 47.79.0.0/20
|
||||
- 47.79.0.0/21
|
||||
- 47.79.104.0/21
|
||||
- 47.79.112.0/20
|
||||
- 47.79.128.0/19
|
||||
- 47.79.128.0/20
|
||||
- 47.79.144.0/20
|
||||
- 47.79.16.0/20
|
||||
- 47.79.16.0/21
|
||||
- 47.79.192.0/18
|
||||
- 47.79.192.0/19
|
||||
- 47.79.224.0/19
|
||||
- 47.79.24.0/21
|
||||
- 47.79.32.0/20
|
||||
- 47.79.32.0/21
|
||||
- 47.79.40.0/21
|
||||
- 47.79.48.0/20
|
||||
- 47.79.48.0/21
|
||||
- 47.79.52.0/23
|
||||
- 47.79.54.0/23
|
||||
- 47.79.56.0/21
|
||||
- 47.79.56.0/23
|
||||
- 47.79.58.0/23
|
||||
- 47.79.60.0/23
|
||||
- 47.79.62.0/23
|
||||
- 47.79.64.0/20
|
||||
- 47.79.64.0/21
|
||||
- 47.79.72.0/21
|
||||
- 47.79.8.0/21
|
||||
- 47.79.80.0/20
|
||||
- 47.79.80.0/21
|
||||
- 47.79.83.0/24
|
||||
- 47.79.88.0/21
|
||||
- 47.79.96.0/19
|
||||
- 47.79.96.0/20
|
||||
- 47.80.0.0/18
|
||||
- 47.80.0.0/19
|
||||
- 47.80.128.0/17
|
||||
- 47.80.128.0/18
|
||||
- 47.80.192.0/18
|
||||
- 47.80.32.0/19
|
||||
- 47.80.64.0/18
|
||||
- 47.80.64.0/19
|
||||
- 47.80.96.0/19
|
||||
- 47.81.0.0/18
|
||||
- 47.81.0.0/19
|
||||
- 47.81.128.0/17
|
||||
- 47.81.128.0/18
|
||||
- 47.81.192.0/18
|
||||
- 47.81.32.0/19
|
||||
- 47.81.64.0/18
|
||||
- 47.81.64.0/19
|
||||
- 47.81.96.0/19
|
||||
- 47.82.0.0/18
|
||||
- 47.82.0.0/19
|
||||
- 47.82.10.0/23
|
||||
- 47.82.12.0/23
|
||||
- 47.82.128.0/17
|
||||
- 47.82.128.0/18
|
||||
- 47.82.14.0/23
|
||||
- 47.82.192.0/18
|
||||
- 47.82.32.0/19
|
||||
- 47.82.32.0/21
|
||||
- 47.82.40.0/21
|
||||
- 47.82.48.0/21
|
||||
- 47.82.56.0/21
|
||||
- 47.82.64.0/18
|
||||
- 47.82.64.0/19
|
||||
- 47.82.8.0/23
|
||||
- 47.82.96.0/19
|
||||
- 47.83.0.0/16
|
||||
- 47.83.0.0/17
|
||||
- 47.83.128.0/17
|
||||
- 47.83.32.0/21
|
||||
- 47.83.40.0/21
|
||||
- 47.83.48.0/21
|
||||
- 47.83.56.0/21
|
||||
- 47.84.0.0/16
|
||||
- 47.84.0.0/17
|
||||
- 47.84.128.0/17
|
||||
- 47.84.144.0/21
|
||||
- 47.84.152.0/21
|
||||
- 47.84.160.0/21
|
||||
- 47.84.168.0/21
|
||||
- 47.85.0.0/16
|
||||
- 47.85.0.0/17
|
||||
- 47.85.112.0/22
|
||||
- 47.85.112.0/23
|
||||
- 47.85.114.0/23
|
||||
- 47.85.128.0/17
|
||||
- 47.86.0.0/16
|
||||
- 47.86.0.0/17
|
||||
- 47.86.128.0/17
|
||||
- 47.87.0.0/18
|
||||
- 47.87.0.0/19
|
||||
- 47.87.128.0/18
|
||||
- 47.87.128.0/19
|
||||
- 47.87.160.0/19
|
||||
- 47.87.192.0/22
|
||||
- 47.87.192.0/23
|
||||
- 47.87.194.0/23
|
||||
- 47.87.196.0/22
|
||||
- 47.87.196.0/23
|
||||
- 47.87.198.0/23
|
||||
- 47.87.200.0/22
|
||||
- 47.87.200.0/23
|
||||
- 47.87.202.0/23
|
||||
- 47.87.204.0/22
|
||||
- 47.87.204.0/23
|
||||
- 47.87.206.0/23
|
||||
- 47.87.208.0/22
|
||||
- 47.87.208.0/23
|
||||
- 47.87.210.0/23
|
||||
- 47.87.212.0/22
|
||||
- 47.87.212.0/23
|
||||
- 47.87.214.0/23
|
||||
- 47.87.216.0/22
|
||||
- 47.87.216.0/23
|
||||
- 47.87.218.0/23
|
||||
- 47.87.220.0/22
|
||||
- 47.87.220.0/23
|
||||
- 47.87.222.0/23
|
||||
- 47.87.224.0/22
|
||||
- 47.87.224.0/23
|
||||
- 47.87.226.0/23
|
||||
- 47.87.228.0/22
|
||||
- 47.87.228.0/23
|
||||
- 47.87.230.0/23
|
||||
- 47.87.232.0/22
|
||||
- 47.87.232.0/23
|
||||
- 47.87.234.0/23
|
||||
- 47.87.236.0/22
|
||||
- 47.87.236.0/23
|
||||
- 47.87.238.0/23
|
||||
- 47.87.240.0/22
|
||||
- 47.87.240.0/23
|
||||
- 47.87.242.0/23
|
||||
- 47.87.32.0/19
|
||||
- 47.87.64.0/18
|
||||
- 47.87.64.0/19
|
||||
- 47.87.96.0/19
|
||||
- 47.88.0.0/17
|
||||
- 47.88.0.0/18
|
||||
- 47.88.109.0/24
|
||||
- 47.88.128.0/17
|
||||
- 47.88.128.0/18
|
||||
- 47.88.135.0/24
|
||||
- 47.88.192.0/18
|
||||
- 47.88.41.0/24
|
||||
- 47.88.42.0/24
|
||||
- 47.88.43.0/24
|
||||
- 47.88.64.0/18
|
||||
- 47.89.0.0/18
|
||||
- 47.89.0.0/19
|
||||
- 47.89.100.0/24
|
||||
- 47.89.101.0/24
|
||||
- 47.89.102.0/24
|
||||
- 47.89.103.0/24
|
||||
- 47.89.104.0/21
|
||||
- 47.89.104.0/22
|
||||
- 47.89.108.0/22
|
||||
- 47.89.122.0/24
|
||||
- 47.89.123.0/24
|
||||
- 47.89.124.0/23
|
||||
- 47.89.124.0/24
|
||||
- 47.89.125.0/24
|
||||
- 47.89.128.0/18
|
||||
- 47.89.128.0/19
|
||||
- 47.89.160.0/19
|
||||
- 47.89.192.0/18
|
||||
- 47.89.192.0/19
|
||||
- 47.89.221.0/24
|
||||
- 47.89.224.0/19
|
||||
- 47.89.32.0/19
|
||||
- 47.89.72.0/22
|
||||
- 47.89.72.0/23
|
||||
- 47.89.74.0/23
|
||||
- 47.89.76.0/22
|
||||
- 47.89.76.0/23
|
||||
- 47.89.78.0/23
- 47.89.80.0/23
- 47.89.82.0/23
- 47.89.84.0/24
- 47.89.88.0/22
- 47.89.88.0/23
- 47.89.90.0/23
- 47.89.92.0/22
- 47.89.92.0/23
- 47.89.94.0/23
- 47.89.96.0/24
- 47.89.97.0/24
- 47.89.98.0/23
- 47.89.99.0/24
- 47.90.0.0/17
- 47.90.0.0/18
- 47.90.128.0/17
- 47.90.128.0/18
- 47.90.172.0/24
- 47.90.173.0/24
- 47.90.174.0/24
- 47.90.175.0/24
- 47.90.192.0/18
- 47.90.64.0/18
- 47.91.0.0/19
- 47.91.0.0/20
- 47.91.112.0/20
- 47.91.128.0/17
- 47.91.128.0/18
- 47.91.16.0/20
- 47.91.192.0/18
- 47.91.32.0/19
- 47.91.32.0/20
- 47.91.48.0/20
- 47.91.64.0/19
- 47.91.64.0/20
- 47.91.80.0/20
- 47.91.96.0/19
- 47.91.96.0/20
- 5.181.224.0/23
- 59.82.136.0/23
- 8.208.0.0/16
- 8.208.0.0/17
- 8.208.0.0/18
- 8.208.0.0/19
- 8.208.128.0/17
- 8.208.141.0/24
- 8.208.32.0/19
- 8.209.0.0/19
- 8.209.0.0/20
- 8.209.128.0/18
- 8.209.128.0/19
- 8.209.16.0/20
- 8.209.160.0/19
- 8.209.192.0/18
- 8.209.192.0/19
- 8.209.224.0/19
- 8.209.36.0/23
- 8.209.36.0/24
- 8.209.37.0/24
- 8.209.38.0/23
- 8.209.38.0/24
- 8.209.39.0/24
- 8.209.40.0/22
- 8.209.40.0/23
- 8.209.42.0/23
- 8.209.44.0/22
- 8.209.44.0/23
- 8.209.46.0/23
- 8.209.48.0/20
- 8.209.48.0/21
- 8.209.56.0/21
- 8.209.64.0/18
- 8.209.64.0/19
- 8.209.96.0/19
- 8.210.0.0/16
- 8.210.0.0/17
- 8.210.128.0/17
- 8.210.240.0/24
- 8.211.0.0/17
- 8.211.0.0/18
- 8.211.104.0/21
- 8.211.128.0/18
- 8.211.128.0/19
- 8.211.160.0/19
- 8.211.192.0/18
- 8.211.192.0/19
- 8.211.224.0/19
- 8.211.226.0/24
- 8.211.64.0/18
- 8.211.80.0/21
- 8.211.88.0/21
- 8.211.96.0/21
- 8.212.0.0/17
- 8.212.0.0/18
- 8.212.128.0/18
- 8.212.128.0/19
- 8.212.160.0/19
- 8.212.192.0/18
- 8.212.192.0/19
- 8.212.224.0/19
- 8.212.64.0/18
- 8.213.0.0/17
- 8.213.0.0/18
- 8.213.128.0/19
- 8.213.128.0/20
- 8.213.144.0/20
- 8.213.160.0/21
- 8.213.160.0/22
- 8.213.164.0/22
- 8.213.176.0/20
- 8.213.176.0/21
- 8.213.184.0/21
- 8.213.192.0/18
- 8.213.192.0/19
- 8.213.224.0/19
- 8.213.251.0/24
- 8.213.252.0/24
- 8.213.253.0/24
- 8.213.64.0/18
- 8.214.0.0/16
- 8.214.0.0/17
- 8.214.128.0/17
- 8.215.0.0/16
- 8.215.0.0/17
- 8.215.128.0/17
- 8.215.160.0/24
- 8.215.162.0/23
- 8.215.168.0/24
- 8.215.169.0/24
- 8.216.0.0/17
- 8.216.0.0/18
- 8.216.128.0/17
- 8.216.128.0/18
- 8.216.148.0/24
- 8.216.192.0/18
- 8.216.64.0/18
- 8.216.69.0/24
- 8.216.74.0/24
- 8.217.0.0/16
- 8.217.0.0/17
- 8.217.128.0/17
- 8.218.0.0/16
- 8.218.0.0/17
- 8.218.128.0/17
- 8.219.0.0/16
- 8.219.0.0/17
- 8.219.128.0/17
- 8.219.40.0/21
- 8.220.116.0/24
- 8.220.128.0/18
- 8.220.128.0/19
- 8.220.147.0/24
- 8.220.160.0/19
- 8.220.192.0/18
- 8.220.192.0/19
- 8.220.224.0/19
- 8.220.229.0/24
- 8.220.64.0/18
- 8.220.64.0/19
- 8.220.96.0/19
- 8.221.0.0/17
- 8.221.0.0/18
- 8.221.0.0/21
- 8.221.128.0/17
- 8.221.128.0/18
- 8.221.184.0/22
- 8.221.188.0/22
- 8.221.192.0/18
- 8.221.192.0/21
- 8.221.200.0/21
- 8.221.208.0/21
- 8.221.216.0/21
- 8.221.48.0/21
- 8.221.56.0/21
- 8.221.64.0/18
- 8.221.8.0/21
- 8.222.0.0/20
- 8.222.0.0/21
- 8.222.112.0/20
- 8.222.128.0/17
- 8.222.128.0/18
- 8.222.16.0/20
- 8.222.16.0/21
- 8.222.192.0/18
- 8.222.24.0/21
- 8.222.32.0/20
- 8.222.32.0/21
- 8.222.40.0/21
- 8.222.48.0/20
- 8.222.48.0/21
- 8.222.56.0/21
- 8.222.64.0/20
- 8.222.64.0/21
- 8.222.72.0/21
- 8.222.8.0/21
- 8.222.80.0/20
- 8.222.80.0/21
- 8.222.88.0/21
- 8.222.96.0/19
- 8.222.96.0/20
- 8.223.0.0/17
- 8.223.0.0/18
- 8.223.128.0/17
- 8.223.128.0/18
- 8.223.192.0/18
- 8.223.64.0/18
@@ -1,617 +0,0 @@
- name: huawei-cloud
action: DENY
# Updated 2025-08-20 from IP addresses for AS136907
remote_addresses:
- 1.178.32.0/20
- 1.178.48.0/20
- 101.44.0.0/20
- 101.44.144.0/20
- 101.44.16.0/20
- 101.44.160.0/20
- 101.44.173.0/24
- 101.44.176.0/20
- 101.44.192.0/20
- 101.44.208.0/22
- 101.44.212.0/22
- 101.44.216.0/22
- 101.44.220.0/22
- 101.44.224.0/22
- 101.44.228.0/22
- 101.44.232.0/22
- 101.44.236.0/22
- 101.44.240.0/22
- 101.44.244.0/22
- 101.44.248.0/22
- 101.44.252.0/24
- 101.44.253.0/24
- 101.44.254.0/24
- 101.44.255.0/24
- 101.44.32.0/20
- 101.44.48.0/20
- 101.44.64.0/20
- 101.44.80.0/20
- 101.44.96.0/20
- 101.46.0.0/20
- 101.46.128.0/21
- 101.46.136.0/21
- 101.46.144.0/21
- 101.46.152.0/21
- 101.46.160.0/21
- 101.46.168.0/21
- 101.46.176.0/21
- 101.46.184.0/21
- 101.46.192.0/21
- 101.46.200.0/21
- 101.46.208.0/21
- 101.46.216.0/21
- 101.46.224.0/22
- 101.46.232.0/22
- 101.46.236.0/22
- 101.46.240.0/22
- 101.46.244.0/22
- 101.46.248.0/22
- 101.46.252.0/24
- 101.46.253.0/24
- 101.46.254.0/24
- 101.46.255.0/24
- 101.46.32.0/20
- 101.46.48.0/20
- 101.46.64.0/20
- 101.46.80.0/20
- 103.198.203.0/24
- 103.215.0.0/24
- 103.215.1.0/24
- 103.215.3.0/24
- 103.240.156.0/22
- 103.240.157.0/24
- 103.255.60.0/22
- 103.255.60.0/24
- 103.255.61.0/24
- 103.255.62.0/24
- 103.255.63.0/24
- 103.40.100.0/23
- 103.84.110.0/24
- 110.238.100.0/22
- 110.238.104.0/21
- 110.238.112.0/21
- 110.238.120.0/22
- 110.238.124.0/22
- 110.238.64.0/21
- 110.238.72.0/21
- 110.238.80.0/20
- 110.238.96.0/24
- 110.238.98.0/24
- 110.238.99.0/24
- 110.239.127.0/24
- 110.239.184.0/22
- 110.239.188.0/23
- 110.239.190.0/23
- 110.239.64.0/19
- 110.239.96.0/19
- 110.41.208.0/24
- 110.41.209.0/24
- 110.41.210.0/24
- 111.119.192.0/20
- 111.119.208.0/20
- 111.119.224.0/20
- 111.119.240.0/20
- 111.91.0.0/20
- 111.91.112.0/20
- 111.91.16.0/20
- 111.91.32.0/20
- 111.91.48.0/20
- 111.91.64.0/20
- 111.91.80.0/20
- 111.91.96.0/20
- 114.119.128.0/19
- 114.119.160.0/21
- 114.119.168.0/24
- 114.119.169.0/24
- 114.119.170.0/24
- 114.119.171.0/24
- 114.119.172.0/22
- 114.119.176.0/20
- 115.30.32.0/20
- 115.30.48.0/20
- 119.12.160.0/20
- 119.13.112.0/20
- 119.13.160.0/24
- 119.13.161.0/24
- 119.13.162.0/23
- 119.13.163.0/24
- 119.13.164.0/22
- 119.13.168.0/21
- 119.13.168.0/24
- 119.13.169.0/24
- 119.13.170.0/24
- 119.13.172.0/24
- 119.13.173.0/24
- 119.13.32.0/22
- 119.13.36.0/22
- 119.13.64.0/24
- 119.13.65.0/24
- 119.13.66.0/23
- 119.13.68.0/22
- 119.13.72.0/22
- 119.13.76.0/22
- 119.13.80.0/21
- 119.13.88.0/22
- 119.13.92.0/22
- 119.13.96.0/20
- 119.8.0.0/21
- 119.8.128.0/24
- 119.8.129.0/24
- 119.8.130.0/23
- 119.8.132.0/22
- 119.8.136.0/21
- 119.8.144.0/20
- 119.8.160.0/19
- 119.8.18.0/24
- 119.8.192.0/20
- 119.8.192.0/21
- 119.8.200.0/21
- 119.8.208.0/20
- 119.8.21.0/24
- 119.8.22.0/24
- 119.8.224.0/24
- 119.8.227.0/24
- 119.8.228.0/22
- 119.8.23.0/24
- 119.8.232.0/21
- 119.8.24.0/21
- 119.8.240.0/23
- 119.8.242.0/23
- 119.8.244.0/24
- 119.8.245.0/24
- 119.8.246.0/24
- 119.8.247.0/24
- 119.8.248.0/24
- 119.8.249.0/24
- 119.8.250.0/24
- 119.8.253.0/24
- 119.8.254.0/23
- 119.8.32.0/19
- 119.8.4.0/24
- 119.8.64.0/22
- 119.8.68.0/24
- 119.8.69.0/24
- 119.8.70.0/24
- 119.8.71.0/24
- 119.8.72.0/21
- 119.8.8.0/21
- 119.8.80.0/20
- 119.8.96.0/19
- 121.91.152.0/21
- 121.91.168.0/21
- 121.91.200.0/21
- 121.91.200.0/24
- 121.91.201.0/24
- 121.91.204.0/24
- 121.91.205.0/24
- 122.8.128.0/20
- 122.8.144.0/20
- 122.8.160.0/20
- 122.8.176.0/21
- 122.8.184.0/22
- 122.8.188.0/22
- 124.243.128.0/18
- 124.243.156.0/24
- 124.243.157.0/24
- 124.243.158.0/24
- 124.243.159.0/24
- 124.71.248.0/24
- 124.71.249.0/24
- 124.71.250.0/24
- 124.71.252.0/24
- 124.71.253.0/24
- 124.81.0.0/20
- 124.81.112.0/20
- 124.81.128.0/20
- 124.81.144.0/20
- 124.81.16.0/20
- 124.81.160.0/20
- 124.81.176.0/20
- 124.81.192.0/20
- 124.81.208.0/20
- 124.81.224.0/20
- 124.81.240.0/20
- 124.81.32.0/20
- 124.81.48.0/20
- 124.81.64.0/20
- 124.81.80.0/20
- 124.81.96.0/20
- 139.9.98.0/24
- 139.9.99.0/24
- 14.137.132.0/22
- 14.137.136.0/22
- 14.137.140.0/22
- 14.137.152.0/24
- 14.137.153.0/24
- 14.137.154.0/24
- 14.137.155.0/24
- 14.137.156.0/24
- 14.137.157.0/24
- 14.137.161.0/24
- 14.137.163.0/24
- 14.137.169.0/24
- 14.137.170.0/23
- 14.137.172.0/22
- 146.174.128.0/20
- 146.174.144.0/20
- 146.174.160.0/20
- 146.174.176.0/20
- 148.145.160.0/20
- 148.145.192.0/20
- 148.145.208.0/20
- 148.145.224.0/23
- 148.145.234.0/23
- 148.145.236.0/23
- 148.145.238.0/23
- 149.232.128.0/20
- 149.232.144.0/20
- 150.40.128.0/20
- 150.40.144.0/20
- 150.40.160.0/20
- 150.40.176.0/20
- 150.40.182.0/24
- 150.40.192.0/20
- 150.40.208.0/20
- 150.40.224.0/20
- 150.40.240.0/20
- 154.220.192.0/19
- 154.81.16.0/20
- 154.83.0.0/23
- 154.86.32.0/20
- 154.86.48.0/20
- 154.93.100.0/23
- 154.93.104.0/23
- 156.227.22.0/23
- 156.230.32.0/21
- 156.230.40.0/21
- 156.230.64.0/18
- 156.232.16.0/20
- 156.240.128.0/18
- 156.249.32.0/20
- 156.253.16.0/20
- 157.254.211.0/24
- 157.254.212.0/24
- 159.138.0.0/20
- 159.138.112.0/21
- 159.138.114.0/24
- 159.138.120.0/22
- 159.138.124.0/24
- 159.138.125.0/24
- 159.138.126.0/23
- 159.138.128.0/20
- 159.138.144.0/20
- 159.138.152.0/21
- 159.138.16.0/22
- 159.138.160.0/20
- 159.138.176.0/23
- 159.138.178.0/24
- 159.138.179.0/24
- 159.138.180.0/24
- 159.138.181.0/24
- 159.138.182.0/23
- 159.138.188.0/23
- 159.138.190.0/23
- 159.138.192.0/20
- 159.138.20.0/22
- 159.138.208.0/21
- 159.138.216.0/22
- 159.138.220.0/23
- 159.138.224.0/20
- 159.138.24.0/21
- 159.138.240.0/20
- 159.138.32.0/20
- 159.138.48.0/20
- 159.138.64.0/21
- 159.138.67.0/24
- 159.138.76.0/24
- 159.138.77.0/24
- 159.138.78.0/24
- 159.138.79.0/24
- 159.138.80.0/20
- 159.138.96.0/20
- 166.108.192.0/20
- 166.108.208.0/20
- 166.108.224.0/20
- 166.108.240.0/20
- 176.52.128.0/20
- 176.52.144.0/20
- 180.87.192.0/20
- 180.87.208.0/20
- 180.87.224.0/20
- 180.87.240.0/20
- 182.160.0.0/20
- 182.160.16.0/24
- 182.160.17.0/24
- 182.160.18.0/23
- 182.160.20.0/22
- 182.160.20.0/24
- 182.160.24.0/21
- 182.160.36.0/22
- 182.160.49.0/24
- 182.160.52.0/22
- 182.160.56.0/21
- 182.160.56.0/24
- 182.160.57.0/24
- 182.160.58.0/24
- 182.160.59.0/24
- 182.160.60.0/24
- 182.160.61.0/24
- 182.160.62.0/24
- 183.87.112.0/20
- 183.87.128.0/20
- 183.87.144.0/20
- 183.87.32.0/20
- 183.87.48.0/20
- 183.87.64.0/20
- 183.87.80.0/20
- 183.87.96.0/20
- 188.119.192.0/20
- 188.119.208.0/20
- 188.119.224.0/20
- 188.119.240.0/20
- 188.239.0.0/20
- 188.239.16.0/20
- 188.239.32.0/20
- 188.239.48.0/20
- 189.1.192.0/20
- 189.1.208.0/20
- 189.1.224.0/20
- 189.1.240.0/20
- 189.28.112.0/20
- 189.28.96.0/20
- 190.92.192.0/19
- 190.92.224.0/19
- 190.92.248.0/24
- 190.92.252.0/24
- 190.92.253.0/24
- 190.92.254.0/24
- 201.77.32.0/20
- 202.170.88.0/21
- 202.76.128.0/20
- 202.76.144.0/20
- 202.76.160.0/20
- 202.76.176.0/20
- 203.123.80.0/20
- 203.167.20.0/23
- 203.167.22.0/24
- 212.34.192.0/20
- 212.34.208.0/20
- 213.250.128.0/20
- 213.250.144.0/20
- 213.250.160.0/20
- 213.250.176.0/21
- 213.250.184.0/21
- 219.83.0.0/20
- 219.83.112.0/22
- 219.83.116.0/23
- 219.83.118.0/23
- 219.83.121.0/24
- 219.83.122.0/24
- 219.83.123.0/24
- 219.83.124.0/24
- 219.83.16.0/20
- 219.83.32.0/20
- 219.83.76.0/23
- 2404:a140:43::/48
- 2405:f080::/39
- 2405:f080:1::/48
- 2405:f080:1000::/39
- 2405:f080:1200::/39
- 2405:f080:1400::/48
- 2405:f080:1401::/48
- 2405:f080:1402::/48
- 2405:f080:1403::/48
- 2405:f080:1500::/40
- 2405:f080:1600::/48
- 2405:f080:1602::/48
- 2405:f080:1603::/48
- 2405:f080:1800::/39
- 2405:f080:1800::/44
- 2405:f080:1810::/48
- 2405:f080:1811::/48
- 2405:f080:1812::/48
- 2405:f080:1813::/48
- 2405:f080:1814::/48
- 2405:f080:1815::/48
- 2405:f080:1900::/40
- 2405:f080:1e02::/47
- 2405:f080:1e04::/47
- 2405:f080:1e06::/47
- 2405:f080:1e1e::/47
- 2405:f080:1e20::/47
- 2405:f080:200::/48
- 2405:f080:2000::/39
- 2405:f080:201::/48
- 2405:f080:202::/48
- 2405:f080:2040::/48
- 2405:f080:2200::/39
- 2405:f080:2280::/48
- 2405:f080:2281::/48
- 2405:f080:2282::/48
- 2405:f080:2283::/48
- 2405:f080:2284::/48
- 2405:f080:2285::/48
- 2405:f080:2286::/48
- 2405:f080:2287::/48
- 2405:f080:2288::/48
- 2405:f080:2289::/48
- 2405:f080:228a::/48
- 2405:f080:228b::/48
- 2405:f080:228c::/48
- 2405:f080:228d::/48
- 2405:f080:228e::/48
- 2405:f080:228f::/48
- 2405:f080:2400::/39
- 2405:f080:2600::/39
- 2405:f080:2800::/48
- 2405:f080:2a00::/48
- 2405:f080:2e00::/47
- 2405:f080:3000::/38
- 2405:f080:3000::/40
- 2405:f080:3100::/40
- 2405:f080:3200::/48
- 2405:f080:3201::/48
- 2405:f080:3202::/48
- 2405:f080:3203::/48
- 2405:f080:3204::/48
- 2405:f080:3205::/48
- 2405:f080:3400::/38
- 2405:f080:3400::/40
- 2405:f080:3500::/40
- 2405:f080:3600::/48
- 2405:f080:3601::/48
- 2405:f080:3602::/48
- 2405:f080:3603::/48
- 2405:f080:3604::/48
- 2405:f080:3605::/48
- 2405:f080:400::/39
- 2405:f080:4000::/40
- 2405:f080:4100::/48
- 2405:f080:4102::/48
- 2405:f080:4103::/48
- 2405:f080:4104::/48
- 2405:f080:4200::/40
- 2405:f080:4300::/40
- 2405:f080:600::/48
- 2405:f080:800::/40
- 2405:f080:810::/44
- 2405:f080:a00::/39
- 2405:f080:a11::/48
- 2405:f080:e02::/48
- 2405:f080:e03::/48
- 2405:f080:e04::/47
- 2405:f080:e05::/48
- 2405:f080:e06::/48
- 2405:f080:e07::/48
- 2405:f080:e0e::/47
- 2405:f080:e10::/47
- 2405:f080:edff::/48
- 27.106.0.0/20
- 27.106.112.0/20
- 27.106.16.0/20
- 27.106.32.0/20
- 27.106.48.0/20
- 27.106.64.0/20
- 27.106.80.0/20
- 27.106.96.0/20
- 27.255.0.0/23
- 27.255.10.0/23
- 27.255.12.0/23
- 27.255.14.0/23
- 27.255.16.0/23
- 27.255.18.0/23
- 27.255.2.0/23
- 27.255.20.0/23
- 27.255.22.0/23
- 27.255.26.0/23
- 27.255.28.0/23
- 27.255.30.0/23
- 27.255.32.0/23
- 27.255.34.0/23
- 27.255.36.0/23
- 27.255.38.0/23
- 27.255.4.0/23
- 27.255.40.0/23
- 27.255.42.0/23
- 27.255.44.0/23
- 27.255.46.0/23
- 27.255.48.0/23
- 27.255.50.0/23
- 27.255.52.0/23
- 27.255.54.0/23
- 27.255.58.0/23
- 27.255.6.0/23
- 27.255.60.0/23
- 27.255.62.0/23
- 27.255.8.0/23
- 42.201.128.0/20
- 42.201.144.0/20
- 42.201.160.0/20
- 42.201.176.0/20
- 42.201.192.0/20
- 42.201.208.0/20
- 42.201.224.0/20
- 42.201.240.0/20
- 43.225.140.0/22
- 43.255.104.0/22
- 45.194.104.0/21
- 45.199.144.0/22
- 45.202.128.0/19
- 45.202.160.0/20
- 45.202.176.0/21
- 45.202.184.0/21
- 45.203.40.0/21
- 46.250.160.0/20
- 46.250.176.0/20
- 49.0.192.0/21
- 49.0.200.0/21
- 49.0.224.0/22
- 49.0.228.0/22
- 49.0.232.0/21
- 49.0.240.0/20
- 62.245.0.0/20
- 62.245.16.0/20
- 80.238.128.0/22
- 80.238.132.0/22
- 80.238.136.0/22
- 80.238.140.0/22
- 80.238.144.0/22
- 80.238.148.0/22
- 80.238.152.0/22
- 80.238.156.0/22
- 80.238.164.0/22
- 80.238.164.0/24
- 80.238.165.0/24
- 80.238.168.0/22
- 80.238.168.0/24
- 80.238.169.0/24
- 80.238.170.0/24
- 80.238.171.0/24
- 80.238.172.0/22
- 80.238.176.0/22
- 80.238.180.0/24
- 80.238.181.0/24
- 80.238.183.0/24
- 80.238.184.0/24
- 80.238.185.0/24
- 80.238.186.0/24
- 80.238.190.0/24
- 80.238.192.0/20
- 80.238.208.0/20
- 80.238.224.0/20
- 80.238.240.0/20
- 83.101.0.0/21
- 83.101.104.0/21
- 83.101.16.0/21
- 83.101.24.0/21
- 83.101.32.0/21
- 83.101.48.0/21
- 83.101.56.0/23
- 83.101.58.0/23
- 83.101.64.0/21
- 83.101.72.0/21
- 83.101.8.0/23
- 83.101.80.0/21
- 83.101.88.0/24
- 83.101.89.0/24
- 83.101.96.0/21
- 87.119.12.0/24
- 89.150.192.0/20
- 89.150.208.0/20
- 94.244.128.0/20
- 94.244.144.0/20
- 94.244.160.0/20
- 94.244.176.0/20
- 94.45.160.0/19
- 94.45.160.0/24
- 94.45.161.0/24
- 94.45.163.0/24
- 94.74.112.0/21
- 94.74.120.0/21
- 94.74.64.0/20
- 94.74.80.0/20
- 94.74.96.0/20
@@ -1,165 +0,0 @@
# Tencent Cloud crawler IP ranges
- name: tencent-cloud
action: DENY
remote_addresses:
- 101.32.0.0/17
- 101.32.176.0/20
- 101.32.192.0/18
- 101.33.116.0/22
- 101.33.120.0/21
- 101.33.16.0/20
- 101.33.2.0/23
- 101.33.32.0/19
- 101.33.4.0/22
- 101.33.64.0/19
- 101.33.8.0/21
- 101.33.96.0/20
- 119.28.28.0/24
- 119.29.29.0/24
- 124.156.0.0/16
- 129.226.0.0/18
- 129.226.128.0/18
- 129.226.224.0/19
- 129.226.96.0/19
- 150.109.0.0/18
- 150.109.128.0/20
- 150.109.160.0/19
- 150.109.192.0/18
- 150.109.64.0/20
- 150.109.80.0/21
- 150.109.88.0/22
- 150.109.96.0/19
- 162.14.60.0/22
- 162.62.0.0/18
- 162.62.128.0/20
- 162.62.144.0/21
- 162.62.152.0/22
- 162.62.172.0/22
- 162.62.176.0/20
- 162.62.192.0/19
- 162.62.255.0/24
- 162.62.80.0/20
- 162.62.96.0/19
- 170.106.0.0/16
- 43.128.0.0/14
- 43.132.0.0/22
- 43.132.12.0/22
- 43.132.128.0/17
- 43.132.16.0/22
- 43.132.28.0/22
- 43.132.32.0/22
- 43.132.40.0/22
- 43.132.52.0/22
- 43.132.60.0/24
- 43.132.64.0/22
- 43.132.69.0/24
- 43.132.70.0/23
- 43.132.72.0/21
- 43.132.80.0/21
- 43.132.88.0/22
- 43.132.92.0/23
- 43.132.96.0/19
- 43.133.0.0/16
- 43.134.0.0/16
- 43.135.0.0/17
- 43.135.128.0/18
- 43.135.192.0/19
- 43.152.0.0/21
- 43.152.11.0/24
- 43.152.12.0/22
- 43.152.128.0/22
- 43.152.133.0/24
- 43.152.134.0/23
- 43.152.136.0/21
- 43.152.144.0/20
- 43.152.160.0/22
- 43.152.16.0/21
- 43.152.164.0/23
- 43.152.166.0/24
- 43.152.168.0/21
- 43.152.178.0/23
- 43.152.180.0/22
- 43.152.184.0/21
- 43.152.192.0/18
- 43.152.24.0/22
- 43.152.31.0/24
- 43.152.32.0/23
- 43.152.35.0/24
- 43.152.36.0/22
- 43.152.40.0/21
- 43.152.48.0/20
- 43.152.74.0/23
- 43.152.76.0/22
- 43.152.80.0/22
- 43.152.8.0/23
- 43.152.92.0/23
- 43.153.0.0/16
- 43.154.0.0/15
- 43.156.0.0/15
- 43.158.0.0/16
- 43.159.0.0/20
- 43.159.128.0/17
- 43.159.64.0/23
- 43.159.70.0/23
- 43.159.72.0/21
- 43.159.81.0/24
- 43.159.82.0/23
- 43.159.85.0/24
- 43.159.86.0/23
- 43.159.88.0/21
- 43.159.96.0/19
- 43.160.0.0/15
- 43.162.0.0/16
- 43.163.0.0/17
- 43.163.128.0/18
- 43.163.192.255/32
- 43.163.193.0/24
- 43.163.194.0/23
- 43.163.196.0/22
- 43.163.200.0/21
- 43.163.208.0/20
- 43.163.224.0/19
- 43.164.0.0/18
- 43.164.128.0/17
- 43.165.0.0/16
- 43.166.128.0/18
- 43.166.224.0/19
- 43.168.0.0/20
- 43.168.16.0/21
- 43.168.24.0/22
- 43.168.255.0/24
- 43.168.32.0/19
- 43.168.64.0/20
- 43.168.80.0/22
- 43.169.0.0/16
- 43.170.0.0/16
- 43.174.0.0/18
- 43.174.128.0/17
- 43.174.64.0/22
- 43.174.68.0/23
- 43.174.71.0/24
- 43.174.74.0/23
- 43.174.76.0/22
- 43.174.80.0/20
- 43.174.96.0/19
- 43.175.0.0/20
- 43.175.113.0/24
- 43.175.114.0/23
- 43.175.116.0/22
- 43.175.120.0/21
- 43.175.128.0/18
- 43.175.16.0/22
- 43.175.192.0/20
- 43.175.20.0/23
- 43.175.208.0/21
- 43.175.216.0/22
- 43.175.220.0/23
- 43.175.22.0/24
- 43.175.222.0/24
- 43.175.224.0/20
- 43.175.25.0/24
- 43.175.26.0/23
- 43.175.28.0/22
- 43.175.32.0/19
- 43.175.64.0/19
- 43.175.96.0/20
@@ -1,6 +0,0 @@
- name: yandexbot
action: ALLOW
expression:
all:
- userAgent.matches("\\+http\\://yandex\\.com/bots")
- verifyFCrDNS(remoteAddress, "^.*\\.yandex\\.(ru|com|net)$")
@@ -3,6 +3,6 @@ package data
import "embed"

var (
//go:embed botPolicies.yaml all:apps all:bots all:clients all:common all:crawlers all:meta all:services
//go:embed botPolicies.yaml botPolicies.json all:apps all:bots all:clients all:common all:crawlers all:meta
BotPolicies embed.FS
)

@@ -1,38 +0,0 @@
package data

import (
"path/filepath"
"strings"
"testing"
)

// TestBotPoliciesEmbed ensures all YAML files in the directory tree
// are accessible in the embedded BotPolicies filesystem.
func TestBotPoliciesEmbed(t *testing.T) {
yamlFiles, err := filepath.Glob("./**/*.yaml")
if err != nil {
t.Fatalf("Failed to glob YAML files: %v", err)
}

if len(yamlFiles) == 0 {
t.Fatal("No YAML files found in directory tree")
}

t.Logf("Found %d YAML files to verify", len(yamlFiles))

for _, filePath := range yamlFiles {
embeddedPath := strings.TrimPrefix(filePath, "./")

t.Run(embeddedPath, func(t *testing.T) {
content, err := BotPolicies.ReadFile(embeddedPath)
if err != nil {
t.Errorf("Failed to read %s from embedded filesystem: %v", embeddedPath, err)
return
}

if len(content) == 0 {
t.Errorf("File %s exists in embedded filesystem but is empty", embeddedPath)
}
})
}
}
@@ -1,135 +0,0 @@
- # Pathological bots to deny
# This correlates to data/bots/_deny-pathological.yaml in the source tree
# https://github.com/TecharoHQ/anubis/blob/main/data/bots/_deny-pathological.yaml
import: (data)/bots/_deny-pathological.yaml
- import: (data)/bots/aggressive-brazilian-scrapers.yaml

# Aggressively block AI/LLM related bots/agents by default
- import: (data)/meta/ai-block-aggressive.yaml

# Consider replacing the aggressive AI policy with more selective policies:
# - import: (data)/meta/ai-block-moderate.yaml
# - import: (data)/meta/ai-block-permissive.yaml

# Search engine crawlers to allow, defaults to:
# - Google (so they don't try to bypass Anubis)
# - Apple
# - Bing
# - DuckDuckGo
# - Qwant
# - The Internet Archive
# - Kagi
# - Marginalia
# - Mojeek
- import: (data)/crawlers/_allow-good.yaml
# Challenge Firefox AI previews
- import: (data)/clients/x-firefox-ai.yaml

# Allow common "keeping the internet working" routes (well-known, favicon, robots.txt)
- import: (data)/common/keep-internet-working.yaml

# # Punish any bot with "bot" in the user-agent string
# # This is known to have a high false-positive rate, use at your own risk
# - name: generic-bot-catchall
# user_agent_regex: (?i:bot|crawler)
# action: CHALLENGE
# challenge:
# difficulty: 16 # impossible
# algorithm: slow # intentionally waste CPU cycles and time

# Requires a subscription to Thoth to use, see
# https://anubis.techaro.lol/docs/admin/thoth#geoip-based-filtering
- name: countries-with-aggressive-scrapers
action: WEIGH
geoip:
countries:
- BR
- CN
weight:
adjust: 10

# Requires a subscription to Thoth to use, see
# https://anubis.techaro.lol/docs/admin/thoth#asn-based-filtering
- name: aggressive-asns-without-functional-abuse-contact
action: WEIGH
asns:
match:
- 13335 # Cloudflare
- 136907 # Huawei Cloud
- 45102 # Alibaba Cloud
weight:
adjust: 10

# ## System load based checks.
# # If the system is under high load, add weight.
# - name: high-load-average
# action: WEIGH
# expression: load_1m >= 10.0 # make sure to end the load comparison in a .0
# weight:
# adjust: 20

## If your backend service is running on the same operating system as Anubis,
## you can uncomment this rule to make the challenge easier when the system is
## under low load.
##
## If it is not, remove weight.
# - name: low-load-average
# action: WEIGH
# expression: load_15m <= 4.0 # make sure to end the load comparison in a .0
# weight:
# adjust: -10

# Assert behaviour that only genuine browsers display. This ensures that Chrome
# or Firefox versions
- name: realistic-browser-catchall
expression:
all:
- '"User-Agent" in headers'
- '( userAgent.contains("Firefox") ) || ( userAgent.contains("Chrome") ) || ( userAgent.contains("Safari") )'
- '"Accept" in headers'
- '"Sec-Fetch-Dest" in headers'
- '"Sec-Fetch-Mode" in headers'
- '"Sec-Fetch-Site" in headers'
- '"Accept-Encoding" in headers'
- '( headers["Accept-Encoding"].contains("zstd") || headers["Accept-Encoding"].contains("br") )'
- '"Accept-Language" in headers'
action: WEIGH
weight:
adjust: -10

# The Upgrade-Insecure-Requests header is typically sent by browsers, but not always
- name: upgrade-insecure-requests
expression: '"Upgrade-Insecure-Requests" in headers'
action: WEIGH
weight:
adjust: -2

# Chrome should behave like Chrome
- name: chrome-is-proper
expression:
all:
- userAgent.contains("Chrome")
- '"Sec-Ch-Ua" in headers'
- 'headers["Sec-Ch-Ua"].contains("Chromium")'
- '"Sec-Ch-Ua-Mobile" in headers'
- '"Sec-Ch-Ua-Platform" in headers'
action: WEIGH
weight:
adjust: -5

- name: should-have-accept
expression:
all:
- userAgent.contains("Mozilla")
- '!("Accept" in headers)'
action: WEIGH
weight:
adjust: 5

# Generic catchall rule
- name: generic-browser
user_agent_regex: >-
Mozilla|Opera
action: WEIGH
weight:
adjust: 10
@@ -1,2 +0,0 @@
- import: (data)/clients/telegram-preview.yaml
- import: (data)/clients/vk-preview.yaml
@@ -13,13 +13,7 @@ func Zilch[T any]() T {
|
||||
// Impl is a lazy key->value map. It's a wrapper around a map and a mutex. If values exceed their time-to-live, they are pruned at Get time.
|
||||
type Impl[K comparable, V any] struct {
|
||||
data map[K]decayMapEntry[V]
|
||||
|
||||
// deleteCh receives decay-deletion requests from readers.
|
||||
deleteCh chan deleteReq[K]
|
||||
// stopCh stops the background cleanup worker.
|
||||
stopCh chan struct{}
|
||||
wg sync.WaitGroup
|
||||
lock sync.RWMutex
|
||||
lock sync.RWMutex
|
||||
}
|
||||
|
||||
type decayMapEntry[V any] struct {
|
||||
@@ -27,38 +21,30 @@ type decayMapEntry[V any] struct {
|
||||
expiry time.Time
|
||||
}
|
||||
|
||||
// deleteReq is a request to remove a key if its expiry timestamp still matches
|
||||
// the observed one. This prevents racing with concurrent Set updates.
|
||||
type deleteReq[K comparable] struct {
|
||||
key K
|
||||
expiry time.Time
|
||||
}
|
||||
|
||||
// New creates a new DecayMap of key type K and value type V.
|
||||
//
|
||||
// Key types must be comparable to work with maps.
|
||||
func New[K comparable, V any]() *Impl[K, V] {
|
||||
m := &Impl[K, V]{
|
||||
data: make(map[K]decayMapEntry[V]),
|
||||
deleteCh: make(chan deleteReq[K], 1024),
|
||||
stopCh: make(chan struct{}),
|
||||
return &Impl[K, V]{
|
||||
data: make(map[K]decayMapEntry[V]),
|
||||
}
|
||||
m.wg.Add(1)
|
||||
go m.cleanupWorker()
|
||||
return m
|
||||
}
|
||||
|
||||
// expire forcibly expires a key by setting its time-to-live one second in the past.
|
||||
func (m *Impl[K, V]) expire(key K) bool {
|
||||
// Use a single write lock to avoid RUnlock->Lock convoy.
|
||||
m.lock.Lock()
|
||||
defer m.lock.Unlock()
|
||||
m.lock.RLock()
|
||||
val, ok := m.data[key]
|
||||
m.lock.RUnlock()
|
||||
|
||||
if !ok {
|
||||
return false
|
||||
}
|
||||
|
||||
m.lock.Lock()
|
||||
val.expiry = time.Now().Add(-1 * time.Second)
|
||||
m.data[key] = val
|
||||
m.lock.Unlock()
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
@@ -67,14 +53,19 @@ func (m *Impl[K, V]) expire(key K) bool {
|
||||
// If the value does not exist, return false. Return true after
|
||||
// deletion.
|
||||
func (m *Impl[K, V]) Delete(key K) bool {
|
||||
// Use a single write lock to avoid RUnlock->Lock convoy.
|
||||
m.lock.Lock()
|
||||
defer m.lock.Unlock()
|
||||
m.lock.RLock()
|
||||
_, ok := m.data[key]
|
||||
if ok {
|
||||
delete(m.data, key)
|
||||
m.lock.RUnlock()
|
||||
|
||||
if !ok {
|
||||
return false
|
||||
}
|
||||
return ok
|
||||
|
||||
m.lock.Lock()
|
||||
delete(m.data, key)
|
||||
m.lock.Unlock()
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
// Get gets a value from the DecayMap by key.
|
||||
@@ -90,12 +81,13 @@ func (m *Impl[K, V]) Get(key K) (V, bool) {
	}

	if time.Now().After(value.expiry) {
		// Defer decay deletion to the background worker to avoid a lock convoy.
		select {
		case m.deleteCh <- deleteReq[K]{key: key, expiry: value.expiry}:
		default:
			// Channel full: drop the request; a future Cleanup() or Get() will retry.
		}

		return Zilch[V](), false
	}

@@ -133,64 +125,3 @@ func (m *Impl[K, V]) Len() int {
	defer m.lock.RUnlock()
	return len(m.data)
}

// Close stops the background cleanup worker. It's optional to call; maps live
// for the process lifetime in many cases. Call in tests or when you know you no
// longer need the map to avoid goroutine leaks.
func (m *Impl[K, V]) Close() {
	close(m.stopCh)
	m.wg.Wait()
}

// cleanupWorker batches decay deletions to minimize lock contention.
func (m *Impl[K, V]) cleanupWorker() {
	defer m.wg.Done()
	batch := make([]deleteReq[K], 0, 64)
	ticker := time.NewTicker(10 * time.Millisecond)
	defer ticker.Stop()

	flush := func() {
		if len(batch) == 0 {
			return
		}
		m.applyDeletes(batch)
		// reset batch without reallocating
		batch = batch[:0]
	}

	for {
		select {
		case req := <-m.deleteCh:
			batch = append(batch, req)
		case <-ticker.C:
			flush()
		case <-m.stopCh:
			// Drain any remaining requests, then exit.
			for {
				select {
				case req := <-m.deleteCh:
					batch = append(batch, req)
				default:
					flush()
					return
				}
			}
		}
	}
}

func (m *Impl[K, V]) applyDeletes(batch []deleteReq[K]) {
	now := time.Now()
	m.lock.Lock()
	for _, req := range batch {
		entry, ok := m.data[req.key]
		if !ok {
			continue
		}
		// Only delete if the expiry is unchanged and already past.
		if entry.expiry.Equal(req.expiry) && now.After(entry.expiry) {
			delete(m.data, req.key)
		}
	}
	m.lock.Unlock()
}

@@ -7,7 +7,6 @@ import (

func TestImpl(t *testing.T) {
	dm := New[string, string]()
	t.Cleanup(dm.Close)

	dm.Set("test", "hi", 5*time.Minute)

@@ -29,24 +28,10 @@ func TestImpl(t *testing.T) {
	if ok {
		t.Error("got value even though it was supposed to be expired")
	}

	// Deletion of expired entries after Get is deferred to a background worker.
	// Assert it eventually disappears from the map.
	deadline := time.Now().Add(200 * time.Millisecond)
	for time.Now().Before(deadline) {
		if dm.Len() == 0 {
			break
		}
		time.Sleep(5 * time.Millisecond)
	}
	if dm.Len() != 0 {
		t.Fatalf("expected background cleanup to remove expired key; len=%d", dm.Len())
	}
}

func TestCleanup(t *testing.T) {
	dm := New[string, string]()
	t.Cleanup(dm.Close)

	dm.Set("test1", "hi1", 1*time.Second)
	dm.Set("test2", "hi2", 2*time.Second)

32
docker-bake.hcl
Normal file
@@ -0,0 +1,32 @@
variable "ALPINE_VERSION" { default = "3.22" }
variable "GITHUB_SHA" { default = "devel" }
variable "VERSION" { default = "devel-docker" }

group "default" {
  targets = [
    "osiris",
  ]
}

target "osiris" {
  args = {
    ALPINE_VERSION = "3.22"
    VERSION        = "${VERSION}"
  }
  context    = "."
  dockerfile = "./docker/osiris.Dockerfile"
  platforms = [
    "linux/amd64",
    "linux/arm64",
    "linux/arm/v7",
    "linux/ppc64le",
    "linux/riscv64",
  ]
  pull       = true
  sbom       = true
  provenance = true
  tags = [
    "ghcr.io/techarohq/anubis/osiris:${VERSION}",
    "ghcr.io/techarohq/anubis/osiris:main"
  ]
}

30
docker/osiris.Dockerfile
Normal file
@@ -0,0 +1,30 @@
ARG ALPINE_VERSION=edge
FROM --platform=${BUILDPLATFORM} alpine:${ALPINE_VERSION} AS build

RUN apk -U add go nodejs git build-base npm bash zstd brotli gzip

WORKDIR /app

COPY go.mod go.sum ./
RUN --mount=type=cache,target=/root/.cache --mount=type=cache,target=/root/go go mod download

COPY package.json package-lock.json ./
RUN npm ci

ARG TARGETOS
ARG TARGETARCH
ARG VERSION=devel-docker

COPY . .
RUN --mount=type=cache,target=/root/.cache --mount=type=cache,target=/root/go GOOS=${TARGETOS} GOARCH=${TARGETARCH} CGO_ENABLED=0 GOARM=7 go build -gcflags "all=-N -l" -o /app/bin/osiris -ldflags "-s -w -extldflags -static -X github.com/TecharoHQ/anubis.Version=${VERSION}" ./cmd/osiris

FROM alpine:${ALPINE_VERSION} AS run
WORKDIR /app

RUN apk -U add ca-certificates mailcap

COPY --from=build /app/bin/osiris /app/bin/osiris

CMD ["/app/bin/osiris"]

LABEL org.opencontainers.image.source="https://github.com/TecharoHQ/anubis"

@@ -1,4 +1,4 @@
-FROM docker.io/library/node:lts AS build
+FROM docker.io/library/node AS build

WORKDIR /app
COPY . .

@@ -20,9 +20,9 @@ If you rely on Anubis to keep your website safe, please consider sponsoring the

I am waiting to hear back from NLNet on if Anubis was selected for funding or not. Let's hope it is!

-## Deprecation warning: `DIFFICULTY`
+## Deprecation warning: `DEFAULT_DIFFICULTY`

-Anubis v1.20.0 is the last version to support the `DIFFICULTY` flag in the exact way it currently does. In future versions, this will be ineffectual and you should use the [custom threshold system](/docs/admin/configuration/thresholds) instead.
+Anubis v1.20.0 is the last version to support the `DEFAULT_DIFFICULTY` flag in the exact way it currently does. In future versions, this will be ineffectual and you should use the [custom threshold system](/docs/admin/configuration/thresholds) instead.

If this becomes an imposition in practice, this will be reverted.

@@ -161,7 +161,7 @@ One of the first issues in Anubis before it was moved to the [TecharoHQ org](htt

When Anubis decides it needs to send a challenge to your browser, it sends a challenge page. Historically, this challenge page is [an HTML template](https://github.com/TecharoHQ/anubis/blob/main/web/index.templ) that kicks off some JavaScript, reads the challenge information out of the page body, and then solves it as fast as possible in order to let users see the website they want to visit.

-In v1.20.0, Anubis has a challenge registry to hold [different client challenge implementations](/docs/admin/configuration/challenges/). This allows us to implement anything we want as long as it can render a page to show a challenge and then check if the result is correct. This is going to be used to implement a WebAssembly-based proof of work option (one that will be way more efficient than the existing browser JS version), but as a proof of concept I implemented a simple challenge using [HTML `<meta refresh>`](https://en.wikipedia.org/wiki/Meta_refresh).
+In v1.20.0, Anubis has a challenge registry to hold [different client challenge implementations](/docs/category/challenges). This allows us to implement anything we want as long as it can render a page to show a challenge and then check if the result is correct. This is going to be used to implement a WebAssembly-based proof of work option (one that will be way more efficient than the existing browser JS version), but as a proof of concept I implemented a simple challenge using [HTML `<meta refresh>`](https://en.wikipedia.org/wiki/Meta_refresh).

In my testing, this has worked with every browser I have thrown it at (including CLI browsers, the browser embedded in emacs, etc.). The default configuration of Anubis does use the [meta refresh challenge](/docs/admin/configuration/challenges/metarefresh) for [clients with a very low suspicion](/docs/admin/configuration/thresholds), but by default clients will be sent an [easy proof of work challenge](/docs/admin/configuration/challenges/proof-of-work).

Binary file not shown.
Before Width: | Height: | Size: 62 KiB
@@ -1,369 +0,0 @@
---
slug: release/v1.21.1
title: Anubis v1.21.1 is now available!
authors: [xe]
tags: [release]
image: anubis-i18n.webp
---


|
||||
|
||||
Hey all!
|
||||
|
||||
Recently we released [Anubis v1.21.1: Minfilia Warde (Echo 1)](https://github.com/TecharoHQ/anubis/releases/tag/v1.21.1). This is a fairly meaty release and like [last time](../2025-06-27-release-1.20.0/index.mdx) this blogpost will tell you what you need to know before you update. Kick back, get some popcorn and let's dig into this!
|
||||
|
||||
{/* truncate */}
|
||||
|
||||
In this release, Anubis becomes internationalized, gains the ability to use system load as input to issuing challenges, finally fixes the "invalid response" after "success" bug, and more! Please read these notes before upgrading as the changes are big enough that administrators should take action to ensure that the upgrade goes smoothly.
|
||||
|
||||
This release is brought to you by [FreeCAD](https://www.freecad.org/), an open-source computer aided design tool that lets you design things for the real world.
|
||||
|
||||
## What's in this release?
|
||||
|
||||
The biggest change is that the ["invalid response" after "success" bug](https://github.com/TecharoHQ/anubis/issues/564) is now finally fixed for good by totally rewriting how [Anubis' challenge issuance flow works](#challenge-flow-v2).
|
||||
|
||||
This release gives Anubis the following features:
|
||||
|
||||
- [Internationalization support](#internationalization), allowing Anubis to render its messages in the human language you speak.
|
||||
- Anubis now supports the [`missingHeader`](#missingHeader-function) function to assert the absence of headers in requests.
|
||||
- Anubis now has the ability to [store data persistently on the server](#persistent-data-storage).
|
||||
- Anubis can use [the system load average](#load-average-checks) as a factor to determine if it needs to filter traffic or not.
|
||||
- Add `COOKIE_SECURE` option to set the cookie [Secure flag](https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/Cookies#block_access_to_your_cookies)
|
||||
- Sets cookie defaults to use [SameSite: None](https://web.dev/articles/samesite-cookies-explained)
|
||||
- Allow [Common Crawl](https://commoncrawl.org/) by default so scrapers have less incentive to scrape
|
||||
- Add `/healthz` metrics route for use in platform-based health checks.
|
||||
- Start exposing JA4H fingerprints for later use in CEL expressions.
|
||||
|
||||
And this release also fixes the following bugs:
|
||||
|
||||
- [Challenge issuance has been totally rewritten](#challenge-flow-v2) to finally squash the infamous ["invalid response" after "success" bug](https://github.com/TecharoHQ/anubis/issues/564) for good.
|
||||
- In order to reduce confusion, the "Success" interstitial that shows up when you pass a proof of work challenge has been removed.
|
||||
- Don't block Anubis starting up if [Thoth](/docs/admin/thoth/) health checks fail.
|
||||
- The "Try again" button on the error page has been fixed. Previously it meant "try the solution again" instead of "try the challenge again".
|
||||
- In certain cases, a user could be stuck with a test cookie that is invalid, locking them out of the service for up to half an hour. This has been fixed with better validation of this case and clearing the cookie.
|
||||
- "Proof of work" has been removed from the branding due to some users having extremely negative connotations with it.
|
||||
|
||||
We try to avoid introducing breaking changes as much as possible, but these are the changes that may be relevant for you as an administrator:
|
||||
|
||||
- The [challenge format](#challenge-format-change) has been changed in order to account for [the new challenge issuance flow](#challenge-flow-v2).
|
||||
- The [systemd service `RuntimeDirectory` has been changed](#breaking-change-systemd-runtimedirectory-change).
|
||||
|
||||
### Sponsoring the project

If you rely on Anubis to keep your website safe, please consider sponsoring the project on [GitHub Sponsors](https://github.com/sponsors/Xe) or [Patreon](https://patreon.com/cadey). Funding helps pay hosting bills and offset the time spent on making this project the best it can be. Every little bit helps, and when enough money is raised, [I can make Anubis my full-time job](https://github.com/TecharoHQ/anubis/discussions/278).

Once this pie chart is at 100%, I can start to reduce my hours at my day job, as most of my needs will be met (pre-tax):

```mermaid
pie title Funding update
    "GitHub Sponsors" : 29
    "Patreon" : 14
    "Remaining" : 56
```

I am waiting to hear back from NLNet about whether Anubis was selected for funding. Let's hope it was!

## New features

### Internationalization

Anubis now supports localized responses. Locales can be added in [lib/localization/locales/](https://github.com/TecharoHQ/anubis/tree/main/lib/localization/locales). This release includes support for the following languages:

- [Brazilian Portuguese](https://github.com/TecharoHQ/anubis/pull/726)
- [Chinese (Simplified)](https://github.com/TecharoHQ/anubis/pull/774)
- [Chinese (Traditional)](https://github.com/TecharoHQ/anubis/pull/759)
- [Czech](https://github.com/TecharoHQ/anubis/pull/849)
- English
- [Estonian](https://github.com/TecharoHQ/anubis/pull/783)
- [Filipino](https://github.com/TecharoHQ/anubis/pull/775)
- [Finnish](https://github.com/TecharoHQ/anubis/pull/863)
- [French](https://github.com/TecharoHQ/anubis/pull/716)
- [German](https://github.com/TecharoHQ/anubis/pull/741)
- [Icelandic](https://github.com/TecharoHQ/anubis/pull/780)
- [Italian](https://github.com/TecharoHQ/anubis/pull/778)
- [Japanese](https://github.com/TecharoHQ/anubis/pull/772)
- [Norwegian](https://github.com/TecharoHQ/anubis/pull/855)
- [Russian](https://github.com/TecharoHQ/anubis/pull/882)
- [Spanish](https://github.com/TecharoHQ/anubis/pull/716)
- [Turkish](https://github.com/TecharoHQ/anubis/pull/751)

If facts or local regulations demand it, you can set Anubis' default language with the `FORCED_LANGUAGE` environment variable or the `--forced-language` command line argument:

```sh
FORCED_LANGUAGE=de
```

## Big ticket bug fixes

These issues affect every user of Anubis. Administrators should upgrade Anubis as soon as possible to mitigate them.

### Fix event loop thrashing when solving a proof of work challenge

Anubis has a progress bar so that users can have something moving while it works. This gives users more confidence that something is happening and that the website is not being malicious with CPU usage. However, the way it was implemented way back in [#87](https://github.com/TecharoHQ/anubis/pull/87) had a subtle bug:

```js
if (
  (nonce > oldNonce) | 1023 && // we've wrapped past 1024
  (nonce >> 10) % threads === threadId // and it's our turn
) {
  postMessage(nonce);
}
```

The logic here looks fine but is subtly wrong, as was reported in [#877](https://github.com/TecharoHQ/anubis/issues/877) by the main Pale Moon developer.

For context, `nonce` is a counter that increments by the worker count every loop. This is intended to spread the load between CPU cores as such:

| Iteration | Worker ID | Nonce |
| :-------- | :-------- | :---- |
| 1         | 0         | 0     |
| 1         | 1         | 1     |
| 2         | 0         | 2     |
| 2         | 1         | 3     |

And so on. This makes the proof of work challenge as fast as it can possibly be so that Anubis quickly goes away and you can enjoy the service it is protecting.

The incorrect part of this is the boolean logic, specifically the part with the bitwise or (`|`). I think the intent was to use a logical or (`||`), but this had the effect of making the `postMessage` handler fire on every iteration. The intent of this snippet (as the comment clearly indicates) is to make sure that the main event loop is only updated with the worker status every 1024 iterations per worker. This had the opposite effect, causing a lot of messages to be sent from workers to the parent JavaScript context.

This is bad for the event loop.

Instead, I have ripped out that statement and replaced it with a much simpler increment-only counter that fires every 1024 iterations. Additionally, only the first thread communicates back to the parent process. This does mean that in theory the other workers could be ahead of the first thread (posting a message out of a worker has a nonzero cost), but in practice I don't think this will be as much of an issue as the current behaviour is.

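
The replacement can be sketched like this. This is a hedged illustration, not Anubis' actual worker code: `solveLoop` and the stubbed `postMessage` callback are hypothetical names for this sketch, which only demonstrates the increment-only counter and the "thread 0 reports" rule:

```javascript
// Sketch of the fixed reporting logic: keep a plain iteration counter,
// report progress at most once every 1024 iterations, and only from the
// first thread. `postMessage` is passed in so the sketch runs outside a
// real Web Worker.
function solveLoop(threadId, threads, iterations, postMessage) {
  let nonce = threadId;
  let counter = 0; // increment-only progress counter

  for (let i = 0; i < iterations; i++) {
    counter++;
    if (threadId === 0 && counter % 1024 === 0) {
      postMessage(nonce); // progress update, not a solution
    }
    nonce += threads; // stride by worker count, as in the stride table above
  }

  return counter;
}

let reports = 0;
solveLoop(0, 4, 10_000, () => reports++);
console.log(reports); // 9 progress messages for 10,000 iterations, not 10,000
```

Compared to the buggy snippet, a message now reaches the parent event loop roughly once per 1024 iterations instead of once per iteration.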

The root cause of the stack exhaustion is likely the pressure caused by all of the postMessage futures piling up. Maybe the larger stack size in 64 bit environments makes this fine there, or maybe newer hardware in 64 bit systems can handle events fast enough to keep up with the pressure.

Either way, thanks much to [@wolfbeast](https://github.com/wolfbeast) and the Pale Moon community for finding this. This will make Anubis faster for everyone!

### Fix potential memory leak when discovering a solution

In some cases, the parallel solution finder in Anubis could cause all of the worker promises to leak because the promises were being improperly terminated. A recursion bomb happens in the following scenario:

1. A worker sends a message indicating it found a solution to the proof of work challenge.
2. The `onmessage` handler for that worker calls `terminate()`.
3. Inside `terminate()`, the parent process loops through all other workers and calls `w.terminate()` on them.
4. It's possible that terminating a worker could trigger the `onerror` event handler.
5. This would create a recursive loop of `onmessage` -> `terminate` -> `onerror` -> `terminate` -> `onerror` and so on.

This infinite recursion quickly consumes all available stack space. It was never noticed in development because all of my computers have at least 64Gi of RAM provisioned to them, under the axiom that paying for more RAM is cheaper than paying in my time spent working around not having enough RAM. Additionally, ia32 has a smaller base stack size, which means that users there will run into this issue much sooner than users on other CPU architectures will.

The fix adds a boolean `settled` flag to prevent termination from running more than once.

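
The guard can be sketched as follows. This is an illustrative sketch rather than the actual Anubis source; `makeTerminator` and the stubbed worker objects are hypothetical, and only the `settled` flag pattern itself is the point:

```javascript
// Sketch of the `settled` guard: whichever event fires first (a solution
// via onmessage or a failure via onerror), the cleanup path runs exactly
// once, breaking the onmessage -> terminate -> onerror -> terminate loop.
function makeTerminator(workers) {
  let settled = false;
  return function terminate() {
    if (settled) return; // re-entrant calls become no-ops
    settled = true;
    for (const w of workers) {
      w.terminate(); // may itself fire onerror -> terminate again; now harmless
    }
  };
}

let terminations = 0;
const fakeWorkers = [{ terminate: () => terminations++ }];
const terminate = makeTerminator(fakeWorkers);
terminate();
terminate(); // second call is ignored thanks to the flag
console.log(terminations); // 1
```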
## Expressions features

Anubis v1.21.1 adds additional [expressions](/docs/admin/configuration/expressions) features so that you can make your request matching even more granular.

### `missingHeader` function

Anubis [expressions](/docs/admin/configuration/expressions) have [a few functions exposed](/docs/admin/configuration/expressions/#functions-exposed-to-anubis-expressions). Anubis v1.21.1 adds the `missingHeader` function, allowing you to assert the _absence_ of a header in requests.

Let's say you're getting a lot of requests from clients that are pretending to be Google Chrome. Google Chrome sends a few signals to web servers, the main one being the [`Sec-Ch-Ua` header](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Sec-CH-UA). `Sec-CH-UA` is part of Google's [User Agent Client Hints](https://wicg.github.io/ua-client-hints/#sec-ch-ua) proposal, but its presence is a sign that the client is more likely to be Google Chrome than not. With the `missingHeader` function, you can write a rule to [add weight](/docs/admin/policies/#request-weight) to requests without `Sec-Ch-Ua` that claim to be Google Chrome.

```yaml
# Adds weight to clients that claim to be Google Chrome without setting Sec-Ch-Ua
- name: old-chrome
  action: WEIGH
  weight:
    adjust: 10
  expression:
    all:
      - userAgent.matches("Chrome/[1-9][0-9]?\\.0\\.0\\.0")
      - missingHeader(headers, "Sec-Ch-Ua")
```

When combined with [weight thresholds](/docs/admin/configuration/thresholds), this allows you to make requests that don't match the signature of Google Chrome more suspicious, which will make them face a more difficult challenge.

### Load average checks

Anubis can dynamically take action [based on the system load average](/docs/admin/configuration/expressions/#using-the-system-load-average), allowing you to write rules like this:

```yaml
## System load based checks.
# If the system has been under high load for the last minute, add weight.
- name: high-load-average
  action: WEIGH
  expression: load_1m >= 10.0 # make sure to end the load comparison in a .0
  weight:
    adjust: 20

# If it has not been under high load for the last 15 minutes, remove weight.
- name: low-load-average
  action: WEIGH
  expression: load_15m <= 4.0 # make sure to end the load comparison in a .0
  weight:
    adjust: -10
```

Something to keep in mind about the system load average is that it is not aware of the number of cores the system has. If you have a 16-core system running 16 processes, none of which is hogging the CPU, you will get a load average below 16. If you are in doubt, make your "high load" threshold at least two times the number of CPU cores and your "low load" threshold about half the number of CPU cores. For example:

| Kind      | Core count | Load threshold |
| --------: | :--------- | :------------- |
| high load | 4          | `8.0`          |
| low load  | 4          | `2.0`          |
| high load | 16         | `32.0`         |
| low load  | 16         | `8.0`          |

Also keep in mind that this does not account for other kinds of latency like I/O latency or downstream API response latency. A system can have its web applications unresponsive due to high latency from a MySQL server but still have that web application server report a load near or at zero.

:::note

This does not work if you are using Kubernetes.

:::

When combined with [weight thresholds](/docs/admin/configuration/thresholds), this allows you to make incoming sessions "back off" while the server is under high load.

## Challenge flow v2

The main goal of Anubis is to weigh the risks of incoming requests in order to protect upstream resources against abusive clients like badly written scrapers. In order to separate "good" clients (like users wanting to learn from a website's content) from "bad" clients, Anubis issues [challenges](/docs/admin/configuration/challenges/).

Previously the Anubis challenge flow looked like this:

```mermaid
---
title: Old Anubis challenge flow
---
flowchart LR
    user(User Browser)
    subgraph Anubis
        mIC{Challenge?}
        ic(Issue Challenge)
        rp(Proxy to service)
        mIC -->|User needs a challenge| ic
        mIC -->|User does not need a challenge| rp
    end
    target(Target Service)
    rp --> target
    user --> mIC
    ic -->|Pass a challenge| user
    target -->|Site data| user
```

|
||||
In order to issue a challenge, Anubis generated a challenge string based on request metadata that we assumed wouldn't drastically change between requests, including but not limited to:
|
||||
|
||||
- The client's User-Agent string.
|
||||
- The client [`Accept-Language` header](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Accept-Language) value.
|
||||
- The client's IP address.
|
||||
|
||||
Anubis also didn't store any information about challenges, so that it could remain lightweight and handle the onslaught of requests from scrapers. The assumption was that the challenge string function was idempotent per client across time. What actually ended up happening was something like this:

```mermaid
---
title: Anubis challenge string idempotency
---
sequenceDiagram
    User->>+Anubis: GET /wiki/some-page
    Anubis->>+Make Challenge: Generate a challenge string
    Make Challenge->>-Anubis: Challenge string: taco salad
    Anubis->>-User: HTTP 401 solve a challenge
    User->>+Anubis: GET internal-api/pass-challenge
    Anubis->>+Make Challenge: Generate a challenge string
    Make Challenge->>-Anubis: Challenge string: burrito bar
    Anubis->>+User: Error: invalid response
```

Various attempts were made to fix this. All of them ended up failing. Many difficulties were discovered, including but not limited to:

- Removing `Accept-Language` from consideration because [Chrome randomizes the contents of `Accept-Language` to reduce fingerprinting](https://github.com/explainers-by-googlers/reduce-accept-language), a behaviour which [causes a lot of confusion](https://www.reddit.com/r/chrome/comments/nhpnez/google_chrome_is_randomly_switching_languages_on/) for users with multiple system languages selected.
- [IPv6 privacy extensions](https://www.internetsociety.org/resources/deploy360/2014/privacy-extensions-for-ipv6-slaac/) mean that each request could be coming from a different IP address (at least one legitimate user in the wild has been observed to have a different IP address per TCP session across an entire `/48`).
- Some [US mobile phone carriers make it too easy for your IP address to drastically change](https://news.ycombinator.com/item?id=32038215) without user input.
- [Happy eyeballs](https://en.wikipedia.org/wiki/Happy_Eyeballs) means that some requests can come in over IPv4 and some requests can come in over IPv6.
- To make things worse, you can't even assert that users are from the same [BGP autonomous system](<https://en.wikipedia.org/wiki/Autonomous_system_(Internet)>) because some users could have ISPs that are IPv4 only, forcing them to use a different IP address space to get IPv6 internet access. This sounds rare enough, but I personally have to do this even though I pay for 8 gigabit fiber from my ISP and only get IPv4 service from them.

Amusingly enough, the only part of this that has survived is the assertion that a user hasn't changed their `User-Agent` string. Maybe [that one guy that sets his Chrome version to `150`](https://github.com/TecharoHQ/anubis/issues/239) would have issues, but so far I've not seen any evidence that a client randomly changing their user agent between challenge issuance and solving can possibly be legitimate.

As a result, the entire subsystem that generated challenges before had to be ripped out and rewritten from scratch.

|
||||
It was replaced with a new flow that stores data on the server side, compares that data against what the client responds with, and then checks pass/fail that way:
|
||||
|
||||
```mermaid
|
||||
---
|
||||
title: New challenge flow
|
||||
---
|
||||
sequenceDiagram
|
||||
User->>+Anubis: GET /wiki/some-page
|
||||
Anubis->>+Make Challenge: Generate a challenge string
|
||||
Make Challenge->>+Store: Store info for challenge 1234
|
||||
Make Challenge->>-Anubis: Challenge string: taco salad, ID 1234
|
||||
Anubis->>-User: HTTP 401 solve a challenge
|
||||
User->>+Anubis: GET internal-api/pass-challenge, challenge 1234
|
||||
Anubis->>+Validate Challenge: verify challenge 1234
|
||||
Validate Challenge->>+Store: Get info for challenge 1234
|
||||
Store->>-Validate Challenge: Here you go!
|
||||
Validate Challenge->>-Anubis: Valid ✅
|
||||
Anubis->>+User: Here's a cookie to get past Anubis
|
||||
```
|
||||
|
||||
As a result, the [challenge format](#challenge-format-change) had to change. Old cookies will still be validated, but the next minor version (v1.22.0) will include validation to ensure that all challenges are accounted for on the server side. This data is stored in the active [storage backend](/docs/admin/policies/#storage-backends) for up to 30 minutes. This also fixes [#746](https://github.com/TecharoHQ/anubis/issues/746) and other similar instances of this issue.
|
||||
|
||||
### Challenge format change
|
||||
|
||||
Previously Anubis did no accounting for challenges that it issued. This means that if Anubis restarted during a client, the client would be able to proceed once Anubis came back online.
|
||||
|
||||
During the upgrade to v1.21.0 and when v1.21.0 (or later) restarts with the [in-memory storage backend](/docs/admin/policies/#memory), you may see a higher rate of failed challenges than normal. If this persists beyond a few minutes, [open an issue](https://github.com/TecharoHQ/anubis/issues/new).
|
||||
|
||||
If you are using the in-memory storage backend, please consider using [a different storage backend](/docs/admin/policies/#storage-backends).
|
||||
|
||||
### Storage

Anubis offers a few different storage backends depending on your needs:

| Backend                                  | Description                                                                                                    |
| :--------------------------------------- | :------------------------------------------------------------------------------------------------------------ |
| [`memory`](/docs/admin/policies/#memory) | An in-memory hashmap that is cleared when Anubis is restarted.                                                 |
| [`bbolt`](/docs/admin/policies/#bbolt)   | A memory-mapped key/value store that can persist between Anubis restarts.                                      |
| [`valkey`](/docs/admin/policies/#valkey) | A networked key/value store that can persist between Anubis restarts and coordinate across multiple instances. |

Please review the documentation for each storage method to figure out the one best for your needs. If you aren't sure, consult this diagram:

```mermaid
---
title: What storage backend do I need?
---
flowchart TD
    OneInstance{Do you only have
one instance of
Anubis?}
    Persistence{Do you have
persistent disk
access in your
environment?}
    bbolt[(bbolt)]
    memory[(memory)]
    valkey[(valkey)]

    OneInstance -->|Yes| Persistence
    OneInstance -->|No| valkey
    Persistence -->|Yes| bbolt
    Persistence -->|No| memory
```

## Breaking change: systemd `RuntimeDirectory` change

The following potentially breaking change applies to native installs with systemd only:

Each instance of the systemd service template now has a unique `RuntimeDirectory`, as opposed to every instance of the service sharing one. This change was made to avoid [the `RuntimeDirectory` getting nuked](https://github.com/TecharoHQ/anubis/issues/748) any time one of the Anubis instances restarts.

If you configured Anubis' unix sockets to listen on `/run/anubis/foo.sock` for instance `anubis@foo`, you will need to configure Anubis to listen on `/run/anubis/foo/foo.sock` and additionally configure your HTTP load balancer as appropriate.

If you need the legacy behaviour, install this [systemd unit dropin](https://www.flatcar.org/docs/latest/setup/systemd/drop-in-units/):

```systemd
# /etc/systemd/system/anubis@.service.d/50-runtimedir.conf
[Service]
RuntimeDirectory=anubis
```

Just keep in mind that this will cause problems when Anubis restarts.

## What's up next?
|
||||
|
||||
The biggest things we want to do in the next release (in no particular order):
|
||||
|
||||
- A rewrite of bot checking rule configuration syntax to make it less ambiguous.
|
||||
- [JA4](https://blog.foxio.io/ja4+-network-fingerprinting) (and other forms of) fingerprinting and coordination with [Thoth](/docs/admin/thoth/) to allow clients with high aggregate pass rates through without seeing Anubis at all.
|
||||
- Advanced heuristics for [users of the unbranded variant of Anubis](/docs/admin/botstopper/).
|
||||
- Optimize the release flow so that releases can be triggered and executed by continuous integration tools. The ultimate goal is to make it possible to release Anubis in 15 minutes after pressing a single "mint release" button.
|
||||
- Add "hot reloading" support to Anubis, allowing administrators to update the rules without restarting the service.
|
||||
- Fix [multiple slash support](https://github.com/TecharoHQ/anubis/issues/754) for web applications that require optional path variables.
|
||||
- Add weight to "brand new" clients.
|
||||
- Implement a "maze" feature that tries to get crawlers ensnared in a maze of random links so that clients that are more than 20 links in can be reported to the home base.
|
||||
- Open [Thoth-based advanced checks](/docs/admin/thoth/) to more users with an easier onboarding flow.
|
||||
- More smoke tests including for browsers like [Pale Moon](https://www.palemoon.org/).
|
||||

---
slug: 2025/funding-update
title: Funding update
authors: [xe]
tags: [funding]
image: around-the-bend.webp
---

![A mountain landscape](around-the-bend.webp)

As we finish up work on [all of the features in the next release of Anubis](/docs/CHANGELOG#unreleased), I took a moment to add up the financials, so here's an update on the recurring revenue of the project. Once I reach the [$5000 per month](https://github.com/TecharoHQ/anubis/discussions/278) mark, I can start reducing hours at my dayjob and start to make working on Anubis my full-time job.

{/* truncate */}

Note that this only counts _recurring_ revenue (subscriptions to [BotStopper](/docs/admin/botstopper) and monthly repeating donations). Every one of the one-time donations I get is a gift and I am grateful for them, but I cannot make critically important financial decisions based on sporadic one-time donations.

:::note

All currency figures in this article are USD (United States Dollars) unless denoted otherwise.

:::

Here's the funding breakdown by income stream:

```mermaid
pie title Funding update August 2025
    "GitHub Sponsors" : 3500
    "Patreon" : 1500
    "Liberapay" : 100
    "Remaining" : 4800
```

Assuming that some of my private support contracts and other sales efforts go through, this will slightly change the shape of this chart (a new pie chart segment will emerge for "Manual invoices"), but I am halfway there. This is a huge bar to pass, and as it stands right now this is just enough income to pay my monthly rent (not accounting for tax).

As a reminder, here's the rough plan for the phases I want to hit based on the _recurring_ donation totals:

| Monthly donations           | Details                                                                                                                                                                      |
| :-------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| $0-5,000 per month          | Anubis is a nights and weekends project based on how much spare time and energy I have.                                                                                       |
| $5,000-10,000 per month     | Anubis gets 1-2 days per week of my time put into it consistently and I go part-time at my dayjob.                                                                            |
| $10,000-15,000 per month    | Anubis becomes my full-time job. Features that are currently exclusive to [BotStopper](/docs/admin/botstopper/) start to trickle down to the open source version of Anubis.   |
| $15,000 per month and above | I start planning hiring for Techaro.                                                                                                                                          |

If your organization benefits from Anubis, please consider donating to the project in order to make this sustainable. The fewer financial problems I have, the better Anubis can become.

## New funding platform: Liberapay

After many comments about the funding options, I have set up [Liberapay](https://liberapay.com/Xe/) as an option to receive donations. Additional funding targets will be added to Liberapay as soon as I hear back from my accountant with more information. All money received via Liberapay goes directly towards supporting the project.

## Next goals

Here are my short-term goals for the immediate future:

1. Finish [Thoth](/docs/admin/thoth/) and run a backfill to mass-issue API keys.
2. Document and publish the writeup for the multi-region Google Cloud spot instance setup that Thoth is built upon.
3. Release v1.22.0 of Anubis with Traefik support and other important fixes.
4. Continue growing the project into a sustainable business.
5. Work through the [blog backlog](https://github.com/TecharoHQ/anubis/issues?q=is%3Aissue%20state%3Aopen%20label%3Ablog) to document the thoughts behind Anubis and how parts of it work.

Thank you for supporting Anubis! It's only going to get better from here.

import React, { useState, useEffect, useMemo } from 'react';
import styles from './styles.module.css';

// A helper function to perform SHA-256 hashing.
// It takes a string, encodes it, hashes it, and returns a hex string.
async function sha256(message) {
  try {
    const msgBuffer = new TextEncoder().encode(message);
    const hashBuffer = await crypto.subtle.digest('SHA-256', msgBuffer);
    const hashArray = Array.from(new Uint8Array(hashBuffer));
    const hashHex = hashArray.map(b => b.toString(16).padStart(2, '0')).join('');
    return hashHex;
  } catch (error) {
    console.error("Hashing failed:", error);
    return "Error hashing data";
  }
}

// Generates a random hex string of a given byte length
const generateRandomHex = (bytes = 16) => {
  const buffer = new Uint8Array(bytes);
  crypto.getRandomValues(buffer);
  return Array.from(buffer)
    .map(byte => byte.toString(16).padStart(2, '0'))
    .join('');
};

// Icon components for better visual feedback
const CheckIcon = () => (
  <svg xmlns="http://www.w3.org/2000/svg" className={styles.iconGreen} fill="none" viewBox="0 0 24 24" stroke="currentColor">
    <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M9 12l2 2 4-4m6 2a9 9 0 11-18 0 9 9 0 0118 0z" />
  </svg>
);

const XCircleIcon = () => (
  <svg xmlns="http://www.w3.org/2000/svg" className={styles.iconRed} fill="none" viewBox="0 0 24 24" stroke="currentColor">
    <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M10 14l2-2m0 0l2-2m-2 2l-2-2m2 2l2 2m7-2a9 9 0 11-18 0 9 9 0 0118 0z" />
  </svg>
);

// Main Application Component
export default function App() {
  // State for the challenge, initialized with a random 16-byte hex string.
  const [challenge, setChallenge] = useState(() => generateRandomHex(16));
  // State for the nonce, which is the variable we can change
  const [nonce, setNonce] = useState(0);
  // State to store the resulting hash
  const [hash, setHash] = useState('');
  // Whether the auto-miner is currently running
  const [isMining, setIsMining] = useState(false);
  // A flag to indicate if the current hash is the "winning" one
  const [isFound, setIsFound] = useState(false);

  // The mining difficulty, i.e., the required number of leading zeros
  const difficulty = "00";

  // Memoize the combined data to avoid recalculating on every render
  const combinedData = useMemo(() => `${challenge}${nonce}`, [challenge, nonce]);

  // This effect hook recalculates the hash whenever the combinedData changes.
  useEffect(() => {
    let isMounted = true;
    const calculateHash = async () => {
      const calculatedHash = await sha256(combinedData);
      if (isMounted) {
        setHash(calculatedHash);
        setIsFound(calculatedHash.startsWith(difficulty));
      }
    };
    calculateHash();
    return () => { isMounted = false; };
  }, [combinedData, difficulty]);

  // This effect handles the automatic mining process
  useEffect(() => {
    if (!isMining) return;

    let miningNonce = nonce;
    let continueMining = true;

    const mine = async () => {
      while (continueMining) {
        const currentData = `${challenge}${miningNonce}`;
        const currentHash = await sha256(currentData);

        if (currentHash.startsWith(difficulty)) {
          setNonce(miningNonce);
          setIsMining(false);
          break;
        }

        miningNonce++;
        // Update the UI periodically to avoid freezing the browser
        if (miningNonce % 100 === 0) {
          setNonce(miningNonce);
          await new Promise(resolve => setTimeout(resolve, 0)); // Yield to the browser
        }
      }
    };

    mine();

    return () => {
      continueMining = false;
    };
  }, [isMining, challenge, nonce, difficulty]);

  const handleMineClick = () => {
    setIsMining(true);
  };

  const handleStopClick = () => {
    setIsMining(false);
  };

  const handleResetClick = () => {
    setIsMining(false);
    setNonce(0);
  };

  const handleNewChallengeClick = () => {
    setIsMining(false);
    setChallenge(generateRandomHex(16));
    setNonce(0);
  };

  // Helper to render the hash with colored leading characters
  const renderHash = () => {
    if (!hash) return <span>...</span>;
    const prefix = hash.substring(0, difficulty.length);
    const suffix = hash.substring(difficulty.length);
    const prefixColor = isFound ? styles.hashPrefixGreen : styles.hashPrefixRed;
    return (
      <>
        <span className={`${prefixColor} ${styles.hashPrefix}`}>{prefix}</span>
        <span className={styles.hashSuffix}>{suffix}</span>
      </>
    );
  };

  return (
    <div className={styles.container}>
      <div className={styles.innerContainer}>
        <div className={styles.grid}>
          {/* Challenge Block */}
          <div className={styles.block}>
            <h2 className={styles.blockTitle}>1. Challenge</h2>
            <p className={styles.challengeText}>{challenge}</p>
          </div>

          {/* Nonce Control Block */}
          <div className={styles.block}>
            <h2 className={styles.blockTitle}>2. Nonce</h2>
            <div className={styles.nonceControls}>
              <button onClick={() => setNonce(n => n - 1)} disabled={isMining} className={styles.nonceButton}>
                <svg xmlns="http://www.w3.org/2000/svg" className={styles.iconSmall} fill="none" viewBox="0 0 24 24" stroke="currentColor"><path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M20 12H4" /></svg>
              </button>
              <span className={styles.nonceValue}>{nonce}</span>
              <button onClick={() => setNonce(n => n + 1)} disabled={isMining} className={styles.nonceButton}>
                <svg xmlns="http://www.w3.org/2000/svg" className={styles.iconSmall} fill="none" viewBox="0 0 24 24" stroke="currentColor"><path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M12 4v16m8-8H4" /></svg>
              </button>
            </div>
          </div>

          {/* Combined Data Block */}
          <div className={styles.block}>
            <h2 className={styles.blockTitle}>3. Combined Data</h2>
            <p className={styles.combinedDataText}>{combinedData}</p>
          </div>
        </div>

        {/* Arrow pointing down */}
        <div className={styles.arrowContainer}>
          <svg xmlns="http://www.w3.org/2000/svg" className={styles.iconGray} fill="none" viewBox="0 0 24 24" stroke="currentColor">
            <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M19 14l-7 7m0 0l-7-7m7 7V3" />
          </svg>
        </div>

        {/* Hash Output Block */}
        <div className={`${styles.hashContainer} ${isFound ? styles.hashContainerSuccess : styles.hashContainerError}`}>
          <div className={styles.hashContent}>
            <div className={styles.hashText}>
              <h2 className={styles.blockTitle}>4. Resulting Hash (SHA-256)</h2>
              <p className={styles.hashValue}>{renderHash()}</p>
            </div>
            <div className={styles.hashIcon}>
              {isFound ? <CheckIcon /> : <XCircleIcon />}
            </div>
          </div>
        </div>

        {/* Mining Controls */}
        <div className={styles.buttonContainer}>
          {!isMining ? (
            <button onClick={handleMineClick} className={`${styles.button} ${styles.buttonCyan}`}>
              Auto-Mine
            </button>
          ) : (
            <button onClick={handleStopClick} className={`${styles.button} ${styles.buttonYellow}`}>
              Stop Mining
            </button>
          )}
          <button onClick={handleNewChallengeClick} className={`${styles.button} ${styles.buttonIndigo}`}>
            New Challenge
          </button>
          <button onClick={handleResetClick} className={`${styles.button} ${styles.buttonGray}`}>
            Reset Nonce
          </button>
        </div>
      </div>
    </div>
  );
}

/* Main container styles */
.container {
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: center;
  color: white;
  font-family: ui-sans-serif, system-ui, sans-serif;
  margin-top: 2rem;
  margin-bottom: 2rem;
}

.innerContainer {
  width: 100%;
  max-width: 56rem;
  margin: 0 auto;
}

/* Header styles */
.header {
  text-align: center;
  margin-bottom: 2.5rem;
}

.title {
  font-size: 2.25rem;
  font-weight: 700;
  color: rgb(34 211 238);
}

.subtitle {
  font-size: 1.125rem;
  color: rgb(156 163 175);
  margin-top: 0.5rem;
}

/* Grid layout styles */
.grid {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  gap: 1rem;
  align-items: center;
  text-align: center;
}

/* Block styles */
.block {
  background-color: rgb(31 41 55);
  padding: 1.5rem;
  border-radius: 0.5rem;
  box-shadow: 0 10px 15px -3px rgb(0 0 0 / 0.1), 0 4px 6px -4px rgb(0 0 0 / 0.1);
  height: 100%;
  display: flex;
  flex-direction: column;
  justify-content: center;
}

.blockTitle {
  font-size: 1.125rem;
  font-weight: 600;
  color: rgb(34 211 238);
  margin-bottom: 0.5rem;
}

.challengeText {
  font-size: 0.875rem;
  color: rgb(209 213 219);
  word-break: break-all;
  font-family: ui-monospace, SFMono-Regular, monospace;
}

.combinedDataText {
  font-size: 0.875rem;
  color: rgb(156 163 175);
  word-break: break-all;
  font-family: ui-monospace, SFMono-Regular, monospace;
}

/* Nonce control styles */
.nonceControls {
  display: flex;
  align-items: center;
  justify-content: center;
  gap: 1rem;
}

.nonceButton {
  background-color: rgb(55 65 81);
  border-radius: 9999px;
  padding: 0.5rem;
  transition: background-color 200ms;
}

.nonceButton:hover:not(:disabled) {
  background-color: rgb(34 211 238);
}

.nonceButton:disabled {
  opacity: 0.5;
  cursor: not-allowed;
}

.nonceValue {
  font-size: 1.5rem;
  font-family: ui-monospace, SFMono-Regular, monospace;
  width: 6rem;
  text-align: center;
}

/* Icon styles */
.icon {
  height: 2rem;
  width: 2rem;
}

.iconGreen {
  height: 2rem;
  width: 2rem;
  color: rgb(74 222 128);
}

.iconRed {
  height: 2rem;
  width: 2rem;
  color: rgb(248 113 113);
}

.iconSmall {
  height: 1.5rem;
  width: 1.5rem;
}

.iconGray {
  height: 2.5rem;
  width: 2.5rem;
  color: rgb(75 85 99);
  animation: pulse 2s cubic-bezier(0.4, 0, 0.6, 1) infinite;
}

/* Arrow animation */
@keyframes pulse {
  0%,
  100% {
    opacity: 1;
  }
  50% {
    opacity: 0.5;
  }
}

.arrowContainer {
  display: flex;
  justify-content: center;
  margin: 1.5rem 0;
}

/* Hash output styles */
.hashContainer {
  padding: 1.5rem;
  border-radius: 0.5rem;
  box-shadow: 0 10px 15px -3px rgb(0 0 0 / 0.1), 0 4px 6px -4px rgb(0 0 0 / 0.1);
  transition: all 300ms;
  border: 2px solid;
}

.hashContainerSuccess {
  background-color: rgb(20 83 45 / 0.5);
  border-color: rgb(74 222 128);
}

.hashContainerError {
  background-color: rgb(127 29 29 / 0.5);
  border-color: rgb(248 113 113);
}

.hashContent {
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: space-between;
}

.hashText {
  text-align: center;
}

.hashTextLg {
  text-align: left;
}

.hashValue {
  font-size: 0.875rem;
  word-break: break-all;
}

.hashValueLg {
  font-size: 1rem;
  word-break: break-all;
}

.hashIcon {
  margin-top: 1rem;
}

.hashIconLg {
  margin-top: 0;
}

/* Hash highlighting */
.hashPrefix {
  font-family: ui-monospace, SFMono-Regular, monospace;
}

.hashPrefixGreen {
  color: rgb(74 222 128);
}

.hashPrefixRed {
  color: rgb(248 113 113);
}

.hashSuffix {
  font-family: ui-monospace, SFMono-Regular, monospace;
  color: rgb(156 163 175);
}

/* Button styles */
.buttonContainer {
  margin-top: 2rem;
  display: flex;
  align-items: center;
  justify-content: center;
  gap: 1rem;
}

.button {
  font-weight: 700;
  padding: 0.75rem 1.5rem;
  border-radius: 0.5rem;
  transition: transform 150ms;
}

.button:hover {
  transform: scale(1.05);
}

.buttonCyan {
  background-color: rgb(8 145 178);
  color: white;
}

.buttonCyan:hover {
  background-color: rgb(6 182 212);
}

.buttonYellow {
  background-color: rgb(202 138 4);
  color: white;
}

.buttonYellow:hover {
  background-color: rgb(245 158 11);
}

.buttonIndigo {
  background-color: rgb(79 70 229);
  color: white;
}

.buttonIndigo:hover {
  background-color: rgb(99 102 241);
}

.buttonGray {
  background-color: rgb(55 65 81);
  color: white;
}

.buttonGray:hover {
  background-color: rgb(75 85 99);
}

/* Responsive styles */
@media (min-width: 768px) {
  .title {
    font-size: 3rem;
  }

  .grid {
    grid-template-columns: repeat(3, 1fr);
    gap: 1rem;
  }

  .hashContent {
    flex-direction: row;
  }

  .hashText {
    text-align: left;
  }

  .hashValue {
    font-size: 1rem;
  }

  .hashIcon {
    margin-top: 0;
  }
}

@media (max-width: 767px) {
  .grid {
    display: flex;
    flex-direction: column;
    gap: 1rem;
  }
}

@media (prefers-color-scheme: light) {
  .block {
    background-color: oklch(93% 0.034 272.788);
  }

  .challengeText {
    color: oklch(12.9% 0.042 264.695);
  }

  .combinedDataText {
    color: oklch(12.9% 0.042 264.695);
  }

  .nonceButton {
    background-color: oklch(88.2% 0.059 254.128);
  }

  .nonceValue {
    color: oklch(12.9% 0.042 264.695);
  }

  .blockTitle {
    color: oklch(45% 0.085 224.283);
  }

  .hashContainerSuccess {
    background-color: oklch(95% 0.052 163.051);
    border-color: rgb(74 222 128);
  }

  .hashContainerError {
    background-color: oklch(94.1% 0.03 12.58);
    border-color: rgb(248 113 113);
  }

  .hashPrefixGreen {
    color: oklch(53.2% 0.157 131.589);
    font-weight: 600;
  }

  .hashPrefixRed {
    color: oklch(45.5% 0.188 13.697);
  }

  .hashSuffix {
    color: oklch(27.9% 0.041 260.031);
  }
}

---
slug: 2025/cpu-core-odd
title: Sometimes CPU cores are odd
description: "TL;DR: all the assumptions you have about processor design are wrong, and if you are unlucky you will never run into the problems that users do through sheer chance."
authors: [xe]
tags:
  - bugfix
  - implementation
image: parc-dsilence.webp
---

import ProofOfWorkDiagram from "./ProofOfWorkDiagram";

![A photo of a building facade](parc-dsilence.webp)

One of the biggest lessons I've learned in my career is that all software has bugs, and the more complicated your software gets, the more complicated your bugs get. A lot of the time those bugs will be fairly obvious and easy to spot, validate, and replicate. Sometimes, the process of fixing one will upend your core assumptions about how things work in ways that leave you feeling like you just got trolled.

Today I'm going to talk about a single-line fix that prevents people on a large number of devices from having weird, irreproducible issues with Anubis rejecting them when it frankly shouldn't. Stick around, it's gonna be a wild ride.

{/* truncate */}

## How this happened

Anubis is a web application firewall that tries to make sure that the client is a browser. It uses a few [challenge methods](/docs/admin/configuration/challenges/) to make this determination, but the main one is the [proof of work](/docs/admin/configuration/challenges/proof-of-work/) challenge, which makes clients grind away at cryptographic checksums in order to rate-limit clients from connecting too eagerly.
:::note

In retrospect, implementing the proof of work challenge may have been a mistake, and it's likely to be supplanted by things like [Proof of React](https://github.com/TecharoHQ/anubis/pull/1038) or other methods that have yet to be developed. Your patience and polite behaviour in the bug tracker is appreciated.

:::

In order to make sure the proof of work challenge screen _goes away as fast as possible_, the [worker code](https://github.com/TecharoHQ/anubis/tree/main/web/js/worker) is optimized within an inch of its digital life. One of the main ways this code is optimized is in how it's run. Over the last 10-20 years, the main way that CPUs have gotten faster is by increasing multicore performance. Anubis tries to use as many cores as possible in order to take advantage of your device's CPU as much as it can.

This strategy sometimes has issues, though: for one, Firefox seems to get _much slower_ if Anubis tries to absolutely saturate all of the cores on the system. It also has fairly high overhead between JavaScript JIT code and [WebCrypto](https://developer.mozilla.org/en-US/docs/Web/API/Web_Crypto_API). I did some testing and found that Firefox's point of diminishing returns was about half of the CPU cores.

## Another "invalid response" bug

One of the complaints I've been getting from users and administrators running Anubis is that users get randomly rejected with an error message saying only "invalid response". This happens when the challenge validation process fails. This issue has been blocking the release of the next version of Anubis.

To demonstrate this better, I've made a little interactive diagram of the proof of work process:

<ProofOfWorkDiagram />

I've fixed a lot of the easy bugs in Anubis by this point. A lot of what's left is the hard bugs, specifically the kinds of hard bugs that involve weird hardware configurations. In order to catch these issues before the software hits prod, I test Anubis against a bunch of hardware I have locally. Any issues I find and fix before the software ships are issues that you don't hit in production.

Let's consider [the line of code](https://github.com/TecharoHQ/anubis/blob/main/web/js/algorithms/fast.mjs) that was causing this issue:

```js
threads = Math.max(navigator.hardwareConcurrency / 2, 1),
```

This is intended to make your browser spawn a proof of work worker for _half_ of your available CPU cores. If you only have one CPU core, you get exactly one worker. Each worker is also given this thread count and uses it as the increment for its nonce, so that no worker wastes time on solutions another worker has already checked.

One of the subtle problems here is that all of this assumes the thread ID and nonce are integers without a decimal portion. Famously, [all JavaScript numbers are IEEE 754 floating point numbers](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number). Surely there wouldn't be a case where the thread count could be a _decimal_ number, right?

Here are all the devices I use to test Anubis _and their core counts_:

| Device Name                  | Core Count |
| :--------------------------- | :--------- |
| MacBook Pro M3 Max           | 16         |
| MacBook Pro M4 Max           | 16         |
| AMD Ryzen 9 7950X3D          | 32         |
| Google Pixel 9a (GrapheneOS) | 8          |
| iPhone 15 Pro Max            | 6          |
| iPad Pro (M1)                | 8          |
| iPad mini                    | 6          |
| Steam Deck                   | 8          |
| Core i5 10600 (homelab)      | 12         |
| ROG Ally                     | 16         |

Notice something? All of those devices have an _even_ number of cores. Some devices, such as the [Pixel 8 Pro](https://www.gsmarena.com/google_pixel_8_pro-12545.php), have an _odd_ number of cores. So what happens with that line of code as the JavaScript engine evaluates it?

Let's replace [`navigator.hardwareConcurrency`](https://developer.mozilla.org/en-US/docs/Web/API/Navigator/hardwareConcurrency) with the Pixel 8 Pro's 9 cores:

```js
threads = Math.max(9 / 2, 1),
```

Then divide by two:

```js
threads = Math.max(4.5, 1),
```

Oops, that's not ideal. However, `4.5` is bigger than `1`, so [`Math.max`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/max) returns it:

```js
threads = 4.5,
```

This means that each time the proof of work runs, there is a 50% chance that a valid solution will include a nonce with a decimal portion. If the client finds a solution with such a nonce, it thinks it has succeeded and submits the solution to the server, but the server only expects whole numbers back, so it rejects the submission as an invalid response.

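A quick way to see the failure mode (illustrative numbers, not the real worker code, which the sketch below deliberately simplifies):

```javascript
// With 9 reported cores, the old formula yields a fractional thread
// count, and any worker stepping its nonce by that amount lands on
// non-integer nonces half of the time.
const hardwareConcurrency = 9; // what a Pixel 8 Pro reports
const threads = Math.max(hardwareConcurrency / 2, 1); // 4.5

const nonces = [];
for (let nonce = 1, i = 0; i < 4; nonce += threads, i++) {
  nonces.push(nonce);
}
console.log(threads, nonces); // 4.5 [ 1, 5.5, 10, 14.5 ]
```

Every other candidate nonce carries a `.5`, and any winning solution built from one of those gets bounced by the server's integer-only validation.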

I keep telling more junior people that when you hit the weirdest, most inconsistent bugs in software, it's going to boil down to the dumbest possible thing you can imagine. People don't believe me; then they encounter bugs like this; then they suddenly believe me.

Here is the fix:

```js
threads = Math.trunc(Math.max(navigator.hardwareConcurrency / 2, 1)),
```

This uses [`Math.trunc`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/trunc) to drop the decimal portion, so that the Pixel 8 Pro gets `4` workers instead of `4.5` workers.

## Today I learned this was possible

This was a total "today I learned" moment. I didn't actually think that hardware vendors shipped processors with an odd number of cores, but if you look at the core geometry of the Pixel 8 Pro, it has _three_ tiers of processor cores:

| Core type          | Core model            | Number |
| :----------------- | :-------------------- | :----- |
| High performance   | 3 GHz Cortex-X3       | 1      |
| Medium performance | 2.45 GHz Cortex-A715  | 4      |
| High efficiency    | 2.15 GHz Cortex-A510  | 4      |
| Total              |                       | 9      |

I guess every assumption that developers have about CPU design is probably wrong.

This probably isn't helped by the fact that for most of my career, the core count in phones has been largely irrelevant, and most of the desktop / laptop CPUs I've had (where core count does matter) use [simultaneous multithreading](https://en.wikipedia.org/wiki/Simultaneous_multithreading) to "multiply" the core count by two.

The client-side fix is a bit of an "emergency stop" button to mitigate the badness as early as possible. In general I'm quite aware of the terrible UX involved when this flow fails, and I'm still noodling through ways to make that UX better and easier for users and administrators to debug.

I'm looking into the following:

1. This could have been prevented on the server side by doing less strict input validation, in compliance with [Postel's Law](https://en.wikipedia.org/wiki/Robustness_principle). I feel nervous about making such a security-sensitive endpoint _more liberal_ with the inputs it can accept, but it may be fine? I need to consult with a security expert.
2. Showing an encrypted error message on the "invalid response" page so that the user and administrator can work together to fix or report the issue. I remember Google doing this at least once, but I can't recall where I've seen it. Either way, this is probably the most robust method even though it would require developing some additional tooling. I think it would be worth it.

I'm likely going to go with the second option. I will need to figure out a good flow for this. It's likely going to involve [age](https://github.com/FiloSottile/age). I'll say more about this when I have more to say.

In the meantime, it looks like I need to expense a used Pixel 8 Pro to add to the testing jungle for Anubis. If anyone has a deal out there, please let me know!

Thank you to the people who have been polite and helpful while trying to root-cause and fix this issue.