Compare commits


No commits in common. "master" and "v0.7.3" have entirely different histories.

44 changed files with 3035 additions and 6042 deletions


@@ -1,40 +0,0 @@
name: goreleaser
on:
  push:
    tags:
      - 'v*.*.*'
  workflow_dispatch:
jobs:
  goreleaser:
    runs-on: ubuntu-22.04
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          fetch-depth: 0
          token: ${{ secrets.GITHUB_TOKEN }}
          submodules: recursive
      - name: Install upx
        run: sudo apt install upx -y
        continue-on-error: true
      - name: Set up Go
        uses: actions/setup-go@v3
        with:
          go-version: "1.20"
      - name: Run GoReleaser
        uses: goreleaser/goreleaser-action@v4
        with:
          distribution: goreleaser
          version: latest
          args: release
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GOPATH: "/home/runner/go"
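This workflow works together with the `-X` ldflags in the `.goreleaser.yml` below: the linker stamps the Git tag into `cmd.ver` at release time. A minimal sketch of the consuming side, mirroring the `var ver = "dev"` declaration in `cmd/spray.go` further down (the `Version` accessor is hypothetical; spray prints `ver` directly when `--version` is passed):

```go
package cmd

// ver defaults to "dev" for local builds; goreleaser overwrites it at link
// time with -ldflags "-X 'github.com/chainreactors/spray/cmd.ver={{ .Tag }}'".
var ver = "dev"

// Version is a hypothetical accessor for the stamped value.
func Version() string {
	return ver
}
```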

.gitmodules vendored

@@ -1,3 +0,0 @@
[submodule "templates"]
path = templates
url = https://github.com/chainreactors/gogo-templates


@@ -1,68 +0,0 @@
project_name: spray
before:
  hooks:
    - go mod tidy
    - go generate
builds:
  - main: .
    binary: "{{ .ProjectName }}_{{ .Os }}_{{ .Arch }}"
    goos:
      - windows
      - linux
      - darwin
    goarch:
      - amd64
      - "386"
      - arm64
    ignore:
      - goos: windows
        goarch: arm64
      - goos: darwin
        goarch: "386"
    ldflags: "-s -w -X 'github.com/chainreactors/spray/cmd.ver={{ .Tag }}'"
    flags:
      - -trimpath
    asmflags:
      - all=-trimpath={{.Env.GOPATH}}
    gcflags:
      - all=-trimpath={{.Env.GOPATH}}
    no_unique_dist_dir: true
    env:
      - CGO_ENABLED=0
    tags:
      - forceposix
      - osusergo
      - netgo
upx:
  - enabled: true
    goos: [linux, windows]
    goarch:
      - amd64
      - "386"
archives:
  - name_template: "{{ .ProjectName }}_{{ .Os }}_{{ .Arch }}"
    format: binary
checksum:
  name_template: "{{ .ProjectName }}_checksums.txt"
changelog:
  sort: desc
  filters:
    exclude:
      - '^MERGE'
      - "{{ .Tag }}"
      - "^docs"
release:
  github:
    owner: chainreactors
    name: spray
  draft: true


@@ -1,29 +1,19 @@
# SPRAY
Next-generation directory brute-forcing tool, and a complete directory brute-forcing solution.
blog posts:
A high-performance directory brute-forcing tool built for middleware such as path-based reverse proxies, host-based reverse proxies, and CDNs.
- https://chainreactors.github.io/wiki/blog/2024/07/24/fingers-introduce/
- https://chainreactors.github.io/wiki/blog/2024/08/25/spray-best-practices/
![](https://socialify.git.ci/chainreactors/spray/image?description=1&font=Inter&forks=1&issues=1&language=1&name=1&owner=1&pattern=Circuit%20Board&pulls=1&stargazers=1&theme=Light)
<p align="center">
<a href="#features">Features</a>
<a href="#quickstart">QuickStart</a>
<a href="#make">Make</a>
<a href="https://chainreactors.github.io/wiki/spray/">Wiki</a>
</p>
Revives several of hashcat's wordlist-generation algorithms, letting you construct wordlists freely for path-based HTTP fuzzing.
## Features
**The most usable, intelligent, and controllable directory brute-forcing tool**
* Extreme performance: in local stress tests it beats ffuf and feroxbuster by more than 50%. Real-world runs are network-bound, so the gap feels smaller, but against many targets the difference is obvious.
* Mask-based wordlist generation
* Rule-based wordlist generation
* Dynamic intelligent filtering with custom filter policies
* The full [gogo](https://github.com/chainreactors/gogo) fingerprint set, plus the complete [fingerprinthub](https://github.com/0x727/FingerprintHub) and [wappalyzer](https://github.com/projectdiscovery/wappalyzergo) fingerprints
* Custom information extraction, with built-in rules for sensitive data
* Dynamic intelligent filtering
* Full gogo fingerprint recognition
* Custom information extraction, e.g. IPs, JS, titles, hashes, and user-defined regular expressions
* Custom filter policies
* Custom output format and content
* *nix-style command-line design, easy to chain with other tools
* Multi-angle automatic detection of bans and WAFs
@@ -33,60 +23,26 @@ blog posts:
[**Document**](https://chainreactors.github.io/wiki/spray/start)
### Basic usage
**Brute-force with paths read from wordlists**
Basic usage: brute-force with paths read from wordlists
`spray -u http://example.com -d wordlist1.txt -d wordlist2.txt`
**Brute-force with a mask-generated wordlist**
Brute-force with a mask-generated wordlist
`spray -u http://example.com -w "/aaa/bbb{?l#4}/ccc"`
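To make the mask syntax concrete: `{?l#4}` expands to every combination of four lowercase letters. A self-contained sketch of what that expansion produces (illustrative only; spray's real generator lives in chainreactors/words):

```go
package main

import "fmt"

// Expand the mask "/aaa/bbb{?l#4}/ccc": {?l#4} means four lowercase letters,
// yielding 26^4 = 456976 candidate paths.
func main() {
	letters := "abcdefghijklmnopqrstuvwxyz"
	count := 0
	for _, a := range letters {
		for _, b := range letters {
			for _, c := range letters {
				for _, d := range letters {
					_ = fmt.Sprintf("/aaa/bbb%c%c%c%c/ccc", a, b, c, d)
					count++
				}
			}
		}
	}
	fmt.Println(count) // 456976
}
```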
**Brute-force with a rule-generated wordlist**
The rule file format follows hashcat's wordlist-generation rules
Brute-force with a rule-generated wordlist. The rule file format follows hashcat's wordlist-generation rules
`spray -u http://example.com -r rule.txt -d 1.txt`
**Brute-force multiple targets in batch**
Batch brute-forcing
`spray -l url.txt -r rule.txt -d 1.txt`
**Resume from a checkpoint**
Resume from a checkpoint
`spray --resume stat.json`
### Advanced usage
**check-only mode**
A single-page information-gathering mode similar to ehole/httpx, with targeted performance optimizations. Uses the [templates](https://github.com/chainreactors/templates/tree/master/fingers) fingerprint library by default; `--finger` enables matching against third-party fingerprint libraries
`spray -l url.txt --check-only`
**Enable extended fingerprinting**
Actively probes common fingerprint paths, and additionally enables the fingerprinthub and wappalyzer extended fingerprint libraries
`spray -u http://example.com --finger`
**Enable the crawler**
`spray -u http://example.com --crawl`
**Scan for backup files and common generic files**
`spray -u http://example.com --bak --common`
**Enable all plugins**
`spray -u http://example.com -a`
**Passive URL collection**
See: https://github.com/chainreactors/urlfounder
## Wiki
See the [wiki](https://chainreactors.github.io/wiki/spray/) for detailed usage
@@ -96,40 +52,25 @@ https://chainreactors.github.io/wiki/spray/
## Make
```
git clone --recurse-submodules https://github.com/chainreactors/spray
git clone https://github.com/chainreactors/spray
cd spray
go mod tidy
git clone https://github.com/chainreactors/gogo-templates templates
# A submodule (as in gogo) is not used here, because spray only depends on the HTTP fingerprints inside
go generate
go build .
```
## Similar or related works
* [ffuf](https://github.com/ffuf/ffuf) an excellent HTTP fuzzing tool; its features overlap with spray's but are not identical
* [feroxbuster](https://github.com/epi052/feroxbuster) the directory brute-forcer I used most before writing spray; I wrote my own because its batch scanning and filter configuration were inconvenient
* [dirsearch](https://github.com/maurosoria/dirsearch) an early directory brute-forcer; parts of the wordlist generation and color scheme were borrowed from it
* [httpx](https://github.com/projectdiscovery/httpx) HTTP information gathering; its scriptable arbitrary filter conditions inspired spray's
* [gobuster](https://github.com/OJ/gobuster) another brute-forcing tool written in Go, not limited to directory brute-forcing
## TODO
1. [x] Fuzzy comparison
2. [x] Resume support
3. [x] Simple crawler
4. [x] HTTP/2 support
4. [ ] HTTP/2 support
5. [ ] auto-tune, automatically adjust concurrency
6. [x] Customizable recursion configuration
7. [x] Following [feroxbuster](https://github.com/epi052/feroxbuster)'s `--collect-backups`, automatically brute-force backups of valid paths
8. [x] SOCKS/HTTP proxy support; not recommended and low priority, since proxy keep-alive causes a severe performance drop
7. [x] Following [fuzzuli](https://github.com/musana/fuzzuli), implement a backup-filename wordlist generator
8. [x] Following [feroxbuster](https://github.com/epi052/feroxbuster)'s `--collect-backups`, automatically brute-force backups of valid paths
8. [ ] SOCKS/HTTP proxy support; not recommended and low priority, since proxy keep-alive causes a severe performance drop
9. [ ] Serverless deployment, a general distributed solution for the chainreactors toolchain.
## Thanks
* [fuzzuli](https://github.com/musana/fuzzuli) provided the idea for backup-filename wordlist generation
* [fingerprinthub](https://github.com/0x727/FingerprintHub) supplements the fingerprint library
* [wappalyzer](https://github.com/projectdiscovery/wappalyzergo) supplements the fingerprint library
* [dirsearch](https://github.com/maurosoria/dirsearch) provides the default wordlist


@@ -3,61 +3,35 @@ package cmd
import (
"context"
"fmt"
"github.com/chainreactors/files"
"github.com/chainreactors/gogo/v2/pkg/fingers"
"github.com/chainreactors/gogo/v2/pkg/utils"
"github.com/chainreactors/logs"
"github.com/chainreactors/spray/core"
"github.com/chainreactors/spray/core/ihttp"
"github.com/chainreactors/spray/internal"
"github.com/chainreactors/spray/pkg"
"github.com/chainreactors/utils/iutils"
"github.com/chainreactors/spray/pkg/ihttp"
"github.com/jessevdk/go-flags"
"os"
"os/signal"
"regexp"
"syscall"
"time"
)
var ver = "dev"
var DefaultConfig = "config.yaml"
func init() {
logs.Log.SetColorMap(map[logs.Level]func(string) string{
logs.Info: logs.PurpleBold,
logs.Important: logs.GreenBold,
pkg.LogVerbose: logs.Green,
})
}
func Spray() {
var option core.Option
if files.IsExist(DefaultConfig) {
logs.Log.Debug("config.yaml exist, loading")
err := core.LoadConfig(DefaultConfig, &option)
if err != nil {
logs.Log.Error(err.Error())
return
}
}
var option internal.Option
parser := flags.NewParser(&option, flags.Default)
parser.Usage = `
WIKI: https://chainreactors.github.io/wiki/spray
QUICKSTART:
basic:
spray -u http://example.com
basic cidr and port:
spray -i example -p top2,top3
simple brute:
simple example:
spray -u http://example.com -d wordlist1.txt -d wordlist2.txt
mask-base brute with wordlist:
mask-base wordlist:
spray -u http://example.com -w "/aaa/bbb{?l#4}/ccc"
rule-base brute with wordlist:
rule-base wordlist:
spray -u http://example.com -r rule.txt -d 1.txt
list input spray:
@@ -75,110 +49,48 @@ func Spray() {
return
}
// logs
logs.AddLevel(pkg.LogVerbose, "verbose", "[=] %s {{suffix}}\n")
if option.Debug {
logs.Log.SetLevel(logs.Debug)
} else if len(option.Verbose) > 0 {
logs.Log.SetLevel(pkg.LogVerbose)
}
if option.InitConfig {
configStr := core.InitDefaultConfig(&option, 0)
err := os.WriteFile(DefaultConfig, []byte(configStr), 0o744)
if err != nil {
logs.Log.Warn("cannot create config: config.yaml, " + err.Error())
return
}
if files.IsExist(DefaultConfig) {
logs.Log.Warn("override default config: ./config.yaml")
}
logs.Log.Info("init default config: ./config.yaml")
return
}
defer time.Sleep(time.Second)
if option.Config != "" {
err := core.LoadConfig(option.Config, &option)
if err != nil {
logs.Log.Error(err.Error())
return
}
if files.IsExist(DefaultConfig) {
logs.Log.Warnf("custom config %s, override default config", option.Config)
} else {
logs.Log.Important("load config: " + option.Config)
}
}
if option.Version {
fmt.Println(ver)
return
}
if option.PrintPreset {
err = pkg.Load()
if err != nil {
iutils.Fatal(err.Error())
}
err = pkg.LoadFingers()
if err != nil {
iutils.Fatal(err.Error())
}
core.PrintPreset()
return
}
if option.Format != "" {
core.Format(option)
return
internal.Format(option.Format)
os.Exit(0)
}
err = pkg.LoadTemplates()
if err != nil {
utils.Fatal(err.Error())
}
if option.Extracts != nil {
for _, e := range option.Extracts {
if reg, ok := fingers.PresetExtracts[e]; ok {
pkg.Extractors[e] = reg
} else {
pkg.Extractors[e] = regexp.MustCompile(e)
}
}
}
// initialize some global variables
if option.Debug {
logs.Log.Level = logs.Debug
}
pkg.Distance = uint8(option.SimhashDistance)
ihttp.DefaultMaxBodySize = option.MaxBodyLength * 1024
if option.ReadAll {
ihttp.DefaultMaxBodySize = 0
}
var runner *internal.Runner
if option.ResumeFrom != "" {
runner, err = option.PrepareRunner()
} else {
runner, err = option.PrepareRunner()
}
err = option.Prepare()
if err != nil {
logs.Log.Errorf(err.Error())
return
}
runner, err := option.NewRunner()
if err != nil {
logs.Log.Errorf(err.Error())
return
}
if option.ReadAll || runner.CrawlPlugin {
ihttp.DefaultMaxBodySize = -1
}
ctx, canceler := context.WithTimeout(context.Background(), time.Duration(runner.Deadline)*time.Second)
go func() {
select {
case <-ctx.Done():
time.Sleep(10 * time.Second)
logs.Log.Errorf("deadline and timeout not work, hard exit!!!")
os.Exit(0)
}
}()
go func() {
exitChan := make(chan os.Signal, 2)
signal.Notify(exitChan, os.Interrupt, syscall.SIGTERM)
go func() {
sigCount := 0
for {
<-exitChan
sigCount++
if sigCount == 1 {
logs.Log.Infof("Exit signal received, saving task and exiting...")
canceler()
} else if sigCount == 2 {
logs.Log.Infof("forcing exit...")
os.Exit(1)
}
}
}()
}()
err = runner.Prepare(ctx)
if err != nil {
@@ -186,4 +98,19 @@ func Spray() {
return
}
go func() {
c := make(chan os.Signal, 2)
signal.Notify(c, os.Interrupt, syscall.SIGTERM)
go func() {
<-c
fmt.Println("exit signal, save stat and exit")
canceler()
}()
}()
if runner.CheckOnly {
runner.RunWithCheck(ctx)
} else {
runner.Run(ctx)
}
}
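The goroutine block from the pre-0.7.3 code above implements a two-stage shutdown: the first Ctrl-C cancels the context so task state can be saved, a second one forces exit. Extracted as a self-contained sketch:

```go
package main

import (
	"context"
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	exitChan := make(chan os.Signal, 2)
	signal.Notify(exitChan, os.Interrupt, syscall.SIGTERM)
	go func() {
		sigCount := 0
		for range exitChan {
			sigCount++
			if sigCount == 1 {
				// first signal: cancel the context so work can be checkpointed
				fmt.Println("exit signal received, saving task and exiting...")
				cancel()
			} else {
				// second signal: give up and exit immediately
				fmt.Println("forcing exit...")
				os.Exit(1)
			}
		}
	}()
	<-ctx.Done() // the real work would run here instead
}
```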


@@ -1,156 +0,0 @@
input:
  # Files, multi dict files, e.g.: -d 1.txt -d 2.txt
  dictionaries: []
  # Bool, no dictionary
  no-dict: false
  # String, word generate dsl, e.g.: -w test{?ld#4}
  word: ""
  # Files, rule files, e.g.: -r rule1.txt -r rule2.txt
  rules: []
  # Files, when a valid path is found, use the append rules to generate new words from the current path
  append-rules: []
  # String, filter rule, e.g.: --rule-filter '>8 <4'
  filter-rule: ""
  # Files, when a valid path is found, use the append files to generate new words from the current path
  append-files: []
functions:
  # String, add extensions (separated by commas), e.g.: -e jsp,jspx
  extension: ""
  # Bool, force add extensions
  force-extension: false
  # String, exclude extensions (separated by commas), e.g.: --exclude-extension jsp,jspx
  exclude-extension: ""
  # String, remove extensions (separated by commas), e.g.: --remove-extension jsp,jspx
  remove-extension: ""
  # Bool, uppercase wordlist, e.g.: --uppercase
  upper: false
  # Bool, lowercase wordlist, e.g.: --lowercase
  lower: false
  # Strings, add prefix, e.g.: --prefix aaa --prefix bbb
  prefix: []
  # Strings, add suffix, e.g.: --suffix aaa --suffix bbb
  suffix: []
  # Strings, replace string, e.g.: --replace aaa:bbb --replace ccc:ddd
  replace: {}
  # Strings, skip words when generating, e.g.: --skip aaa
  skip: []
output:
  # String, custom match function, e.g.: --match 'current.Status != 200'
  match: ""
  # String, custom filter function, e.g.: --filter 'current.Body contains "hello"'
  filter: ""
  # Bool, open fuzzy output
  fuzzy: false
  # String, output filename
  output-file: ""
  # String, fuzzy output filename
  fuzzy-file: ""
  # String, dump all requests and write to filename
  dump-file: ""
  # Bool, dump all requests
  dump: false
  # Bool, auto generate output and fuzzy filenames
  auto-file: false
  # String, output format, e.g.: --format 1.json
  format: ""
  # String, output probe format
  output_probe: ""
  # Bool, quiet
  quiet: false
  # Bool, no color
  no-color: false
  # Bool, no progress bar
  no-bar: false
  # Bool, no stat
  no-stat: true
plugins:
  # Bool, enable all plugins
  all: false
  # Strings, extract response, e.g.: --extract js --extract ip --extract version:(.*?)
  extract: []
  # String, extract config filename
  extract-config: ""
  # Bool, enable recon
  recon: false
  # Bool, enable active finger detect
  finger: false
  # Bool, enable bak found
  bak: false
  # Bool, enable valid result bak found, equal to --append-rule rule/filebak.txt
  file-bak: false
  # Bool, enable common file found
  common: false
  # Bool, enable crawl
  crawl: false
  # Int, crawl depth
  crawl-depth: 3
request:
  # Strings, custom headers, e.g.: --headers 'Auth: example_auth'
  headers: []
  # String, custom user-agent, e.g.: --user-agent Custom
  useragent: ""
  # Bool, use a random default user-agent
  random-useragent: false
  # Strings, custom cookies
  cookies: []
  # Bool, read all response body
  read-all: false
  # Int, max response body length (kb); -1 read all, 0 do not read body, default 100k, e.g. --max-length 1000
  max-length: 100
mode:
  # Int, request rate limit (rate/s), e.g.: --rate-limit 100
  rate-limit: 0
  # Bool, skip error break
  force: false
  # Bool, check only
  default: false
  # Bool, no scope
  no-scope: false
  # Strings, custom scope, e.g.: --scope *.example.com
  scope: []
  # String, custom recursive rule, e.g.: --recursive current.IsDir()
  recursive: current.IsDir()
  # Int, recursive depth
  depth: 0
  # String, custom index path
  index: /
  # String, custom random path
  random: ""
  # Int, check period when request
  check-period: 200
  # Int, check period when error
  error-period: 10
  # Int, break when the error count exceeds the threshold
  error-threshold: 20
  # Strings (comma split), custom black status
  black-status: 400,410
  # Strings (comma split), custom white status
  white-status: 200
  # Strings (comma split), custom fuzzy status
  fuzzy-status: 500,501,502,503
  # Strings (comma split), custom unique status
  unique-status: 403,200,404
  # Bool, unique response
  unique: false
  # Int, retry count
  retry: 0
  # Int, simhash distance for fuzzy comparison
  sim-distance: 5
misc:
  # String, path/host spray
  mod: path
  # String, client type
  client: auto
  # Int, deadline (seconds)
  deadline: 999999
  # Int, timeout per request (seconds)
  timeout: 5
  # Int, pool size
  pool: 5
  # Int, number of threads per pool
  thread: 20
  # Bool, output debug info
  debug: false
  # Bool, log verbose level, default 0; level 1: -v, level 2: -vv
  verbose: []
  # String, proxy address, e.g.: --proxy socks5://127.0.0.1:1080
  proxy: ""
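For reference, this is how a slice of the file above maps onto Go structs. spray decodes it with gookit/config via `core.LoadConfig` (shown later in this diff); the sketch below uses plain `gopkg.in/yaml.v3` instead, just to stay self-contained, and only models the `misc` section:

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// MiscConfig mirrors the misc: section of config.yaml above.
type MiscConfig struct {
	Mod     string `yaml:"mod"`
	Client  string `yaml:"client"`
	Timeout int    `yaml:"timeout"`
	Thread  int    `yaml:"thread"`
}

type Config struct {
	Misc MiscConfig `yaml:"misc"`
}

func main() {
	raw, err := os.ReadFile("config.yaml")
	if err != nil {
		panic(err)
	}
	var c Config
	if err := yaml.Unmarshal(raw, &c); err != nil {
		panic(err)
	}
	fmt.Printf("mod=%s client=%s timeout=%ds threads=%d\n",
		c.Misc.Mod, c.Misc.Client, c.Misc.Timeout, c.Misc.Thread)
}
```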


@@ -1,264 +0,0 @@
package baseline
import (
"bytes"
"github.com/chainreactors/fingers/common"
"github.com/chainreactors/parsers"
"github.com/chainreactors/spray/core/ihttp"
"github.com/chainreactors/spray/pkg"
"github.com/chainreactors/utils/encode"
"github.com/chainreactors/utils/iutils"
"net/http"
"net/url"
"strconv"
"strings"
)
func NewBaseline(u, host string, resp *ihttp.Response) *Baseline {
var err error
bl := &Baseline{
SprayResult: &parsers.SprayResult{
UrlString: u,
Status: resp.StatusCode(),
IsValid: true,
Frameworks: make(common.Frameworks),
},
}
if t, ok := pkg.ContentTypeMap[resp.ContentType()]; ok {
bl.ContentType = t
bl.Title = t + " data"
} else {
bl.ContentType = "other"
}
header := resp.Header()
bl.Header = make([]byte, len(header))
copy(bl.Header, header)
bl.HeaderLength = len(bl.Header)
if i := resp.ContentLength(); ihttp.CheckBodySize(i) {
if body := resp.Body(); body != nil {
bl.Body = make([]byte, len(body))
copy(bl.Body, body)
}
if i == -1 {
bl.Chunked = true
bl.BodyLength = len(bl.Body)
} else {
bl.BodyLength = int(i)
}
}
bl.Raw = append(bl.Header, bl.Body...)
bl.Response, err = pkg.ParseRawResponse(bl.Raw)
if err != nil {
bl.IsValid = false
bl.Reason = pkg.ErrResponseError.Error()
bl.ErrString = err.Error()
return bl
}
if r := bl.Response.Header.Get("Location"); r != "" {
bl.RedirectURL = r
} else {
bl.RedirectURL = bl.Response.Header.Get("location")
}
bl.Dir = bl.IsDir()
uu, err := url.Parse(u)
if err == nil {
bl.Path = uu.Path
bl.Url = uu
if uu.Host != host {
bl.Host = host
}
} else {
bl.IsValid = false
bl.Reason = pkg.ErrUrlError.Error()
bl.ErrString = err.Error()
}
bl.Unique = UniqueHash(bl)
return bl
}
func NewInvalidBaseline(u, host string, resp *ihttp.Response, reason string) *Baseline {
bl := &Baseline{
SprayResult: &parsers.SprayResult{
UrlString: u,
Status: resp.StatusCode(),
IsValid: false,
Reason: reason,
},
}
// read the body even for invalid responses, otherwise keep-alive won't take effect
resp.Body()
bl.BodyLength = int(resp.ContentLength())
bl.RedirectURL = string(resp.GetHeader("Location"))
bl.Dir = bl.IsDir()
uu, err := url.Parse(u)
if err == nil {
bl.Path = uu.Path
bl.Url = uu
} else {
return bl
}
if bl.Url.Host != host {
bl.Host = host
}
return bl
}
type Baseline struct {
*parsers.SprayResult
Url *url.URL `json:"-"`
Dir bool `json:"-"`
Chunked bool `json:"-"`
Body pkg.BS `json:"-"`
Header pkg.BS `json:"-"`
Raw pkg.BS `json:"-"`
Response *http.Response `json:"-"`
Recu bool `json:"-"`
RecuDepth int `json:"-"`
URLs []string `json:"-"`
Collected bool `json:"-"`
Retry int `json:"-"`
SameRedirectDomain bool `json:"-"`
IsBaseline bool `json:"-"`
}
func (bl *Baseline) IsDir() bool {
return strings.HasSuffix(bl.Path, "/")
}
// Collect gathers in-depth information from the response
func (bl *Baseline) Collect() {
if bl.Collected { // avoid collecting twice
return
} else {
bl.Collected = true
}
if bl.ContentType == "html" || bl.ContentType == "json" || bl.ContentType == "txt" {
// the fingerprint library was not designed with js/css fingerprints in mind; skipping unnecessary fingerprinting reduces false positives and improves performance
//fmt.Println(bl.Source, bl.Url.String()+bl.Path, bl.RedirectURL, "call fingersengine")
if pkg.EnableAllFingerEngine {
bl.Frameworks = pkg.EngineDetect(bl.Raw)
} else {
bl.Frameworks = pkg.FingersDetect(bl.Raw)
}
}
if len(bl.Body) > 0 {
if bl.ContentType == "html" {
bl.Title = iutils.AsciiEncode(parsers.MatchTitle(bl.Body))
} else if bl.ContentType == "ico" {
if frame := pkg.FingerEngine.Favicon().Match(bl.Body); frame != nil {
bl.Frameworks.Merge(frame)
}
}
}
bl.Hashes = parsers.NewHashes(bl.Raw)
bl.Extracteds.Merge(pkg.Extractors.Extract(string(bl.Raw), true))
bl.Unique = UniqueHash(bl)
}
func (bl *Baseline) CollectURL() {
if len(bl.Body) == 0 {
return
}
for _, reg := range pkg.ExtractRegexps["js"][0].CompiledRegexps {
urls := reg.FindAllStringSubmatch(string(bl.Body), -1)
for _, u := range urls {
u[1] = pkg.CleanURL(u[1])
if u[1] != "" && !pkg.FilterJs(u[1]) {
bl.URLs = append(bl.URLs, u[1])
}
}
}
for _, reg := range pkg.ExtractRegexps["url"][0].CompiledRegexps {
urls := reg.FindAllStringSubmatch(string(bl.Body), -1)
for _, u := range urls {
u[1] = pkg.CleanURL(u[1])
if u[1] != "" && !pkg.FilterUrl(u[1]) {
bl.URLs = append(bl.URLs, u[1])
}
}
}
bl.URLs = iutils.StringsUnique(bl.URLs)
if len(bl.URLs) != 0 {
bl.Extracteds = append(bl.Extracteds, &parsers.Extracted{
Name: "crawl",
ExtractResult: bl.URLs,
})
}
}
// Compare
// if totally equal return 1
// if maybe equal return 0
// not equal return -1
func (bl *Baseline) Compare(other *Baseline) int {
if other.RedirectURL != "" && bl.RedirectURL == other.RedirectURL {
// 如果重定向url不为空, 且与base不相同, 则说明不是同一个页面
return 1
}
if bl.BodyLength == other.BodyLength {
// 如果body length相等且md5相等, 则说明是同一个页面
if bytes.Equal(bl.Body, other.Body) {
// 如果length相等, md5也相等, 则判断为全同
return 1
} else {
// 如果长度相等, 但是md5不相等, 可能是存在csrftoken之类的随机值
return 0
}
} else if i := bl.BodyLength - other.BodyLength; (i < 16 && i > 0) || (i > -16 && i < 0) {
// 如果body length绝对值小于16, 则可能是存在csrftoken之类的随机值, 需要模糊判断
return 0
} else {
// 如果body length绝对值大于16, 则认为大概率存在较大差异
if strings.Contains(string(other.Body), other.Path) {
// 如果包含路径本身, 可能是路径自身的随机值影响结果
return 0
} else {
// 如果不包含路径本身, 则认为是不同页面
return -1
}
}
return -1
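// Worked example (illustrative): bodies of 1000 and 1009 bytes differ by
// fewer than 16 bytes, so Compare returns 0 ("maybe equal") and the caller
// can fall back to FuzzyCompare below; 1000 vs 1200 bytes with no path echo
// returns -1 immediately.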
}
func (bl *Baseline) ProbeOutput(format []string) string {
var s strings.Builder
for _, f := range format {
s.WriteString("\t")
s.WriteString(bl.Get(f))
}
return strings.TrimSpace(s.String())
}
var Distance uint8 = 5 // smaller distance means more similar; 0 means identical.
func (bl *Baseline) FuzzyCompare(other *Baseline) bool {
// RawSimhash is used here to guarantee enough input characters; a very short body skews simhash badly
if other.Distance = encode.SimhashCompare(other.RawSimhash, bl.RawSimhash); other.Distance < Distance {
return true
}
return false
}
func UniqueHash(bl *Baseline) uint16 {
// hash of host + status code + redirect URL + content-type + title + body length rounded down to the nearest 10
// body length may cause occasional false positives; there is no better solution yet
return pkg.CRC16Hash([]byte(bl.Host + strconv.Itoa(bl.Status) + bl.RedirectURL + bl.ContentType + bl.Title + strconv.Itoa(bl.BodyLength/10*10)))
}
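The bucketing trick in UniqueHash is easy to miss: rounding the body length down to the nearest 10 bytes keeps pages with tiny random variations in the same bucket. A self-contained sketch of the idea, with `crc32` from the standard library standing in for `pkg.CRC16Hash`:

```go
package main

import (
	"fmt"
	"hash/crc32"
	"strconv"
)

// uniqueKey mimics UniqueHash above: bucket the body length so that small
// random variations (timestamps, CSRF tokens) do not create a new "unique" page.
func uniqueKey(host string, status int, redirect, contentType, title string, bodyLen int) uint32 {
	bucketed := bodyLen / 10 * 10
	return crc32.ChecksumIEEE([]byte(host + strconv.Itoa(status) + redirect + contentType + title + strconv.Itoa(bucketed)))
}

func main() {
	a := uniqueKey("example.com", 200, "", "html", "Index", 1234)
	b := uniqueKey("example.com", 200, "", "html", "Index", 1238) // a few bytes of noise
	fmt.Println(a == b)                                           // true: same bucket
}
```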


@@ -1,202 +0,0 @@
package core
import (
"fmt"
"github.com/gookit/config/v2"
"reflect"
"strconv"
"strings"
)
//var (
// defaultConfigPath = ".config/spray/"
// defaultConfigFile = "config.yaml"
//)
//
//func LoadDefault(v interface{}) {
// dir, err := os.UserHomeDir()
// if err != nil {
// logs.Log.Error(err.Error())
// return
// }
// if !files.IsExist(filepath.Join(dir, defaultConfigPath, defaultConfigFile)) {
// err := os.MkdirAll(filepath.Join(dir, defaultConfigPath), 0o700)
// if err != nil {
// logs.Log.Error(err.Error())
// return
// }
// f, err := os.Create(filepath.Join(dir, defaultConfigPath, defaultConfigFile))
// if err != nil {
// logs.Log.Error(err.Error())
// return
// }
// err = LoadConfig(filepath.Join(dir, defaultConfigPath, defaultConfigFile), v)
// if err != nil {
// logs.Log.Error(err.Error())
// return
// }
// var buf bytes.Buffer
// _, err = config.DumpTo(&buf, config.Yaml)
// if err != nil {
// logs.Log.Error(err.Error())
// return
// }
// fmt.Println(buf.String())
// f.Sync()
// }
//}
func LoadConfig(filename string, v interface{}) error {
err := config.LoadFiles(filename)
if err != nil {
return err
}
err = config.Decode(v)
if err != nil {
return err
}
return nil
}
func convertToFieldType(fieldType reflect.StructField, defaultVal string) interface{} {
switch fieldType.Type.Kind() {
case reflect.Bool:
val, err := strconv.ParseBool(defaultVal)
if err == nil {
return val
}
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
val, err := strconv.ParseInt(defaultVal, 10, 64)
if err == nil {
return val
}
case reflect.Float32, reflect.Float64:
val, err := strconv.ParseFloat(defaultVal, 64)
if err == nil {
return val
}
case reflect.String:
return defaultVal
// extend with other types as needed
}
return nil // conversion failed or unsupported type
}
func setFieldValue(field reflect.Value) interface{} {
switch field.Kind() {
case reflect.Bool:
return false
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return 0
case reflect.Float32, reflect.Float64:
return 0.0
case reflect.Slice, reflect.Array:
return []interface{}{} // return an empty slice
case reflect.String:
return ""
case reflect.Struct:
return make(map[string]interface{})
default:
return nil
}
}
func extractConfigAndDefaults(v reflect.Value, result map[string]interface{}, comments map[string]string) {
t := v.Type()
for i := 0; i < v.NumField(); i++ {
field := v.Field(i)
fieldType := t.Field(i)
configTag := fieldType.Tag.Get("config")
defaultTag := fieldType.Tag.Get("default")
descriptionTag := fieldType.Tag.Get("description") // read the description tag
if configTag != "" {
var value interface{}
if defaultTag != "" {
value = convertToFieldType(fieldType, defaultTag)
} else {
value = setFieldValue(field)
}
fullPath := configTag // in the recursive case the full path may need to be built
if field.Kind() == reflect.Struct {
nestedResult := make(map[string]interface{})
nestedComments := make(map[string]string)
extractConfigAndDefaults(field, nestedResult, nestedComments)
result[configTag] = nestedResult
for k, v := range nestedComments {
comments[fullPath+"."+k] = v // preserve the nested comment's path
}
} else {
result[configTag] = value
if descriptionTag != "" {
comments[fullPath] = descriptionTag
}
}
}
}
}
func InitDefaultConfig(cfg interface{}, indentLevel int) string {
var yamlStr strings.Builder
v := reflect.ValueOf(cfg)
if v.Kind() == reflect.Ptr {
v = v.Elem() // dereference the pointer
}
t := v.Type()
for i := 0; i < v.NumField(); i++ {
field := v.Field(i)
fieldType := t.Field(i)
configTag := fieldType.Tag.Get("config")
if configTag == "" {
continue // skip fields without a config tag
}
defaultTag := fieldType.Tag.Get("default")
descriptionTag := fieldType.Tag.Get("description")
// emit the description as a comment
if descriptionTag != "" {
yamlStr.WriteString(fmt.Sprintf("%s# %s\n", strings.Repeat(" ", indentLevel*2), descriptionTag))
}
// prepare the value
valueStr := prepareValue(fieldType.Type.Kind(), defaultTag)
// handle by field kind
switch field.Kind() {
case reflect.Struct:
// recurse to generate YAML for nested structs
yamlStr.WriteString(fmt.Sprintf("%s%s:\n%s", strings.Repeat(" ", indentLevel*2), configTag, InitDefaultConfig(field.Interface(), indentLevel+1)))
default:
// emit the key-value pair directly
yamlStr.WriteString(fmt.Sprintf("%s%s: %s\n", strings.Repeat(" ", indentLevel*2), configTag, valueStr))
}
}
return yamlStr.String()
}
// prepareValue builds the final value string from the field kind and the default tag
func prepareValue(kind reflect.Kind, defaultVal string) string {
if defaultVal != "" {
return defaultVal
}
// return an empty default for the kind
switch kind {
case reflect.Bool:
return "false"
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return "0"
case reflect.Float32, reflect.Float64:
return "0.0"
case reflect.Slice, reflect.Array:
return "[]"
case reflect.String:
return `""`
case reflect.Struct, reflect.Map:
return "{}"
default:
return `""`
}
}
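A sketch of how `InitDefaultConfig` above renders struct tags into a commented YAML skeleton; the `Demo` type is hypothetical but uses the same `config`/`default`/`description` tag conventions as spray's Option structs:

```go
package core

import "fmt"

// Demo is a hypothetical config struct for illustration.
type Demo struct {
	Timeout int    `config:"timeout" default:"5" description:"Int, timeout with request (seconds)"`
	Mod     string `config:"mod" default:"path" description:"String, path/host spray"`
}

func ExampleInitDefaultConfig() {
	fmt.Print(InitDefaultConfig(Demo{}, 0))
	// Output:
	// # Int, timeout with request (seconds)
	// timeout: 5
	// # String, path/host spray
	// mod: path
}
```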


@@ -1,181 +0,0 @@
package core
import (
"fmt"
"github.com/chainreactors/files"
"github.com/chainreactors/fingers"
"github.com/chainreactors/fingers/resources"
"github.com/chainreactors/logs"
"github.com/chainreactors/utils/encode"
"github.com/chainreactors/utils/iutils"
"io"
"net/http"
"os"
"path/filepath"
"strings"
)
var (
DefaultFingerPath = "fingers"
DefaultFingerTemplate = "fingers/templates"
FingerConfigs = map[string]string{
fingers.FingersEngine: "fingers_http.json.gz",
fingers.FingerPrintEngine: "fingerprinthub_v3.json.gz",
fingers.WappalyzerEngine: "wappalyzer.json.gz",
fingers.EHoleEngine: "ehole.json.gz",
fingers.GobyEngine: "goby.json.gz",
}
baseURL = "https://raw.githubusercontent.com/chainreactors/fingers/master/resources/"
)
type FingerOptions struct {
Finger bool `long:"finger" description:"Bool, enable active finger detect" config:"finger"`
FingerUpdate bool `long:"update" description:"Bool, update finger database" config:"update"`
FingerPath string `long:"finger-path" default:"fingers" description:"String, 3rd finger config path" config:"finger-path"`
//FingersTemplatesPath string `long:"finger-template" default:"fingers/templates" description:"Bool, use finger templates path" config:"finger-template"`
FingerEngines string `long:"finger-engine" default:"all" description:"String, custom finger engine, e.g. --finger-engine ehole,goby" config:"finger-engine"`
}
func (opt *FingerOptions) Validate() error {
var err error
if opt.FingerUpdate {
if opt.FingerPath != DefaultFingerPath && !files.IsExist(opt.FingerPath) {
err = os.MkdirAll(opt.FingerPath, 0755)
if err != nil {
return err
}
} else if !files.IsExist(DefaultFingerPath) {
opt.FingerPath = DefaultFingerPath
err = os.MkdirAll(DefaultFingerPath, 0755)
if err != nil {
return err
}
}
//if opt.FingersTemplatesPath != DefaultFingerTemplate && !files.IsExist(opt.FingersTemplatesPath) {
// err = os.MkdirAll(opt.FingersTemplatesPath, 0755)
// if err != nil {
// return err
// }
//} else if !files.IsExist(DefaultFingerTemplate) {
// err = os.MkdirAll(DefaultFingerTemplate, 0755)
// if err != nil {
// return err
// }
//}
}
if opt.FingerEngines != "all" {
for _, name := range strings.Split(opt.FingerEngines, ",") {
if !iutils.StringsContains(fingers.AllEngines, name) {
return fmt.Errorf("invalid finger engine: %s, please input one of %v", name, fingers.FingersEngine)
}
}
}
return nil
}
func (opt *FingerOptions) LoadLocalFingerConfig() error {
for name, fingerPath := range FingerConfigs {
if content, err := os.ReadFile(fingerPath); err == nil {
if encode.Md5Hash(content) != resources.CheckSum[name] {
logs.Log.Importantf("found %s difference, use %s replace embed", name, fingerPath)
switch name {
case fingers.FingersEngine:
resources.FingersHTTPData = content
case fingers.FingerPrintEngine:
resources.Fingerprinthubdata = content
case fingers.EHoleEngine:
resources.EholeData = content
case fingers.GobyEngine:
resources.GobyData = content
case fingers.WappalyzerEngine:
resources.WappalyzerData = content
default:
return fmt.Errorf("unknown engine name")
}
} else {
logs.Log.Infof("%s config is up to date", name)
}
}
}
return nil
}
func (opt *FingerOptions) UpdateFinger() error {
modified := false
for name := range FingerConfigs {
if ok, err := opt.downloadConfig(name); err != nil {
return err
} else {
if ok {
modified = ok
}
}
}
if !modified {
logs.Log.Importantf("everything is up to date")
}
return nil
}
func (opt *FingerOptions) downloadConfig(name string) (bool, error) {
fingerFile, ok := FingerConfigs[name]
if !ok {
return false, fmt.Errorf("unknown engine name")
}
url := baseURL + fingerFile
resp, err := http.Get(url)
if err != nil {
return false, err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return false, fmt.Errorf("bad status: %s", resp.Status)
}
content, err := io.ReadAll(resp.Body)
if err != nil {
return false, err
}
filePath := filepath.Join(files.GetExcPath(), opt.FingerPath, fingerFile)
if files.IsExist(filePath) {
origin, err := os.ReadFile(filePath)
if err != nil {
return false, err
}
if resources.CheckSum[name] != encode.Md5Hash(origin) {
logs.Log.Importantf("update %s config from %s save to %s", name, url, fingerFile)
err = os.WriteFile(filePath, content, 0644)
if err != nil {
return false, err
}
return true, nil
}
} else {
out, err := os.Create(filePath)
if err != nil {
return false, err
}
defer out.Close()
logs.Log.Importantf("download %s config from %s save to %s", name, url, fingerFile)
err = os.WriteFile(filePath, content, 0644)
if err != nil {
return false, err
}
}
if err != nil {
return false, err
}
if origin, err := os.ReadFile(filePath); err == nil {
if encode.Md5Hash(content) != encode.Md5Hash(origin) {
logs.Log.Infof("download %s config from %s save to %s", name, url, fingerFile)
err = os.WriteFile(filePath, content, 0644)
if err != nil {
return false, err
}
return true, nil
}
}
return false, nil
}
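The update logic above boils down to an MD5 gate: a local fingerprint file only overrides the embedded one, and a download is only persisted, when checksums differ. A self-contained sketch of the gate, with `crypto/md5` standing in for `encode.Md5Hash` and a hypothetical checksum value:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"os"
)

// md5Hash mirrors what encode.Md5Hash is assumed to compute: a hex MD5 digest.
func md5Hash(b []byte) string {
	sum := md5.Sum(b)
	return hex.EncodeToString(sum[:])
}

func main() {
	embeddedSum := "d41d8cd98f00b204e9800998ecf8427e" // hypothetical embedded checksum
	if content, err := os.ReadFile("fingers/fingers_http.json.gz"); err == nil {
		if md5Hash(content) != embeddedSum {
			fmt.Println("local config differs from embed, overriding")
		} else {
			fmt.Println("config is up to date")
		}
	}
}
```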


@@ -1,92 +0,0 @@
package core
import (
"bytes"
"encoding/json"
"github.com/chainreactors/logs"
"github.com/chainreactors/spray/core/baseline"
"github.com/chainreactors/spray/pkg"
"github.com/chainreactors/words/mask"
"io"
"net/url"
"os"
"strings"
)
func Format(opts Option) {
var content []byte
var err error
if opts.Format == "stdin" {
content, err = io.ReadAll(os.Stdin)
} else {
content, err = os.ReadFile(opts.Format)
}
if err != nil {
return
}
group := make(map[string]map[string]*baseline.Baseline)
for _, line := range bytes.Split(bytes.TrimSpace(content), []byte("\n")) {
var result baseline.Baseline
err := json.Unmarshal(line, &result)
if err != nil {
logs.Log.Error(err.Error())
return
}
result.Url, err = url.Parse(result.UrlString)
if err != nil {
continue
}
if _, exists := group[result.Url.Host]; !exists {
group[result.Url.Host] = make(map[string]*baseline.Baseline)
}
group[result.Url.Host][result.Path] = &result
}
for _, results := range group {
for _, result := range results {
if !opts.Fuzzy && result.IsFuzzy {
continue
}
if opts.OutputProbe == "" {
if !opts.NoColor {
logs.Log.Console(result.ColorString() + "\n")
} else {
logs.Log.Console(result.String() + "\n")
}
} else {
probes := strings.Split(opts.OutputProbe, ",")
logs.Log.Console(result.ProbeOutput(probes) + "\n")
}
}
}
}
func PrintPreset() {
logs.Log.Console("internal rules:\n")
for name, rule := range pkg.Rules {
logs.Log.Consolef("\t%s\t%d rules\n", name, len(strings.Split(rule, "\n")))
}
logs.Log.Console("\ninternal dicts:\n")
for name, dict := range pkg.Dicts {
logs.Log.Consolef("\t%s\t%d items\n", name, len(dict))
}
logs.Log.Console("\ninternal words keyword:\n")
for name, words := range mask.SpecialWords {
logs.Log.Consolef("\t%s\t%d words\n", name, len(words))
}
logs.Log.Console("\ninternal extractor:\n")
for name := range pkg.ExtractRegexps {
logs.Log.Consolef("\t%s\n", name)
}
logs.Log.Console("\ninternal fingers:\n")
for name, engine := range pkg.FingerEngine.EnginesImpl {
logs.Log.Consolef("\t%s\t%d fingerprints \n", name, engine.Len())
}
logs.Log.Consolef("\nload %d active path\n", len(pkg.ActivePath))
}


@@ -1,136 +0,0 @@
package ihttp
import (
"context"
"crypto/tls"
"fmt"
"github.com/chainreactors/proxyclient"
"github.com/valyala/fasthttp"
"net"
"net/http"
"time"
)
var (
DefaultMaxBodySize int64 = 1024 * 100 // 100k
)
func CheckBodySize(size int64) bool {
if DefaultMaxBodySize == -1 {
return true
}
if DefaultMaxBodySize == 0 {
return false
}
return size < DefaultMaxBodySize
}
const (
Auto = iota
FAST
STANDARD
)
func NewClient(config *ClientConfig) *Client {
var client *Client
if config.Type == FAST {
client = &Client{
fastClient: &fasthttp.Client{
TLSConfig: &tls.Config{
Renegotiation: tls.RenegotiateOnceAsClient,
InsecureSkipVerify: true,
},
Dial: customDialFunc(config.ProxyClient, config.Timeout),
MaxConnsPerHost: config.Thread * 3 / 2,
MaxIdleConnDuration: config.Timeout,
//MaxConnWaitTimeout: time.Duration(timeout) * time.Second,
ReadTimeout: config.Timeout,
WriteTimeout: config.Timeout,
ReadBufferSize: 16384, // 16k
MaxResponseBodySize: int(DefaultMaxBodySize),
NoDefaultUserAgentHeader: true,
DisablePathNormalizing: true,
DisableHeaderNamesNormalizing: true,
},
ClientConfig: config,
}
} else {
client = &Client{
standardClient: &http.Client{
Transport: &http.Transport{
DialContext: config.ProxyClient,
TLSClientConfig: &tls.Config{
Renegotiation: tls.RenegotiateNever,
InsecureSkipVerify: true,
},
TLSHandshakeTimeout: config.Timeout,
MaxConnsPerHost: config.Thread * 3 / 2,
IdleConnTimeout: config.Timeout,
ReadBufferSize: 16384, // 16k
},
Timeout: config.Timeout,
CheckRedirect: func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
},
},
ClientConfig: config,
}
}
return client
}
type ClientConfig struct {
Type int
Timeout time.Duration
Thread int
ProxyClient proxyclient.Dial
}
type Client struct {
fastClient *fasthttp.Client
standardClient *http.Client
*ClientConfig
}
func (c *Client) TransToCheck() {
if c.fastClient != nil {
c.fastClient.MaxConnsPerHost = -1 // disable keepalive
} else if c.standardClient != nil {
c.standardClient.Transport.(*http.Transport).DisableKeepAlives = true // disable keepalive
}
}
func (c *Client) FastDo(req *fasthttp.Request) (*fasthttp.Response, error) {
resp := fasthttp.AcquireResponse()
err := c.fastClient.DoTimeout(req, resp, c.Timeout)
return resp, err
}
func (c *Client) StandardDo(req *http.Request) (*http.Response, error) {
return c.standardClient.Do(req)
}
func (c *Client) Do(req *Request) (*Response, error) {
if c.fastClient != nil {
resp, err := c.FastDo(req.FastRequest)
return &Response{FastResponse: resp, ClientType: FAST}, err
} else if c.standardClient != nil {
resp, err := c.StandardDo(req.StandardRequest)
return &Response{StandardResponse: resp, ClientType: STANDARD}, err
} else {
return nil, fmt.Errorf("not found client")
}
}
func customDialFunc(dialer proxyclient.Dial, timeout time.Duration) fasthttp.DialFunc {
if dialer == nil {
return func(addr string) (net.Conn, error) {
return fasthttp.DialTimeout(addr, timeout)
}
}
return func(addr string) (net.Conn, error) {
ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer cancel() // release the timer once the dial returns
return dialer.DialContext(ctx, "tcp", addr)
}
}
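A usage sketch for the dual-backend client above: with `Type: FAST` the fasthttp branch is selected, and a nil `ProxyClient` makes `customDialFunc` fall back to `fasthttp.DialTimeout`:

```go
package ihttp

import (
	"fmt"
	"time"
)

func ExampleNewClient() {
	client := NewClient(&ClientConfig{
		Type:    FAST, // select the fasthttp backend
		Timeout: 5 * time.Second,
		Thread:  20,
	})
	fmt.Println(client.fastClient != nil)
	// Output: true
}
```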


@@ -1,924 +0,0 @@
package core
import (
"bufio"
"errors"
"fmt"
"github.com/chainreactors/files"
"github.com/chainreactors/logs"
"github.com/chainreactors/parsers"
"github.com/chainreactors/proxyclient"
"github.com/chainreactors/spray/core/baseline"
"github.com/chainreactors/spray/core/ihttp"
"github.com/chainreactors/spray/core/pool"
"github.com/chainreactors/spray/pkg"
"github.com/chainreactors/utils"
"github.com/chainreactors/utils/iutils"
"github.com/chainreactors/words/mask"
"github.com/chainreactors/words/rule"
"github.com/charmbracelet/lipgloss"
"github.com/expr-lang/expr"
"github.com/vbauerster/mpb/v8"
"io/ioutil"
"net/http"
"net/url"
"os"
"path/filepath"
"regexp"
"strconv"
"strings"
"sync"
"time"
)
var (
DefaultThreads = 20
)
type Option struct {
InputOptions `group:"Input Options" config:"input" `
FunctionOptions `group:"Function Options" config:"functions" `
OutputOptions `group:"Output Options" config:"output"`
PluginOptions `group:"Plugin Options" config:"plugins"`
FingerOptions `group:"Finger Options" config:"finger"`
RequestOptions `group:"Request Options" config:"request"`
ModeOptions `group:"Modify Options" config:"mode"`
MiscOptions `group:"Miscellaneous Options" config:"misc"`
}
type InputOptions struct {
ResumeFrom string `long:"resume" description:"File, resume filename" `
Config string `short:"c" long:"config" description:"File, config filename"`
URL []string `short:"u" long:"url" description:"Strings, input baseurl, e.g.: http://google.com"`
URLFile string `short:"l" long:"list" description:"File, input filename"`
PortRange string `short:"p" long:"port" description:"String, input port range, e.g.: 80,8080-8090,db"`
CIDRs []string `short:"i" long:"cidr" description:"String, input cidr, e.g.: 1.1.1.1/24 "`
RawFile string `long:"raw" description:"File, input raw request filename"`
Dictionaries []string `short:"d" long:"dict" description:"Files, Multi,dict files, e.g.: -d 1.txt -d 2.txt" config:"dictionaries"`
DefaultDict bool `short:"D" long:"default" description:"Bool, use default dictionary" config:"default"`
Word string `short:"w" long:"word" description:"String, word generate dsl, e.g.: -w test{?ld#4}" config:"word"`
Rules []string `short:"r" long:"rules" description:"Files, rule files, e.g.: -r rule1.txt -r rule2.txt" config:"rules"`
AppendRule []string `short:"R" long:"append-rule" description:"Files, when found valid path , use append rule generator new word with current path" config:"append-rules"`
FilterRule string `long:"filter-rule" description:"String, filter rule, e.g.: --rule-filter '>8 <4'" config:"filter-rule"`
AppendFile []string `long:"append" description:"Files, when found valid path , use append file new word with current path" config:"append-files"`
Offset int `long:"offset" description:"Int, wordlist offset"`
Limit int `long:"limit" description:"Int, wordlist limit, start with offset. e.g.: --offset 1000 --limit 100"`
}
type FunctionOptions struct {
Extensions string `short:"e" long:"extension" description:"String, add extensions (separated by commas), e.g.: -e jsp,jspx" config:"extension"`
ForceExtension bool `long:"force-extension" description:"Bool, force add extensions" config:"force-extension"`
ExcludeExtensions string `long:"exclude-extension" description:"String, exclude extensions (separated by commas), e.g.: --exclude-extension jsp,jspx" config:"exclude-extension"`
RemoveExtensions string `long:"remove-extension" description:"String, remove extensions (separated by commas), e.g.: --remove-extension jsp,jspx" config:"remove-extension"`
Uppercase bool `short:"U" long:"uppercase" description:"Bool, upper wordlist, e.g.: --uppercase" config:"upper"`
Lowercase bool `short:"L" long:"lowercase" description:"Bool, lower wordlist, e.g.: --lowercase" config:"lower"`
Prefixes []string `long:"prefix" description:"Strings, add prefix, e.g.: --prefix aaa --prefix bbb" config:"prefix"`
Suffixes []string `long:"suffix" description:"Strings, add suffix, e.g.: --suffix aaa --suffix bbb" config:"suffix"`
Replaces map[string]string `long:"replace" description:"Strings, replace string, e.g.: --replace aaa:bbb --replace ccc:ddd" config:"replace"`
Skips []string `long:"skip" description:"String, skip word when generate. rule, e.g.: --skip aaa" config:"skip"`
//SkipEval string `long:"skip-eval" description:"String, skip word when generate. rule, e.g.: --skip-eval 'current.Length < 4'"`
}
type OutputOptions struct {
Match string `long:"match" description:"String, custom match function, e.g.: --match 'current.Status != 200''" config:"match" `
Filter string `long:"filter" description:"String, custom filter function, e.g.: --filter 'current.Body contains \"hello\"'" config:"filter"`
Fuzzy bool `long:"fuzzy" description:"String, open fuzzy output" config:"fuzzy"`
OutputFile string `short:"f" long:"file" description:"String, output filename" json:"output_file,omitempty" config:"output-file"`
DumpFile string `long:"dump-file" description:"String, dump all request, and write to filename" config:"dump-file"`
Dump bool `long:"dump" description:"Bool, dump all request" config:"dump"`
AutoFile bool `long:"auto-file" description:"Bool, auto generator output and fuzzy filename" config:"auto-file"`
Format string `short:"F" long:"format" description:"String, output format, e.g.: --format 1.json" config:"format"`
Json bool `short:"j" long:"json" description:"Bool, output json" config:"json"`
FileOutput string `short:"O" long:"file-output" default:"json" description:"Bool, file output format" config:"file_output"`
OutputProbe string `short:"o" long:"probe" description:"String, output format" config:"output"`
Quiet bool `short:"q" long:"quiet" description:"Bool, Quiet" config:"quiet"`
NoColor bool `long:"no-color" description:"Bool, no color" config:"no-color"`
NoBar bool `long:"no-bar" description:"Bool, No progress bar" config:"no-bar"`
NoStat bool `long:"no-stat" description:"Bool, No stat file output" config:"no-stat"`
}
type RequestOptions struct {
Method string `short:"X" long:"method" default:"GET" description:"String, request method, e.g.: --method POST" config:"method"`
Headers []string `short:"H" long:"header" description:"Strings, custom headers, e.g.: --header 'Auth: example_auth'" config:"headers"`
UserAgent string `long:"user-agent" description:"String, custom user-agent, e.g.: --user-agent Custom" config:"useragent"`
RandomUserAgent bool `long:"random-agent" description:"Bool, use random with default user-agent" config:"random-useragent"`
Cookie []string `long:"cookie" description:"Strings, custom cookie" config:"cookies"`
ReadAll bool `long:"read-all" description:"Bool, read all response body" config:"read-all"`
MaxBodyLength int64 `long:"max-length" default:"100" description:"Int, max response body length (kb), -1 read-all, 0 not read body, default 100k, e.g. --max-length 1000" config:"max-length"`
}
type PluginOptions struct {
Advance bool `short:"a" long:"advance" description:"Bool, enable all plugin" config:"all" `
Extracts []string `long:"extract" description:"Strings, extract response, e.g.: --extract js --extract ip --extract version:(.*?)" config:"extract"`
ExtractConfig string `long:"extract-config" description:"String, extract config filename" config:"extract-config"`
ActivePlugin bool `long:"active" description:"Bool, enable active finger path"`
ReconPlugin bool `long:"recon" description:"Bool, enable recon" config:"recon"`
BakPlugin bool `long:"bak" description:"Bool, enable bak found" config:"bak"`
FuzzuliPlugin bool `long:"fuzzuli" description:"Bool, enable fuzzuli plugin" config:"fuzzuli"`
CommonPlugin bool `long:"common" description:"Bool, enable common file found" config:"common"`
CrawlPlugin bool `long:"crawl" description:"Bool, enable crawl" config:"crawl"`
CrawlDepth int `long:"crawl-depth" default:"3" description:"Int, crawl depth" config:"crawl-depth"`
AppendDepth int `long:"append-depth" default:"2" description:"Int, append depth" config:"append-depth"`
}
type ModeOptions struct {
RateLimit int `long:"rate-limit" default:"0" description:"Int, request rate limit (rate/s), e.g.: --rate-limit 100" config:"rate-limit"`
Force bool `long:"force" description:"Bool, skip error break" config:"force"`
NoScope bool `long:"no-scope" description:"Bool, no scope" config:"no-scope"`
Scope []string `long:"scope" description:"String, custom scope, e.g.: --scope *.example.com" config:"scope"`
Recursive string `long:"recursive" default:"current.IsDir()" description:"String,custom recursive rule, e.g.: --recursive current.IsDir()" config:"recursive"`
Depth int `long:"depth" default:"0" description:"Int, recursive depth" config:"depth"`
Index string `long:"index" default:"/" description:"String, custom index path" config:"index"`
Random string `long:"random" default:"" description:"String, custom random path" config:"random"`
CheckPeriod int `long:"check-period" default:"200" description:"Int, check period when request" config:"check-period"`
ErrPeriod int `long:"error-period" default:"10" description:"Int, check period when error" config:"error-period"`
BreakThreshold int `long:"error-threshold" default:"20" description:"Int, break when the error exceeds the threshold" config:"error-threshold"`
BlackStatus string `short:"B" long:"black-status" default:"400,410" description:"Strings (comma split),custom black status" config:"black-status"`
WhiteStatus string `short:"W" long:"white-status" default:"200" description:"Strings (comma split), custom white status" config:"white-status"`
FuzzyStatus string `long:"fuzzy-status" default:"500,501,502,503,301,302,404" description:"Strings (comma split), custom fuzzy status" config:"fuzzy-status"`
UniqueStatus string `long:"unique-status" default:"403,200,404" description:"Strings (comma split), custom unique status" config:"unique-status"`
Unique bool `long:"unique" description:"Bool, unique response" config:"unique"`
RetryCount int `long:"retry" default:"0" description:"Int, retry count" config:"retry"`
SimhashDistance int `long:"sim-distance" default:"8" config:"sim-distance"`
}
type MiscOptions struct {
Mod string `short:"m" long:"mod" default:"path" choice:"path" choice:"host" description:"String, path/host spray" config:"mod"`
Client string `short:"C" long:"client" default:"auto" choice:"fast" choice:"standard" choice:"auto" description:"String, Client type" config:"client"`
Deadline int `long:"deadline" default:"999999" description:"Int, deadline (seconds)" config:"deadline"` // TODO: overall timeout, to fit serverless function deadlines
Timeout int `short:"T" long:"timeout" default:"5" description:"Int, timeout with request (seconds)" config:"timeout"`
PoolSize int `short:"P" long:"pool" default:"5" description:"Int, Pool size" config:"pool"`
Threads int `short:"t" long:"thread" default:"20" description:"Int, number of threads per pool" config:"thread"`
Debug bool `long:"debug" description:"Bool, output debug info" config:"debug"`
Version bool `long:"version" description:"Bool, show version"`
Verbose []bool `short:"v" description:"Bool, log verbose level ,default 0, level1: -v level2 -vv " config:"verbose"`
Proxies []string `long:"proxy" description:"String, proxy address, e.g.: --proxy socks5://127.0.0.1:1080" config:"proxies"`
InitConfig bool `long:"init" description:"Bool, init config file"`
PrintPreset bool `long:"print" description:"Bool, print preset all preset config "`
}
func (opt *Option) Validate() error {
if opt.Uppercase && opt.Lowercase {
return errors.New("cannot set -U and -L at the same time")
}
if (opt.Offset != 0 || opt.Limit != 0) && opt.Depth > 0 {
// offset/limit combined with recursion would also be confusing.
return errors.New("--offset and --limit cannot be used with --depth at the same time")
}
if opt.Depth > 0 && opt.ResumeFrom != "" {
// recursion conflicts with resume: resumed words and rules do not come from the command line
return errors.New("--resume and --depth cannot be used at the same time")
}
if opt.ResumeFrom == "" && len(opt.URL) == 0 && opt.URLFile == "" && len(opt.CIDRs) == 0 && opt.RawFile == "" {
return fmt.Errorf("without any target, please use -u/-l/-c/--resume to set targets")
}
return nil
}
func (opt *Option) Prepare() error {
var err error
logs.Log.SetColor(true)
if err = opt.FingerOptions.Validate(); err != nil {
return err
}
if opt.FingerUpdate {
err = opt.UpdateFinger()
if err != nil {
return err
}
}
err = opt.LoadLocalFingerConfig()
if err != nil {
return err
}
err = opt.Validate()
if err != nil {
return err
}
err = pkg.LoadFingers()
if err != nil {
return err
}
err = pkg.Load()
if err != nil {
return err
}
if opt.Extracts != nil {
for _, e := range opt.Extracts {
if reg, ok := pkg.ExtractRegexps[e]; ok {
pkg.Extractors[e] = reg
} else {
pkg.Extractors[e] = []*parsers.Extractor{
&parsers.Extractor{
Name: e,
CompiledRegexps: []*regexp.Regexp{regexp.MustCompile(e)},
},
}
}
}
}
if opt.ExtractConfig != "" {
extracts, err := pkg.LoadExtractorConfig(opt.ExtractConfig)
if err != nil {
return err
}
pkg.Extractors[opt.ExtractConfig] = extracts
}
// initialize global variables
baseline.Distance = uint8(opt.SimhashDistance)
if opt.MaxBodyLength == -1 {
ihttp.DefaultMaxBodySize = -1
} else {
ihttp.DefaultMaxBodySize = opt.MaxBodyLength * 1024
}
pkg.BlackStatus = pkg.ParseStatus(pkg.DefaultBlackStatus, opt.BlackStatus)
pkg.WhiteStatus = pkg.ParseStatus(pkg.DefaultWhiteStatus, opt.WhiteStatus)
if opt.FuzzyStatus == "all" {
pool.EnableAllFuzzy = true
} else {
pkg.FuzzyStatus = pkg.ParseStatus(pkg.DefaultFuzzyStatus, opt.FuzzyStatus)
}
if opt.Unique {
pool.EnableAllUnique = true
} else {
pkg.UniqueStatus = pkg.ParseStatus(pkg.DefaultUniqueStatus, opt.UniqueStatus)
}
logs.Log.Logf(pkg.LogVerbose, "Black Status: %v, WhiteStatus: %v, WAFStatus: %v", pkg.BlackStatus, pkg.WhiteStatus, pkg.WAFStatus)
logs.Log.Logf(pkg.LogVerbose, "Fuzzy Status: %v, Unique Status: %v", pkg.FuzzyStatus, pkg.UniqueStatus)
return nil
}
func (opt *Option) NewRunner() (*Runner, error) {
var err error
r := &Runner{
Option: opt,
taskCh: make(chan *Task),
outputCh: make(chan *baseline.Baseline, 256),
poolwg: &sync.WaitGroup{},
outwg: &sync.WaitGroup{},
fuzzyCh: make(chan *baseline.Baseline, 256),
Headers: make(map[string]string),
Total: opt.Limit,
Color: true,
}
// log and bar
if opt.NoColor {
logs.Log.SetColor(false)
r.Color = false
}
if opt.Quiet {
logs.Log.SetQuiet(true)
logs.Log.SetColor(false)
r.Color = false
}
if !(opt.Quiet || opt.NoBar) {
r.Progress = mpb.New(mpb.WithRefreshRate(100 * time.Millisecond))
logs.Log.SetOutput(r.Progress)
}
// configuration
if opt.Force {
// force mode disables the check mechanism and the auto-exit after accumulated errors
r.BreakThreshold = MAX
r.CheckPeriod = MAX
r.ErrPeriod = MAX
}
// choose the client
if opt.Client == "auto" {
r.ClientType = ihttp.Auto
} else if opt.Client == "fast" {
r.ClientType = ihttp.FAST
} else if opt.Client == "standard" || opt.Client == "base" || opt.Client == "http" {
r.ClientType = ihttp.STANDARD
}
if len(opt.Proxies) > 0 {
urls, err := proxyclient.ParseProxyURLs(opt.Proxies)
if err != nil {
return nil, err
}
r.ProxyClient, err = proxyclient.NewClientChain(urls)
if err != nil {
return nil, err
}
}
err = opt.BuildPlugin(r)
if err != nil {
return nil, err
}
err = opt.BuildWords(r)
if err != nil {
return nil, err
}
if opt.Threads == DefaultThreads && r.bruteMod {
r.Threads = 1000
}
pkg.DefaultStatistor = pkg.Statistor{
Word: opt.Word,
WordCount: len(r.Wordlist),
Dictionaries: opt.Dictionaries,
Offset: opt.Offset,
RuleFiles: opt.Rules,
RuleFilter: opt.FilterRule,
Total: r.Total,
}
r.Tasks, err = opt.BuildTasks(r)
if err != nil {
return nil, err
}
if opt.Match != "" {
exp, err := expr.Compile(opt.Match)
if err != nil {
return nil, err
}
r.MatchExpr = exp
}
if opt.Filter != "" {
exp, err := expr.Compile(opt.Filter)
if err != nil {
return nil, err
}
r.FilterExpr = exp
}
// initialize recursion
var express string
if opt.Recursive != "current.IsDir()" && opt.Depth != 0 {
// recursion stays off by default unless a non-default recursive expression is specified
opt.Depth = 1
express = opt.Recursive
}
if opt.Depth != 0 {
// a manually set depth takes priority over the default
express = opt.Recursive
}
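// e.g. --recursive 'current.Status == 403' --depth 2 recurses two levels into
// any path answering 403; the expression syntax matches --match/--filter above.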
if express != "" {
exp, err := expr.Compile(express)
if err != nil {
return nil, err
}
r.RecursiveExpr = exp
}
// prepare header
for _, h := range opt.Headers {
i := strings.Index(h, ":")
if i == -1 {
logs.Log.Warn("invalid header")
} else {
r.Headers[h[:i]] = strings.TrimSpace(h[i+1:]) // tolerate headers with or without a space after the colon
}
}
if opt.UserAgent != "" {
r.Headers["User-Agent"] = opt.UserAgent
}
if opt.Cookie != nil {
r.Headers["Cookie"] = strings.Join(opt.Cookie, "; ")
}
if opt.OutputProbe != "" {
r.Probes = strings.Split(opt.OutputProbe, ",")
}
if !opt.Quiet {
fmt.Println(opt.PrintConfig(r))
}
// init output file
if opt.OutputFile != "" {
r.OutputFile, err = files.NewFile(opt.OutputFile, false, false, true)
if err != nil {
return nil, err
}
} else if opt.AutoFile {
r.OutputFile, err = files.NewFile("result.json", false, false, true)
if err != nil {
return nil, err
}
}
if opt.DumpFile != "" {
r.DumpFile, err = files.NewFile(opt.DumpFile, false, false, true)
if err != nil {
return nil, err
}
} else if opt.Dump {
r.DumpFile, err = files.NewFile("dump.json", false, false, true)
if err != nil {
return nil, err
}
}
if opt.ResumeFrom != "" {
r.StatFile, err = files.NewFile(opt.ResumeFrom, false, true, true)
}
if err != nil {
return nil, err
}
if !opt.NoStat {
r.StatFile, err = files.NewFile(pkg.SafeFilename(r.Tasks.Name)+".stat", false, true, true)
r.StatFile.Mod = os.O_WRONLY | os.O_CREATE
err = r.StatFile.Init()
if err != nil {
return nil, err
}
}
return r, nil
}
func (opt *Option) PrintConfig(r *Runner) string {
// define the color styles
keyStyle := lipgloss.NewStyle().Bold(true).Foreground(lipgloss.Color("#FFFFFF")).Width(20) // bold keys with a fixed width
stringValueStyle := lipgloss.NewStyle().Foreground(lipgloss.Color("#FFA07A")) // string style
arrayValueStyle := lipgloss.NewStyle().Foreground(lipgloss.Color("#98FB98")) // array style
numberValueStyle := lipgloss.NewStyle().Foreground(lipgloss.Color("#ADD8E6")) // number style
panelWidth := 60 // tweak panelWidth to pull the content slightly left
padding := 2 // smaller padding keeps the layout toward the left
// divider style, sized to the panel width
divider := strings.Repeat("─", panelWidth) // "─" renders a continuous divider
// render values by type
formatValue := func(value interface{}) string {
switch v := value.(type) {
case string:
return stringValueStyle.Render(v)
case []string:
return arrayValueStyle.Render(fmt.Sprintf("%v", v))
case int, int64, float64:
return numberValueStyle.Render(fmt.Sprintf("%v", v))
default:
return stringValueStyle.Render(fmt.Sprintf("%v", v)) // fall back to the string style
}
}
// mutually exclusive inputs: print whichever one is set
inputSource := ""
if opt.ResumeFrom != "" {
inputSource = lipgloss.JoinHorizontal(lipgloss.Left, "🌀 ", keyStyle.Render("ResumeFrom: "), formatValue(opt.ResumeFrom))
} else if len(opt.URL) > 0 {
inputSource = lipgloss.JoinHorizontal(lipgloss.Left, "🌐 ", keyStyle.Render("URL: "), formatValue(opt.URL))
} else if opt.URLFile != "" {
inputSource = lipgloss.JoinHorizontal(lipgloss.Left, "📂 ", keyStyle.Render("URLFile: "), formatValue(opt.URLFile))
} else if len(opt.CIDRs) > 0 {
inputSource = lipgloss.JoinHorizontal(lipgloss.Left, "📡 ", keyStyle.Render("CIDRs: "), formatValue(opt.CIDRs))
} else if opt.RawFile != "" {
inputSource = lipgloss.JoinHorizontal(lipgloss.Left, "📄 ", keyStyle.Render("RawFile: "), formatValue(opt.RawFile))
}
// Input Options
inputOptions := lipgloss.JoinVertical(lipgloss.Left,
inputSource, // the mutually exclusive input source
// show PortRange
lipgloss.JoinHorizontal(lipgloss.Left, "🔢 ", keyStyle.Render("PortRange: "), formatValue(opt.PortRange)),
// show Dictionaries
lipgloss.JoinHorizontal(lipgloss.Left, "📚 ", keyStyle.Render("Dictionaries: "), formatValue(opt.Dictionaries)),
// Word, Rules and FilterRule each on their own line
lipgloss.JoinVertical(lipgloss.Left,
lipgloss.JoinHorizontal(lipgloss.Left, "💡 ", keyStyle.Render("Word: "), formatValue(r.Word)),
lipgloss.JoinHorizontal(lipgloss.Left, "📜 ", keyStyle.Render("Rules: "), formatValue(opt.Rules)),
lipgloss.JoinHorizontal(lipgloss.Left, "🔍 ", keyStyle.Render("FilterRule: "), formatValue(opt.FilterRule)),
),
// AppendRule and AppendWords each on their own line
lipgloss.JoinVertical(lipgloss.Left,
lipgloss.JoinHorizontal(lipgloss.Left, "🔧 ", keyStyle.Render("AppendRule: "), formatValue(r.AppendRule)),
lipgloss.JoinHorizontal(lipgloss.Left, "🧩 ", keyStyle.Render("AppendWords: "), formatValue(len(r.AppendWords))),
),
)
// Output Options
outputOptions := lipgloss.JoinVertical(lipgloss.Left,
lipgloss.JoinHorizontal(lipgloss.Left, "📊 ", keyStyle.Render("Match: "), formatValue(opt.Match)),
lipgloss.JoinHorizontal(lipgloss.Left, "⚙️ ", keyStyle.Render("Filter: "), formatValue(opt.Filter)),
)
// Plugin Options
pluginValues := []string{}
if opt.ActivePlugin {
pluginValues = append(pluginValues, "active")
}
if opt.ReconPlugin {
pluginValues = append(pluginValues, "recon")
}
if opt.BakPlugin {
pluginValues = append(pluginValues, "bak")
}
if opt.FuzzuliPlugin {
pluginValues = append(pluginValues, "fuzzuli")
}
if opt.CommonPlugin {
pluginValues = append(pluginValues, "common")
}
if opt.CrawlPlugin {
pluginValues = append(pluginValues, "crawl")
}
pluginOptions := lipgloss.JoinVertical(lipgloss.Left,
lipgloss.JoinHorizontal(lipgloss.Left, "🔎 ", keyStyle.Render("Extracts: "), formatValue(opt.Extracts)),
lipgloss.JoinHorizontal(lipgloss.Left, "🔌 ", keyStyle.Render("Plugins: "), formatValue(strings.Join(pluginValues, ", "))),
)
// Mode Options
modeOptions := lipgloss.JoinVertical(lipgloss.Left,
lipgloss.JoinHorizontal(lipgloss.Left, "🛑 ", keyStyle.Render("BlackStatus: "), formatValue(pkg.BlackStatus)),
lipgloss.JoinHorizontal(lipgloss.Left, "✅ ", keyStyle.Render("WhiteStatus: "), formatValue(pkg.WhiteStatus)),
lipgloss.JoinHorizontal(lipgloss.Left, "🔄 ", keyStyle.Render("FuzzyStatus: "), formatValue(pkg.FuzzyStatus)),
lipgloss.JoinHorizontal(lipgloss.Left, "🔒 ", keyStyle.Render("UniqueStatus: "), formatValue(pkg.UniqueStatus)),
lipgloss.JoinHorizontal(lipgloss.Left, "🔑 ", keyStyle.Render("Unique: "), formatValue(opt.Unique)),
)
// Misc Options
miscOptions := lipgloss.JoinVertical(lipgloss.Left,
lipgloss.JoinHorizontal(lipgloss.Left, "⏱ ", keyStyle.Render("Timeout: "), formatValue(opt.Timeout)),
lipgloss.JoinHorizontal(lipgloss.Left, "📈 ", keyStyle.Render("PoolSize: "), formatValue(opt.PoolSize)),
lipgloss.JoinHorizontal(lipgloss.Left, "🧵 ", keyStyle.Render("Threads: "), formatValue(opt.Threads)),
lipgloss.JoinHorizontal(lipgloss.Left, "🌍 ", keyStyle.Render("Proxies: "), formatValue(opt.Proxies)),
)
// join all sections together
content := lipgloss.JoinVertical(lipgloss.Left,
inputOptions,
outputOptions,
pluginOptions,
modeOptions,
miscOptions,
)
// apply left padding and center the content
contentWithPadding := lipgloss.NewStyle().PaddingLeft(padding).Render(content)
// use Place to center the whole block
return lipgloss.Place(panelWidth+padding*2, 0, lipgloss.Center, lipgloss.Center,
lipgloss.JoinVertical(lipgloss.Center,
divider, // top divider
contentWithPadding,
divider, // bottom divider
),
)
}
func (opt *Option) BuildPlugin(r *Runner) error {
// -a/--advance enables every plugin
if opt.Advance {
opt.CrawlPlugin = true
opt.Finger = true
opt.BakPlugin = true
opt.FuzzuliPlugin = true
opt.CommonPlugin = true
opt.ActivePlugin = true
opt.ReconPlugin = true
}
if opt.ReconPlugin {
pkg.Extractors["recon"] = pkg.ExtractRegexps["pentest"]
}
if opt.Finger {
pkg.EnableAllFingerEngine = true
}
if opt.BakPlugin {
r.bruteMod = true
opt.AppendRule = append(opt.AppendRule, "filebak")
r.AppendWords = append(r.AppendWords, pkg.GetPresetWordList([]string{"bak_file"})...)
}
if opt.CommonPlugin {
r.bruteMod = true
r.AppendWords = append(r.AppendWords, pkg.Dicts["common"]...)
r.AppendWords = append(r.AppendWords, pkg.Dicts["log"]...)
}
if opt.ActivePlugin {
r.bruteMod = true
r.AppendWords = append(r.AppendWords, pkg.ActivePath...)
}
if opt.CrawlPlugin {
r.bruteMod = true
}
if r.bruteMod {
logs.Log.Important("enabling brute mod, because of enabled brute plugin")
}
if opt.NoScope {
r.Scope = []string{"*"}
}
return nil
}
func (opt *Option) BuildWords(r *Runner) error {
var dicts [][]string
var err error
if opt.DefaultDict {
dicts = append(dicts, pkg.Dicts["default"])
logs.Log.Info("use default dictionary: https://github.com/maurosoria/dirsearch/blob/master/db/dicc.txt")
}
for i, f := range opt.Dictionaries {
dict, err := pkg.LoadFileToSlice(f)
if err != nil {
return err
}
dicts = append(dicts, dict)
if opt.ResumeFrom != "" {
pkg.Dicts[f] = dicts[i]
}
logs.Log.Logf(pkg.LogVerbose, "Loaded %d word from %s", len(dict), f)
}
if len(dicts) == 0 && opt.Word == "" && len(opt.Rules) == 0 && len(opt.AppendRule) == 0 {
r.IsCheck = true
}
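// with no -w given, synthesize a mask word indexing every loaded dict,
// e.g. two dicts yield "{?01}" (assumed mask syntax: draw words from dicts #0 and #1)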
if opt.Word == "" {
opt.Word = "{?"
for i := range dicts {
opt.Word += strconv.Itoa(i)
}
opt.Word += "}"
}
if len(opt.Suffixes) != 0 {
mask.SpecialWords["suffix"] = opt.Suffixes
opt.Word += "{?@suffix}"
}
if len(opt.Prefixes) != 0 {
mask.SpecialWords["prefix"] = opt.Prefixes
opt.Word = "{?@prefix}" + opt.Word
}
if opt.ForceExtension && opt.Extensions != "" {
exts := strings.Split(opt.Extensions, ",")
for i, e := range exts {
if !strings.HasPrefix(e, ".") {
exts[i] = "." + e
}
}
mask.SpecialWords["ext"] = exts
opt.Word += "{?@ext}"
}
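// the final mask thus composes as "{?@prefix}" + word + "{?@suffix}" + "{?@ext}",
// each @-group expanding from the special word lists registered above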
r.Wordlist, err = mask.Run(opt.Word, dicts, nil)
if err != nil {
return fmt.Errorf("%s %w", opt.Word, err)
}
if len(r.Wordlist) > 0 {
logs.Log.Logf(pkg.LogVerbose, "Parsed %d words by %s", len(r.Wordlist), opt.Word)
}
if len(opt.Rules) != 0 {
rules, err := pkg.LoadRuleAndCombine(opt.Rules)
if err != nil {
return err
}
r.Rules = rule.Compile(rules, opt.FilterRule)
} else if opt.FilterRule != "" {
// if filter rule is not empty, set rules to ":", force to open filter mode
r.Rules = rule.Compile(":", opt.FilterRule)
} else {
r.Rules = new(rule.Program)
}
if len(r.Rules.Expressions) > 0 {
r.Total = len(r.Wordlist) * len(r.Rules.Expressions)
} else {
r.Total = len(r.Wordlist)
}
if len(opt.AppendRule) != 0 {
content, err := pkg.LoadRuleAndCombine(opt.AppendRule)
if err != nil {
return err
}
r.AppendRules = rule.Compile(string(content), "")
}
if len(opt.AppendFile) != 0 {
var lines []string
for _, f := range opt.AppendFile {
dict, err := pkg.LoadFileToSlice(f)
if err != nil {
return err
}
lines = append(lines, dict...)
}
r.AppendWords = append(r.AppendWords, lines...)
}
// similar to dirsearch's extension placeholder
if opt.Extensions != "" {
r.AppendFunction(pkg.ParseEXTPlaceholderFunc(strings.Split(opt.Extensions, ",")))
} else {
r.AppendFunction(func(s string) []string {
if strings.Contains(s, pkg.EXTChar) {
return nil
}
return []string{s}
})
}
if opt.Uppercase {
r.AppendFunction(pkg.WrapWordsFunc(strings.ToUpper))
}
if opt.Lowercase {
r.AppendFunction(pkg.WrapWordsFunc(strings.ToLower))
}
if opt.RemoveExtensions != "" {
rexts := strings.Split(opt.RemoveExtensions, ",")
r.AppendFunction(func(s string) []string {
if ext := pkg.ParseExtension(s); iutils.StringsContains(rexts, ext) {
return []string{strings.TrimSuffix(s, "."+ext)}
}
return []string{s}
})
}
if opt.ExcludeExtensions != "" {
exexts := strings.Split(opt.ExcludeExtensions, ",")
r.AppendFunction(func(s string) []string {
if ext := pkg.ParseExtension(s); iutils.StringsContains(exexts, ext) {
return nil
}
return []string{s}
})
}
if len(opt.Replaces) > 0 {
r.AppendFunction(func(s string) []string {
for k, v := range opt.Replaces {
s = strings.Replace(s, k, v, -1)
}
return []string{s}
})
}
if len(opt.Skips) > 0 {
r.AppendFunction(func(s string) []string {
for _, skip := range opt.Skips {
if strings.Contains(s, skip) {
return nil
}
}
return []string{s}
})
}
return nil
}
func (opt *Option) BuildTasks(r *Runner) (*TaskGenerator, error) {
// prepare tasks
var err error
gen := NewTaskGenerator(opt.PortRange)
if opt.ResumeFrom != "" {
stats, err := pkg.ReadStatistors(opt.ResumeFrom)
if err != nil {
logs.Log.Error(err.Error())
}
r.Count = len(stats)
gen.Name = "resume " + opt.ResumeFrom
go func() {
for _, stat := range stats {
gen.In <- &Task{baseUrl: stat.BaseUrl, origin: NewOrigin(stat)}
}
close(gen.In)
}()
} else {
var file *os.File
// generate tasks according to the input type
if len(opt.URL) == 1 {
gen.Name = opt.URL[0]
go func() {
gen.Run(opt.URL[0])
close(gen.In)
}()
r.Count = 1
} else if len(opt.URL) > 1 {
go func() {
for _, u := range opt.URL {
gen.Run(u)
}
close(gen.In)
}()
gen.Name = "cmd"
r.Count = len(opt.URL)
} else if opt.RawFile != "" {
raw, err := os.Open(opt.RawFile)
if err != nil {
return nil, err
}
req, err := http.ReadRequest(bufio.NewReader(raw))
if err != nil {
return nil, err
}
go func() {
gen.Run(fmt.Sprintf("http://%s%s", req.Host, req.URL.String()))
close(gen.In)
}()
r.Method = req.Method
for k := range req.Header {
r.Headers[k] = req.Header.Get(k)
}
r.Count = 1
} else if len(opt.CIDRs) != 0 {
cidrs := utils.ParseCIDRs(opt.CIDRs)
if len(gen.ports) == 0 {
gen.ports = []string{"80", "443"}
}
gen.Name = "cidr"
r.Count = cidrs.Count()
go func() {
for _, cidr := range cidrs {
if cidr == nil {
logs.Log.Error("invalid cidr format, skipped")
continue
}
for ip := range cidr.Range() {
gen.Run(ip.String())
}
}
close(gen.In)
}()
} else if opt.URLFile != "" {
file, err = os.Open(opt.URLFile)
if err != nil {
return nil, err
}
gen.Name = filepath.Base(opt.URLFile)
} else if files.HasStdin() {
file = os.Stdin
gen.Name = "stdin"
}
if file != nil {
content, err := ioutil.ReadAll(file)
if err != nil {
return nil, err
}
urls := strings.Split(strings.TrimSpace(string(content)), "\n")
for _, u := range urls {
u = strings.TrimSpace(u)
if _, err := url.Parse(u); err == nil {
r.Count++
} else if ip := utils.ParseIP(u); ip != nil {
r.Count++
} else if cidr := utils.ParseCIDR(u); cidr != nil {
r.Count += cidr.Count()
}
}
go func() {
for _, u := range urls {
u = strings.TrimSpace(u)
if _, err := url.Parse(u); err == nil {
gen.Run(u)
} else if ip := utils.ParseIP(u); ip != nil {
gen.Run(u)
} else if cidr := utils.ParseCIDR(u); cidr != nil {
for ip := range cidr.Range() {
gen.Run(ip.String())
}
}
}
close(gen.In)
}()
}
}
if len(gen.ports) > 0 {
r.Count = r.Count * len(gen.ports)
}
return gen, nil
}


@ -1,902 +0,0 @@
package pool
import (
"context"
"errors"
"fmt"
"github.com/chainreactors/logs"
"github.com/chainreactors/parsers"
"github.com/chainreactors/spray/core/baseline"
"github.com/chainreactors/spray/core/ihttp"
"github.com/chainreactors/spray/pkg"
"github.com/chainreactors/utils/iutils"
"github.com/chainreactors/words/rule"
"github.com/panjf2000/ants/v2"
"github.com/valyala/fasthttp"
"golang.org/x/time/rate"
"math/rand"
"net/url"
"path"
"strings"
"sync"
"sync/atomic"
"time"
)
var (
EnableAllFuzzy = false
EnableAllUnique = false
//AllowHostModSource = []parsers.SpraySource{parsers.WordSource, parsers.CheckSource, parsers.InitIndexSource, parsers.InitRandomSource}
)
func NewBrutePool(ctx context.Context, config *Config) (*BrutePool, error) {
var u *url.URL
var err error
if u, err = url.Parse(config.BaseURL); err != nil {
return nil, err
}
pctx, cancel := context.WithCancel(ctx)
pool := &BrutePool{
Baselines: NewBaselines(),
BasePool: &BasePool{
Config: config,
ctx: pctx,
Cancel: cancel,
client: ihttp.NewClient(&ihttp.ClientConfig{
Thread: config.Thread,
Type: config.ClientType,
Timeout: config.Timeout,
ProxyClient: config.ProxyClient,
}),
additionCh: make(chan *Unit, config.Thread),
closeCh: make(chan struct{}),
processCh: make(chan *baseline.Baseline, config.Thread),
wg: &sync.WaitGroup{},
},
base: u.Scheme + "://" + u.Host,
isDir: strings.HasSuffix(u.Path, "/"),
url: u,
scopeurls: make(map[string]struct{}),
uniques: make(map[uint16]struct{}),
checkCh: make(chan struct{}, config.Thread),
initwg: sync.WaitGroup{},
limiter: rate.NewLimiter(rate.Limit(config.RateLimit), 1),
failedCount: 1,
}
rand.Seed(time.Now().UnixNano())
// normalize dir so it always contains at least one "/"
if strings.HasSuffix(config.BaseURL, "/") {
pool.dir = pool.url.Path
} else if pool.url.Path == "" {
pool.dir = "/"
} else {
pool.dir = pkg.Dir(pool.url.Path)
}
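// dir examples (assuming pkg.Dir returns the parent directory with a trailing slash):
// "http://a.com/x/y" -> "/x/", "http://a.com/x/" -> "/x/", "http://a.com" -> "/"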
pool.reqPool, _ = ants.NewPoolWithFunc(config.Thread, pool.Invoke)
pool.scopePool, _ = ants.NewPoolWithFunc(config.Thread, pool.NoScopeInvoke)
// spawn an async result-handler goroutine so it does not block the main request concurrency
go pool.Handler()
return pool, nil
}
type BrutePool struct {
*Baselines
*BasePool
base string // root of the url (scheme://host), used for joining paths during crawl and redirect
isDir bool
url *url.URL
reqPool *ants.PoolWithFunc
scopePool *ants.PoolWithFunc
checkCh chan struct{} // dedicated check channel, kept separate to avoid conflicts with redirect/crawl
closed bool
wordOffset int
failedCount int32
IsFailed bool
urls sync.Map
scopeurls map[string]struct{}
uniques map[uint16]struct{}
analyzeDone bool
limiter *rate.Limiter
locker sync.Mutex
scopeLocker sync.Mutex
initwg sync.WaitGroup // used during init; TODO: replace with a lock
}
func (pool *BrutePool) Init() error {
pool.initwg.Add(2)
if pool.Index != "/" {
logs.Log.Logf(pkg.LogVerbose, "custom index url: %s", pkg.BaseURL(pool.url)+pkg.FormatURL(pkg.BaseURL(pool.url), pool.Index))
pool.reqPool.Invoke(&Unit{path: pool.Index, source: parsers.InitIndexSource})
//pool.urls[dir(pool.Index)] = struct{}{}
} else {
pool.reqPool.Invoke(&Unit{path: pool.url.Path, source: parsers.InitIndexSource})
//pool.urls[dir(pool.url.Path)] = struct{}{}
}
if pool.Random != "" {
logs.Log.Logf(pkg.LogVerbose, "custom random url: %s", pkg.BaseURL(pool.url)+pkg.FormatURL(pkg.BaseURL(pool.url), pool.Random))
if pool.Mod == PathSpray {
pool.reqPool.Invoke(&Unit{path: pool.Random, source: parsers.InitRandomSource})
} else {
pool.reqPool.Invoke(&Unit{host: pool.Random, source: parsers.InitRandomSource})
}
} else {
if pool.Mod == PathSpray {
pool.reqPool.Invoke(&Unit{path: pool.safePath(pkg.RandPath()), source: parsers.InitRandomSource})
} else {
pool.reqPool.Invoke(&Unit{host: pkg.RandHost(), source: parsers.InitRandomSource})
}
}
pool.initwg.Wait()
if pool.index.ErrString != "" {
logs.Log.Error(pool.index.String())
return fmt.Errorf(pool.index.ErrString)
}
if pool.index.Chunked && pool.ClientType == ihttp.FAST {
logs.Log.Warn("chunk encoding! buf current client FASTHTTP not support chunk decode")
}
logs.Log.Logf(pkg.LogVerbose, "[baseline.index] "+pool.index.Format([]string{"status", "length", "spend", "title", "frame", "redirect"}))
// verify basic reachability
if pool.random.ErrString != "" {
logs.Log.Error(pool.random.String())
return fmt.Errorf(pool.random.ErrString)
}
logs.Log.Logf(pkg.LogVerbose, "[baseline.random] "+pool.random.Format([]string{"status", "length", "spend", "title", "frame", "redirect"}))
// some sites redirect http to https; if the random path triggers this, automatically upgrade the baseurl to https
if pool.url.Scheme == "http" {
if pool.index.RedirectURL != "" {
if err := pool.Upgrade(pool.index); err != nil {
return err
}
} else if pool.random.RedirectURL != "" {
if err := pool.Upgrade(pool.random); err != nil {
return err
}
}
}
return nil
}
func (pool *BrutePool) Run(offset, limit int) {
pool.Worder.Run()
if pool.Active {
pool.wg.Add(1)
go pool.doActive()
}
if pool.Bak {
pool.wg.Add(1)
go pool.doBak()
}
if pool.Fuzzuli {
pool.wg.Add(1)
go pool.doFuzzuli()
}
if pool.Common {
pool.wg.Add(1)
go pool.doCommonFile()
}
var done bool
// watchdog goroutine: polls done every 100ms; once done, waits for workers and closes closeCh so the select in Loop can break out
go func() {
for {
if done {
pool.wg.Wait()
close(pool.closeCh)
return
}
time.Sleep(100 * time.Millisecond)
}
}()
Loop:
for {
select {
case w, ok := <-pool.Worder.Output:
if !ok {
done = true
continue
}
pool.Statistor.End++
pool.wordOffset++
if pool.wordOffset < offset {
continue
}
if pool.Statistor.End > limit {
done = true
continue
}
if w == "" {
pool.Statistor.Skipped++
pool.Bar.Done()
continue
}
pool.wg.Add(1)
if pool.Mod == HostSpray {
pool.reqPool.Invoke(&Unit{host: w, source: parsers.WordSource, number: pool.wordOffset})
} else {
// join the path verbatim, keeping every "/" as typed, to support middlewares where slashes carry meaning
pool.reqPool.Invoke(&Unit{path: pool.safePath(w), source: parsers.WordSource, number: pool.wordOffset})
}
case <-pool.checkCh:
pool.Statistor.CheckNumber++
if pool.Mod == HostSpray {
pool.reqPool.Invoke(&Unit{host: pkg.RandHost(), source: parsers.CheckSource, number: pool.wordOffset})
} else if pool.Mod == PathSpray {
pool.reqPool.Invoke(&Unit{path: pool.safePath(pkg.RandPath()), source: parsers.CheckSource, number: pool.wordOffset})
}
case unit, ok := <-pool.additionCh:
if !ok || pool.closed {
continue
}
if _, ok := pool.urls.Load(unit.path); ok {
logs.Log.Debugf("[%s] duplicate path: %s, skipped", unit.source.Name(), pool.base+unit.path)
pool.wg.Done()
} else {
pool.urls.Store(unit.path, nil)
unit.path = pool.safePath(unit.path)
unit.number = pool.wordOffset
pool.reqPool.Invoke(unit)
}
case <-pool.closeCh:
break Loop
case <-pool.ctx.Done():
break Loop
}
}
pool.closed = true
pool.Close()
}
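// Invoke is the request worker: it applies the rate limit, sends the request,
// wraps the response (or error) into a Baseline, then routes the result by
// source: init sources fill the baselines, check sources feed ban/WAF
// detection, and everything else goes to the async Handler via processCh.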
func (pool *BrutePool) Invoke(v interface{}) {
if pool.RateLimit != 0 {
pool.limiter.Wait(pool.ctx)
}
atomic.AddInt32(&pool.Statistor.ReqTotal, 1)
unit := v.(*Unit)
var req *ihttp.Request
var err error
req, err = ihttp.BuildRequest(pool.ctx, pool.ClientType, pool.base, unit.path, unit.host, pool.Method)
if err != nil {
logs.Log.Error(err.Error())
return
}
req.SetHeaders(pool.Headers, pool.RandomUserAgent)
start := time.Now()
resp, reqerr := pool.client.Do(req)
if pool.ClientType == ihttp.FAST {
defer fasthttp.ReleaseResponse(resp.FastResponse)
defer fasthttp.ReleaseRequest(req.FastRequest)
}
// comparison and error handling
var bl *baseline.Baseline
if reqerr != nil && !errors.Is(reqerr, fasthttp.ErrBodyTooLarge) {
atomic.AddInt32(&pool.failedCount, 1)
atomic.AddInt32(&pool.Statistor.FailedNumber, 1)
bl = &baseline.Baseline{
SprayResult: &parsers.SprayResult{
UrlString: pool.base + unit.path,
ErrString: reqerr.Error(),
Reason: pkg.ErrRequestFailed.Error(),
},
}
pool.FailedBaselines = append(pool.FailedBaselines, bl)
// automatically retry the failed request
pool.doRetry(bl)
} else { // scenario-specific optimizations
if unit.source <= 3 || unit.source == parsers.CrawlSource || unit.source == parsers.CommonFileSource {
// high-priority sources skip PreCompare
bl = baseline.NewBaseline(req.URI(), req.Host(), resp)
} else if pool.MatchExpr != nil {
// with a custom match expression, every response is passed through for matching
bl = baseline.NewBaseline(req.URI(), req.Host(), resp)
} else if err = pool.PreCompare(resp); err == nil {
// PreCompare drops useless responses early to cut processing cost
bl = baseline.NewBaseline(req.URI(), req.Host(), resp)
} else {
bl = baseline.NewInvalidBaseline(req.URI(), req.Host(), resp, err.Error())
}
}
// handle redirects manually
if bl.IsValid && unit.source != parsers.CheckSource && bl.RedirectURL != "" {
bl.SameRedirectDomain = pool.checkHost(bl.RedirectURL)
pool.doRedirect(bl, unit.depth)
}
if !ihttp.CheckBodySize(int64(bl.BodyLength)) {
bl.ExceedLength = true
}
unit.Update(bl)
bl.Spended = time.Since(start).Milliseconds()
switch unit.source {
case parsers.InitRandomSource:
defer pool.initwg.Done()
pool.locker.Lock()
pool.random = bl
pool.locker.Unlock()
if !bl.IsValid {
return
}
bl.Collect()
pool.addFuzzyBaseline(bl)
case parsers.InitIndexSource:
defer pool.initwg.Done()
pool.locker.Lock()
pool.index = bl
pool.locker.Unlock()
if !bl.IsValid {
return
}
bl.Collect()
pool.doCrawl(bl)
pool.doAppend(bl)
pool.putToOutput(bl)
case parsers.CheckSource:
if bl.ErrString != "" {
logs.Log.Warnf("[check.error] %s maybe ip had banned, break (%d/%d), error: %s", pool.BaseURL, pool.failedCount, pool.BreakThreshold, bl.ErrString)
} else if i := pool.random.Compare(bl); i < 1 {
if i == 0 {
if pool.Fuzzy {
logs.Log.Debug("[check.fuzzy] maybe trigger risk control, " + bl.String())
}
} else {
atomic.AddInt32(&pool.failedCount, 1) //
logs.Log.Debug("[check.failed] maybe trigger risk control, " + bl.String())
pool.FailedBaselines = append(pool.FailedBaselines, bl)
}
} else {
pool.resetFailed() // reset the error counter once requests succeed again
logs.Log.Debug("[check.pass] " + bl.String())
}
case parsers.WordSource:
// offload the expensive deep comparison to the async handler
pool.processCh <- bl
if int(pool.Statistor.ReqTotal)%pool.CheckPeriod == 0 {
// periodically insert a WAF-check probe
pool.doCheck()
} else if pool.failedCount%pool.ErrPeriod == 0 {
// insert a probe after errors; exit early once the threshold is exceeded
atomic.AddInt32(&pool.failedCount, 1)
pool.doCheck()
}
pool.Bar.Done()
case parsers.RedirectSource:
bl.FrontURL = unit.frontUrl
pool.processCh <- bl
default:
pool.processCh <- bl
}
}
func (pool *BrutePool) NoScopeInvoke(v interface{}) {
defer pool.wg.Done()
unit := v.(*Unit)
req, err := ihttp.BuildRequest(pool.ctx, pool.ClientType, unit.path, "", "", "GET")
if err != nil {
logs.Log.Error(err.Error())
return
}
req.SetHeaders(pool.Headers, pool.RandomUserAgent)
resp, reqerr := pool.client.Do(req)
if pool.ClientType == ihttp.FAST {
defer fasthttp.ReleaseResponse(resp.FastResponse)
defer fasthttp.ReleaseRequest(req.FastRequest)
}
if reqerr != nil {
logs.Log.Error(reqerr.Error())
return
}
if resp.StatusCode() == 200 {
bl := baseline.NewBaseline(req.URI(), req.Host(), resp)
bl.Source = unit.source
bl.ReqDepth = unit.depth
bl.Collect()
bl.CollectURL()
pool.wg.Add(1)
pool.doScopeCrawl(bl)
pool.putToOutput(bl)
}
}
func (pool *BrutePool) Handler() {
for bl := range pool.processCh {
if bl.IsValid {
pool.addFuzzyBaseline(bl)
}
if _, ok := pool.Statistor.Counts[bl.Status]; ok {
pool.Statistor.Counts[bl.Status]++
} else {
pool.Statistor.Counts[bl.Status] = 1
}
if _, ok := pool.Statistor.Sources[bl.Source]; ok {
pool.Statistor.Sources[bl.Source]++
} else {
pool.Statistor.Sources[bl.Source] = 1
}
var params map[string]interface{}
if pool.MatchExpr != nil || pool.FilterExpr != nil || pool.RecuExpr != nil {
params = map[string]interface{}{
"index": pool.index,
"random": pool.random,
"current": bl,
}
//for _, ok := range FuzzyStatus {
// if bl, ok := pool.baselines[ok]; ok {
// params["bl"+strconv.Itoa(ok)] = bl
// } else {
// params["bl"+strconv.Itoa(ok)] = nilBaseline
// }
//}
}
var ok bool
if pool.MatchExpr != nil {
if pkg.CompareWithExpr(pool.MatchExpr, params) {
ok = true
}
} else {
ok = pool.BaseCompare(bl)
}
if ok {
// uniqueness check
if EnableAllUnique || iutils.IntsContains(pkg.UniqueStatus, bl.Status) {
if _, ok := pool.uniques[bl.Unique]; ok {
bl.IsValid = false
bl.IsFuzzy = true
bl.Reason = pkg.ErrFuzzyNotUnique.Error()
} else {
pool.uniques[bl.Unique] = struct{}{}
}
}
// re-filter valid data that passed every comparison
if bl.IsValid && pool.FilterExpr != nil && pkg.CompareWithExpr(pool.FilterExpr, params) {
pool.Statistor.FilteredNumber++
bl.Reason = pkg.ErrCustomFilter.Error()
bl.IsValid = false
}
} else {
bl.IsValid = false
}
if bl.IsValid || (bl.IsFuzzy && pool.Fuzzy) {
pool.doCrawl(bl)
pool.doAppend(bl)
}
// recursion requires: bl is valid, mod is path-spray, and the current depth is below the max recursion depth
if bl.IsValid {
pool.Statistor.FoundNumber++
if bl.RecuDepth < pool.MaxRecursionDepth {
if pkg.CompareWithExpr(pool.RecuExpr, params) {
bl.Recu = true
}
}
}
if !pool.closed {
// if the task was cancelled, all unprocessed results are dropped
pool.putToOutput(bl)
}
pool.wg.Done()
}
pool.analyzeDone = true
}
func (pool *BrutePool) checkRedirect(redirectURL string) bool {
if pool.random.RedirectURL == "" {
// 如果random的redirectURL为空, 忽略
return true
}
if redirectURL == pool.random.RedirectURL {
// 相同的RedirectURL将被认为是无效数据
return false
} else {
// path为3xx, 且与baseline中的RedirectURL不同时, 为有效数据
return true
}
}
func (pool *BrutePool) Upgrade(bl *baseline.Baseline) error {
rurl, err := url.Parse(bl.RedirectURL)
if err == nil && rurl.Hostname() == bl.Url.Hostname() && bl.Url.Scheme == "http" && rurl.Scheme == "https" {
logs.Log.Infof("baseurl %s upgrade http to https, reinit", pool.BaseURL)
pool.base = strings.Replace(pool.BaseURL, "http", "https", 1)
pool.url.Scheme = "https"
// re-initialize
err = pool.Init()
if err != nil {
return err
}
}
return nil
}
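// PreCompare filters on status code alone, before a full Baseline is built:
// whitelisted codes pass, blacklisted and WAF codes are rejected, and
// responses redirecting to the same location as the random baseline are dropped.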
func (pool *BrutePool) PreCompare(resp *ihttp.Response) error {
status := resp.StatusCode()
if pkg.StatusContain(pkg.WhiteStatus, status) {
// return immediately for whitelisted status codes
return nil
}
//if pool.random.Status != 200 && pool.random.Status == status {
// return pkg.ErrSameStatus
//}
if pkg.StatusContain(pkg.BlackStatus, status) {
return pkg.ErrBadStatus
}
if pkg.StatusContain(pkg.WAFStatus, status) {
return pkg.ErrWaf
}
if !pool.checkRedirect(resp.GetHeader("Location")) {
return pkg.ErrRedirect
}
return nil
}
// same host return true
// diff host return false
func (pool *BrutePool) checkHost(u string) bool {
if v, err := url.Parse(u); err == nil {
if v.Host == "" {
return true
}
if v.Host == pool.url.Host {
return true
} else {
return false
}
}
return true
}
func (pool *BrutePool) BaseCompare(bl *baseline.Baseline) bool {
if !bl.IsValid {
return false
}
var status = -1
// special handling for 30x status codes
if bl.RedirectURL != "" {
if bl.SameRedirectDomain && strings.HasSuffix(bl.RedirectURL, bl.Url.Path+"/") {
bl.Reason = pkg.ErrFuzzyRedirect.Error()
return false
}
}
// compare against the baseline with the same status code; it must be pre-configured in fuzzystatus
base, ok := pool.baselines[bl.Status] // pick the baseline matching this status code
if bl.IsBaseline {
ok = false
}
if !ok {
if pool.random.Status == bl.Status {
// fall back to the random baseline when the status codes match
ok = true
base = pool.random
} else if pool.index.Status == bl.Status {
// fall back to the index baseline when the status codes match
ok = true
base = pool.index
}
}
if ok {
if status = base.Compare(bl); status == 1 {
bl.Reason = pkg.ErrCompareFailed.Error()
return false
}
}
bl.Hashes = parsers.NewHashes(bl.Raw)
//if !pool.IgnoreWaf {
// // in some cases WAF signatures are global; with --ignore-waf no WAF fingerprint detection is performed
// for _, f := range bl.Frameworks {
// if f.HasTag("waf") {
// pool.Statistor.WafedNumber++
// bl.Reason = ErrWaf.Error()
// return false
// }
// }
//}
if ok && status == 0 && base.FuzzyCompare(bl) {
pool.Statistor.FuzzyNumber++
bl.Reason = pkg.ErrFuzzyCompareFailed.Error()
pool.putToFuzzy(bl)
return false
}
return true
}
func (pool *BrutePool) addFuzzyBaseline(bl *baseline.Baseline) {
if _, ok := pool.baselines[bl.Status]; !ok && (EnableAllFuzzy || iutils.IntsContains(pkg.FuzzyStatus, bl.Status)) {
bl.IsBaseline = true
bl.Collect()
pool.doCrawl(bl) // even invalid pages may contain special urls worth crawling
pool.baselines[bl.Status] = bl
logs.Log.Logf(pkg.LogVerbose, "[baseline.%dinit] %s", bl.Status, bl.Format([]string{"status", "length", "spend", "title", "frame", "redirect"}))
}
}
func (pool *BrutePool) fallback() {
logs.Log.Errorf("%s ,failed request exceeds the threshold , task will exit. Breakpoint %d", pool.BaseURL, pool.wordOffset)
for i, bl := range pool.FailedBaselines {
if i > 5 {
break
}
logs.Log.Errorf("[failed.%d] %s", i, bl.String())
}
}
func (pool *BrutePool) Close() {
for pool.analyzeDone {
// wait for buffered pending results to be processed
time.Sleep(time.Duration(100) * time.Millisecond)
}
close(pool.additionCh) // 关闭addition管道
//close(pool.checkCh) // 关闭check管道
pool.Statistor.EndTime = time.Now().Unix()
pool.reqPool.Release()
pool.scopePool.Release()
}
func (pool *BrutePool) safePath(u string) string {
// auto-generated paths (e.g. init, check, common) are joined onto the relative dir via SafePath to avoid "//"
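// e.g. dir "/app/" joined with "admin" -> "/app/admin" (assumed SafePath semantics)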
if pool.isDir {
return pkg.SafePath(pool.dir, u)
} else {
return pkg.SafePath(pool.url.Path+"/", u)
}
}
func (pool *BrutePool) resetFailed() {
pool.failedCount = 1
pool.FailedBaselines = nil
}
func (pool *BrutePool) doCheck() {
if pool.failedCount > pool.BreakThreshold {
// end the task once the error count exceeds the threshold
if pool.isFallback.Load() {
return
}
pool.isFallback.Store(true)
pool.fallback()
pool.IsFailed = true
pool.Cancel()
return
}
if pool.Mod == HostSpray {
pool.checkCh <- struct{}{}
} else if pool.Mod == PathSpray {
pool.checkCh <- struct{}{}
}
}
func (pool *BrutePool) doRedirect(bl *baseline.Baseline, depth int) {
if depth >= pool.MaxRedirect {
return
}
//if !bl.SameRedirectDomain {
// return // cross-domain redirects are not handled
//}
reURL := pkg.FormatURL(bl.Url.Path, bl.RedirectURL)
pool.wg.Add(1)
go func() {
defer pool.wg.Done()
pool.addAddition(&Unit{
path: reURL,
parent: bl.Number,
host: bl.Host,
source: parsers.RedirectSource,
from: bl.Source,
frontUrl: bl.UrlString,
depth: depth + 1,
})
}()
}
func (pool *BrutePool) doCrawl(bl *baseline.Baseline) {
if !pool.Crawl || bl.ReqDepth >= pool.MaxCrawlDepth {
return
}
bl.CollectURL()
if bl.URLs == nil {
return
}
pool.wg.Add(2)
pool.doScopeCrawl(bl)
go func() {
defer pool.wg.Done()
for _, u := range bl.URLs {
if u = pkg.FormatURL(bl.Url.Path, u); u == "" {
continue
}
pool.addAddition(&Unit{
path: u,
parent: bl.Number,
host: bl.Host,
source: parsers.CrawlSource,
from: bl.Source,
depth: bl.ReqDepth + 1,
})
}
}()
}
func (pool *BrutePool) doScopeCrawl(bl *baseline.Baseline) {
if bl.ReqDepth >= pool.MaxCrawlDepth {
pool.wg.Done()
return
}
go func() {
defer pool.wg.Done()
for _, u := range bl.URLs {
if strings.HasPrefix(u, "http") {
if v, _ := url.Parse(u); v == nil || !pkg.MatchWithGlobs(v.Host, pool.Scope) {
continue
}
pool.scopeLocker.Lock()
if _, ok := pool.scopeurls[u]; !ok {
pool.urls.Store(u, nil)
pool.wg.Add(1)
pool.scopePool.Invoke(&Unit{
path: u,
parent: bl.Number,
source: parsers.CrawlSource,
from: bl.Source,
depth: bl.ReqDepth + 1,
})
}
pool.scopeLocker.Unlock()
}
}
}()
}
func (pool *BrutePool) doFuzzuli() {
defer pool.wg.Done()
if pool.Mod == HostSpray {
return
}
for w := range NewBruteDSL(pool.Config, "{?0}.{?@bak_ext}", [][]string{pkg.BakGenerator(pool.url.Host)}).Output {
pool.addAddition(&Unit{
path: pool.dir + w,
source: parsers.BakSource,
})
}
}
func (pool *BrutePool) doBak() {
defer pool.wg.Done()
if pool.Mod == HostSpray {
return
}
for w := range NewBruteDSL(pool.Config, "{?@bak_name}.{?@bak_ext}", nil).Output {
pool.addAddition(&Unit{
path: pool.dir + w,
source: parsers.BakSource,
})
}
}
func (pool *BrutePool) doAppend(bl *baseline.Baseline) {
pool.wg.Add(2)
pool.doAppendWords(bl)
pool.doAppendRule(bl)
}
func (pool *BrutePool) doAppendRule(bl *baseline.Baseline) {
if pool.AppendRule == nil || bl.Source == parsers.AppendRuleSource || bl.ReqDepth >= pool.MaxAppendDepth {
pool.wg.Done()
return
}
go func() {
defer pool.wg.Done()
for u := range rule.RunAsStream(pool.AppendRule.Expressions, path.Base(bl.Path)) {
pool.addAddition(&Unit{
path: pkg.Dir(bl.Url.Path) + u,
parent: bl.Number,
host: bl.Host,
source: parsers.AppendRuleSource,
from: bl.Source,
depth: bl.ReqDepth + 1,
})
}
}()
}
func (pool *BrutePool) doAppendWords(bl *baseline.Baseline) {
if pool.AppendWords == nil || bl.Source == parsers.AppendSource || bl.Source == parsers.RuleSource || bl.ReqDepth >= pool.MaxAppendDepth {
// prevent self-recursion
pool.wg.Done()
return
}
go func() {
defer pool.wg.Done()
for u := range NewBruteWords(pool.Config, pool.AppendWords).Output {
pool.addAddition(&Unit{
path: pkg.SafePath(bl.Path, u),
parent: bl.Number,
host: bl.Host,
source: parsers.AppendSource,
from: bl.Source,
depth: bl.RecuDepth + 1,
})
}
}()
}
func (pool *BrutePool) doActive() {
defer pool.wg.Done()
if pool.Mod == HostSpray {
return
}
for _, u := range pkg.ActivePath {
pool.addAddition(&Unit{
path: pool.dir + u[1:],
source: parsers.FingerSource,
})
}
}
func (pool *BrutePool) doCommonFile() {
defer pool.wg.Done()
if pool.Mod == HostSpray {
return
}
for u := range NewBruteWords(pool.Config, append(pkg.Dicts["common"], pkg.Dicts["log"]...)).Output {
pool.addAddition(&Unit{
path: pool.dir + u,
source: parsers.CommonFileSource,
})
}
}


@ -1,233 +0,0 @@
package pool
import (
"context"
"github.com/chainreactors/logs"
"github.com/chainreactors/parsers"
"github.com/chainreactors/spray/core/baseline"
"github.com/chainreactors/spray/core/ihttp"
"github.com/chainreactors/spray/pkg"
"github.com/panjf2000/ants/v2"
"net/url"
"strings"
"sync"
"time"
)
// stateless detection mode similar to httpx: no scope handling, no per-target concurrency pool
func NewCheckPool(ctx context.Context, config *Config) (*CheckPool, error) {
pctx, cancel := context.WithCancel(ctx)
config.ClientType = ihttp.STANDARD
pool := &CheckPool{
BasePool: &BasePool{
Config: config,
Statistor: pkg.NewStatistor(""),
ctx: pctx,
Cancel: cancel,
client: ihttp.NewClient(&ihttp.ClientConfig{
Thread: config.Thread,
Type: config.ClientType,
Timeout: config.Timeout,
ProxyClient: config.ProxyClient,
}),
wg: &sync.WaitGroup{},
additionCh: make(chan *Unit, 1024),
closeCh: make(chan struct{}),
processCh: make(chan *baseline.Baseline, config.Thread),
},
}
pool.Headers.Set("Connection", "close")
p, _ := ants.NewPoolWithFunc(config.Thread, pool.Invoke)
pool.Pool = p
go pool.Handler()
return pool, nil
}
type CheckPool struct {
*BasePool
Pool *ants.PoolWithFunc
}
func (pool *CheckPool) Run(ctx context.Context, offset, limit int) {
pool.Worder.Run()
var done bool
// watchdog goroutine: polls done every 100ms; once done, waits for workers and closes closeCh so the select in Loop can break out
go func() {
for {
if done {
pool.wg.Wait()
close(pool.closeCh)
return
}
time.Sleep(100 * time.Millisecond)
}
}()
Loop:
for {
select {
case u, ok := <-pool.Worder.Output:
if !ok {
done = true
continue
}
if pool.reqCount < offset {
pool.reqCount++
continue
}
if pool.reqCount > limit {
continue
}
pool.wg.Add(1)
_ = pool.Pool.Invoke(newUnit(u, parsers.CheckSource))
case u, ok := <-pool.additionCh:
if !ok {
continue
}
_ = pool.Pool.Invoke(u)
case <-pool.closeCh:
break Loop
case <-ctx.Done():
break Loop
case <-pool.ctx.Done():
break Loop
}
}
pool.Close()
}
func (pool *CheckPool) Close() {
pool.Bar.Close()
pool.Pool.Release()
}
func (pool *CheckPool) Invoke(v interface{}) {
defer func() {
pool.reqCount++
pool.wg.Done()
}()
unit := v.(*Unit)
req, err := ihttp.BuildRequest(pool.ctx, pool.ClientType, unit.path, "", "", "GET")
if err != nil {
logs.Log.Debug(err.Error())
bl := &baseline.Baseline{
SprayResult: &parsers.SprayResult{
UrlString: unit.path,
IsValid: false,
ErrString: err.Error(),
Reason: pkg.ErrUrlError.Error(),
ReqDepth: unit.depth,
},
}
pool.processCh <- bl
return
}
req.SetHeaders(pool.Headers, pool.RandomUserAgent)
start := time.Now()
var bl *baseline.Baseline
resp, reqerr := pool.client.Do(req)
if reqerr != nil {
pool.failedCount++
bl = &baseline.Baseline{
SprayResult: &parsers.SprayResult{
UrlString: unit.path,
IsValid: false,
ErrString: reqerr.Error(),
Reason: pkg.ErrRequestFailed.Error(),
ReqDepth: unit.depth,
},
}
logs.Log.Debugf("%s, %s", unit.path, reqerr.Error())
pool.doUpgrade(bl)
} else {
bl = baseline.NewBaseline(req.URI(), req.Host(), resp)
bl.ReqDepth = unit.depth
bl.Collect()
if bl.Status == 400 {
pool.doUpgrade(bl)
}
}
bl.ReqDepth = unit.depth
bl.Source = unit.source
bl.Spended = time.Since(start).Milliseconds()
pool.processCh <- bl
}
func (pool *CheckPool) Handler() {
for bl := range pool.processCh {
if bl.IsValid {
if bl.RedirectURL != "" {
pool.doRedirect(bl, bl.ReqDepth)
pool.putToOutput(bl)
} else {
params := map[string]interface{}{
"current": bl,
}
if pool.MatchExpr != nil && pkg.CompareWithExpr(pool.MatchExpr, params) {
bl.IsValid = true
}
}
}
if bl.Source == parsers.CheckSource {
pool.Bar.Done()
}
pool.putToOutput(bl)
}
}
func (pool *CheckPool) doRedirect(bl *baseline.Baseline, depth int) {
if depth >= pool.MaxRedirect {
return
}
var reURL string
if strings.HasPrefix(bl.RedirectURL, "http") {
_, err := url.Parse(bl.RedirectURL)
if err != nil {
return
}
reURL = bl.RedirectURL
} else {
reURL = pkg.BaseURL(bl.Url) + pkg.FormatURL(pkg.BaseURL(bl.Url), bl.RedirectURL)
}
pool.wg.Add(1)
go func() {
pool.additionCh <- &Unit{
path: reURL,
parent: bl.Number,
source: parsers.RedirectSource,
frontUrl: bl.UrlString,
depth: depth + 1,
from: bl.Source,
}
}()
}
// protocol switching (http <-> https) on tcp errors and 400 responses
func (pool *CheckPool) doUpgrade(bl *baseline.Baseline) {
if bl.ReqDepth >= 1 {
return
}
pool.wg.Add(1)
var reurl string
if strings.HasPrefix(bl.UrlString, "https") {
reurl = strings.Replace(bl.UrlString, "https", "http", 1)
} else {
reurl = strings.Replace(bl.UrlString, "http", "https", 1)
}
go func() {
pool.additionCh <- &Unit{
path: reurl,
parent: bl.Number,
source: parsers.UpgradeSource,
depth: bl.ReqDepth + 1,
from: bl.Source,
}
}()
}


@ -1,72 +0,0 @@
package pool
import (
"github.com/chainreactors/logs"
"github.com/chainreactors/proxyclient"
"github.com/chainreactors/spray/core/baseline"
"github.com/chainreactors/words"
"github.com/chainreactors/words/rule"
"github.com/expr-lang/expr/vm"
"net/http"
"sync"
"time"
)
type Config struct {
BaseURL string
ProxyClient proxyclient.Dial
Thread int
Wordlist []string
Timeout time.Duration
ProcessCh chan *baseline.Baseline
OutputCh chan *baseline.Baseline
FuzzyCh chan *baseline.Baseline
Outwg *sync.WaitGroup
RateLimit int
CheckPeriod int
ErrPeriod int32
BreakThreshold int32
Method string
Mod SprayMod
Headers http.Header
ClientType int
MatchExpr *vm.Program
FilterExpr *vm.Program
RecuExpr *vm.Program
AppendRule *rule.Program
Fns []words.WordFunc
AppendWords []string
Fuzzy bool
IgnoreWaf bool
Crawl bool
Scope []string
Active bool
Bak bool
Fuzzuli bool
Common bool
RetryLimit int
RandomUserAgent bool
Random string
Index string
MaxRedirect int
MaxCrawlDepth int
MaxRecursionDepth int
MaxAppendDepth int
}
func NewBruteWords(config *Config, list []string) *words.Worder {
word := words.NewWorderWithList(list)
word.Fns = config.Fns
word.Run()
return word
}
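// NewBruteDSL expands a mask expression into a word stream; e.g. the bak
// plugin uses NewBruteDSL(config, "{?@bak_name}.{?@bak_ext}", nil) to
// enumerate backup file names.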
func NewBruteDSL(config *Config, dsl string, params [][]string) *words.Worder {
word, err := words.NewWorderWithDsl(dsl, params, nil)
if err != nil {
logs.Log.Error(err.Error())
}
word.Fns = config.Fns
word.Run()
return word
}


@ -1,72 +0,0 @@
package pool
import (
"context"
"github.com/chainreactors/parsers"
"github.com/chainreactors/spray/core/baseline"
"github.com/chainreactors/spray/core/ihttp"
"github.com/chainreactors/spray/pkg"
"github.com/chainreactors/words"
"sync"
"sync/atomic"
)
type BasePool struct {
*Config
Statistor *pkg.Statistor
Bar *pkg.Bar
Worder *words.Worder
Cancel context.CancelFunc
client *ihttp.Client
ctx context.Context
processCh chan *baseline.Baseline // 待处理的baseline
dir string
reqCount int
failedCount int
additionCh chan *Unit
closeCh chan struct{}
wg *sync.WaitGroup
isFallback atomic.Bool
}
func (pool *BasePool) doRetry(bl *baseline.Baseline) {
if bl.Retry >= pool.RetryLimit {
return
}
pool.wg.Add(1)
go func() {
defer pool.wg.Done()
pool.addAddition(&Unit{
path: bl.Path,
parent: bl.Number,
host: bl.Host,
source: parsers.RetrySource,
from: bl.Source,
retry: bl.Retry + 1,
})
}()
}
func (pool *BasePool) addAddition(u *Unit) {
// forcibly suppress the panic (send on a closed channel) to prevent goroutine leaks
pool.wg.Add(1)
defer func() {
if err := recover(); err != nil {
}
}()
pool.additionCh <- u
}
func (pool *BasePool) putToOutput(bl *baseline.Baseline) {
if bl.IsValid || bl.IsFuzzy {
bl.Collect()
}
pool.Outwg.Add(1)
pool.OutputCh <- bl
}
func (pool *BasePool) putToFuzzy(bl *baseline.Baseline) {
pool.Outwg.Add(1)
bl.IsFuzzy = true
pool.FuzzyCh <- bl
}


@ -1,57 +0,0 @@
package pool
import (
"github.com/chainreactors/parsers"
"github.com/chainreactors/spray/core/baseline"
)
func newUnit(path string, source parsers.SpraySource) *Unit {
return &Unit{path: path, source: source}
}
type Unit struct {
number int
parent int
host string
path string
from parsers.SpraySource
source parsers.SpraySource
retry int
frontUrl string
depth int
}
func (u *Unit) Update(bl *baseline.Baseline) {
bl.Number = u.number
bl.Parent = u.parent
bl.Host = u.host
bl.Path = u.path
bl.Source = u.source
}
func NewBaselines() *Baselines {
return &Baselines{
baselines: map[int]*baseline.Baseline{},
}
}
type Baselines struct {
FailedBaselines []*baseline.Baseline
random *baseline.Baseline
index *baseline.Baseline
baselines map[int]*baseline.Baseline
}
type SprayMod int
const (
PathSpray SprayMod = iota + 1
HostSpray
ParamSpray
CustomSpray
)
var ModMap = map[string]SprayMod{
"path": PathSpray,
"host": HostSpray,
}


@ -1,460 +0,0 @@
package core
import (
"context"
"github.com/chainreactors/files"
"github.com/chainreactors/logs"
"github.com/chainreactors/proxyclient"
"github.com/chainreactors/spray/core/baseline"
"github.com/chainreactors/spray/core/ihttp"
"github.com/chainreactors/spray/core/pool"
"github.com/chainreactors/spray/pkg"
"github.com/chainreactors/words"
"github.com/chainreactors/words/rule"
"github.com/expr-lang/expr/vm"
"github.com/panjf2000/ants/v2"
"github.com/vbauerster/mpb/v8"
"github.com/vbauerster/mpb/v8/decor"
"net/http"
"strings"
"sync"
"time"
)
var (
MAX = 2147483647
)
type Runner struct {
*Option
taskCh chan *Task
poolwg *sync.WaitGroup
outwg *sync.WaitGroup
outputCh chan *baseline.Baseline
fuzzyCh chan *baseline.Baseline
bar *mpb.Bar
bruteMod bool
ProxyClient proxyclient.Dial
IsCheck bool
Pools *ants.PoolWithFunc
PoolName map[string]bool
Tasks *TaskGenerator
Rules *rule.Program
AppendRules *rule.Program
Headers map[string]string
FilterExpr *vm.Program
MatchExpr *vm.Program
RecursiveExpr *vm.Program
OutputFile *files.File
//FuzzyFile *files.File
DumpFile *files.File
StatFile *files.File
Progress *mpb.Progress
Fns []words.WordFunc
Count int // tasks total number
Wordlist []string
AppendWords []string
ClientType int
Probes []string
Total int // wordlist total number
Color bool
Jsonify bool
}
func (r *Runner) PrepareConfig() *pool.Config {
config := &pool.Config{
Thread: r.Threads,
Timeout: time.Duration(r.Timeout) * time.Second,
RateLimit: r.RateLimit,
Headers: make(http.Header),
Method: r.Method,
Mod: pool.ModMap[r.Mod],
OutputCh: r.outputCh,
FuzzyCh: r.fuzzyCh,
Outwg: r.outwg,
Fuzzy: r.Fuzzy,
CheckPeriod: r.CheckPeriod,
ErrPeriod: int32(r.ErrPeriod),
BreakThreshold: int32(r.BreakThreshold),
MatchExpr: r.MatchExpr,
FilterExpr: r.FilterExpr,
RecuExpr: r.RecursiveExpr,
AppendRule: r.AppendRules, // rules appended for valid dirs, generated from rule files
AppendWords: r.AppendWords, // wordlist appended for valid dirs
Fns: r.Fns,
//IgnoreWaf: r.IgnoreWaf,
Crawl: r.CrawlPlugin,
Scope: r.Scope,
Active: r.Finger,
Bak: r.BakPlugin,
Fuzzuli: r.FuzzuliPlugin,
Common: r.CommonPlugin,
RetryLimit: r.RetryCount,
ClientType: r.ClientType,
RandomUserAgent: r.RandomUserAgent,
Random: r.Random,
Index: r.Index,
MaxRecursionDepth: r.Depth,
MaxRedirect: 3,
MaxAppendDepth: r.AppendDepth,
MaxCrawlDepth: r.CrawlDepth,
ProxyClient: r.ProxyClient,
}
if config.ClientType == ihttp.Auto {
if config.Mod == pool.PathSpray {
config.ClientType = ihttp.FAST
} else if config.Mod == pool.HostSpray {
config.ClientType = ihttp.STANDARD
}
}
for k, v := range r.Headers {
config.Headers.Set(k, v)
}
if config.Headers.Get("User-Agent") == "" {
config.Headers.Set("User-Agent", pkg.DefaultUserAgent)
}
if config.Headers.Get("Accept") == "" {
config.Headers.Set("Accept", "*/*")
}
return config
}
func (r *Runner) AppendFunction(fn func(string) []string) {
r.Fns = append(r.Fns, fn)
}
func (r *Runner) Prepare(ctx context.Context) error {
if r.bruteMod {
r.IsCheck = false
}
r.OutputHandler()
var err error
if r.IsCheck {
// check-only mode, similar to httpx
r.Pools, err = ants.NewPoolWithFunc(1, func(i interface{}) {
config := r.PrepareConfig()
checkPool, err := pool.NewCheckPool(ctx, config)
if err != nil {
logs.Log.Error(err.Error())
checkPool.Cancel()
r.poolwg.Done()
return
}
ch := make(chan string)
go func() {
for t := range r.Tasks.tasks {
ch <- t.baseUrl
}
close(ch)
}()
checkPool.Worder = words.NewWorderWithChan(ch)
checkPool.Worder.Fns = r.Fns
checkPool.Bar = pkg.NewBar("check", r.Count-r.Offset, checkPool.Statistor, r.Progress)
checkPool.Run(ctx, r.Offset, r.Count)
r.poolwg.Done()
})
r.RunWithCheck(ctx)
} else {
// full brute mode
go func() {
for t := range r.Tasks.tasks {
r.taskCh <- t
}
close(r.taskCh)
}()
if r.Count > 0 {
r.newBar(r.Count)
}
r.Pools, err = ants.NewPoolWithFunc(r.PoolSize, func(i interface{}) {
t := i.(*Task)
if t.origin != nil && t.origin.End == t.origin.Total {
r.saveStat(t.origin.Json())
r.Done()
return
}
config := r.PrepareConfig()
config.BaseURL = t.baseUrl
brutePool, err := pool.NewBrutePool(ctx, config)
if err != nil {
logs.Log.Error(err.Error())
brutePool.Cancel()
r.Done()
return
}
if t.origin != nil && len(r.Wordlist) == 0 {
// a task resumed from a checkpoint restores word, dict and rule automatically, at lower priority than CLI flags
brutePool.Statistor = pkg.NewStatistorFromStat(t.origin.Statistor)
brutePool.Worder, err = t.origin.InitWorder(r.Fns)
if err != nil {
logs.Log.Error(err.Error())
r.Done()
return
}
brutePool.Statistor.Total = t.origin.sum
} else {
brutePool.Statistor = pkg.NewStatistor(t.baseUrl)
brutePool.Worder = words.NewWorderWithList(r.Wordlist)
brutePool.Worder.Fns = r.Fns
brutePool.Worder.Rules = r.Rules.Expressions
}
var limit int
if brutePool.Statistor.Total > r.Limit && r.Limit != 0 {
limit = r.Limit
} else {
limit = brutePool.Statistor.Total
}
brutePool.Bar = pkg.NewBar(config.BaseURL, limit-brutePool.Statistor.Offset, brutePool.Statistor, r.Progress)
logs.Log.Importantf("[pool] task: %s, total %d words, %d threads, proxy: %v",
brutePool.BaseURL, limit-brutePool.Statistor.Offset, brutePool.Thread, r.Proxies)
err = brutePool.Init()
if err != nil {
brutePool.Statistor.Error = err.Error()
if !r.Force {
// without --force, a failed init closes the pool
brutePool.Bar.Close()
brutePool.Close()
r.PrintStat(brutePool)
r.Done()
return
}
}
brutePool.Run(brutePool.Statistor.Offset, limit)
if brutePool.IsFailed && len(brutePool.FailedBaselines) > 0 {
// when exiting due to accumulated errors, point End at the first failure so resume does not skip many targets
brutePool.Statistor.End = brutePool.FailedBaselines[0].Number
}
r.PrintStat(brutePool)
r.Done()
})
r.Run(ctx)
}
if err != nil {
return err
}
return nil
}
func (r *Runner) Run(ctx context.Context) {
Loop:
for {
select {
case <-ctx.Done():
// past the deadline, tasks that never started are recorded into stat
if len(r.taskCh) > 0 {
for t := range r.taskCh {
stat := pkg.NewStatistor(t.baseUrl)
r.saveStat(stat.Json())
}
}
if r.StatFile != nil {
logs.Log.Importantf("already save all stat to %s", r.StatFile.Filename)
}
break Loop
case t, ok := <-r.taskCh:
if !ok {
break Loop
}
r.AddPool(t)
}
}
if r.bar != nil {
r.bar.Wait()
}
r.poolwg.Wait()
r.outwg.Wait()
}
func (r *Runner) RunWithCheck(ctx context.Context) {
stopCh := make(chan struct{})
r.poolwg.Add(1)
err := r.Pools.Invoke(struct{}{})
if err != nil {
return
}
go func() {
r.poolwg.Wait()
stopCh <- struct{}{}
}()
Loop:
for {
select {
case <-ctx.Done():
logs.Log.Error("cancel with deadline")
break Loop
case <-stopCh:
break Loop
}
}
r.outwg.Wait()
}
func (r *Runner) AddRecursive(bl *baseline.Baseline) {
// recursive new task
task := &Task{
baseUrl: bl.UrlString,
depth: bl.RecuDepth + 1,
origin: NewOrigin(pkg.NewStatistor(bl.UrlString)),
}
r.AddPool(task)
}
func (r *Runner) AddPool(task *Task) {
// recursive new task
if _, ok := r.PoolName[task.baseUrl]; ok {
logs.Log.Importantf("already added pool, skip %s", task.baseUrl)
return
}
task.depth++
r.poolwg.Add(1)
r.Pools.Invoke(task)
}
func (r *Runner) newBar(total int) {
if r.Progress == nil {
return
}
prompt := "total progressive:"
r.bar = r.Progress.AddBar(int64(total),
mpb.BarFillerClearOnComplete(), // optional: clear the bar once complete
mpb.PrependDecorators(
// show custom info such as speed and progress
decor.Name(prompt, decor.WC{W: len(prompt) + 1, C: decor.DidentRight}), // decorator width/alignment tuned here
decor.OnComplete( // text shown once complete
decor.Counters(0, "% d/% d"), " done!",
),
),
mpb.AppendDecorators(
// show elapsed time
decor.Elapsed(decor.ET_STYLE_GO, decor.WC{W: 4}),
),
)
}
func (r *Runner) Done() {
if r.bar != nil {
r.bar.Increment()
}
r.poolwg.Done()
}
func (r *Runner) PrintStat(pool *pool.BrutePool) {
if r.Color {
logs.Log.Important(pool.Statistor.ColorString())
if pool.Statistor.Error == "" {
logs.Log.Log(pkg.LogVerbose, pool.Statistor.ColorCountString())
logs.Log.Log(pkg.LogVerbose, pool.Statistor.ColorSourceString())
}
} else {
logs.Log.Important(pool.Statistor.String())
if pool.Statistor.Error == "" {
logs.Log.Log(pkg.LogVerbose, pool.Statistor.CountString())
logs.Log.Log(pkg.LogVerbose, pool.Statistor.SourceString())
}
}
r.saveStat(pool.Statistor.Json())
}
func (r *Runner) saveStat(content string) {
if r.StatFile != nil {
r.StatFile.SafeWrite(content)
r.StatFile.SafeSync()
}
}
func (r *Runner) Output(bl *baseline.Baseline) {
var out string
if r.Option.Json {
out = bl.ToJson()
} else if len(r.Probes) > 0 {
out = bl.ProbeOutput(r.Probes)
} else if r.Color {
out = bl.ColorString()
} else {
out = bl.String()
}
if bl.IsValid {
logs.Log.Console(out + "\n")
} else if r.Fuzzy && bl.IsFuzzy {
logs.Log.Console("[fuzzy] " + out + "\n")
}
if r.OutputFile != nil {
if r.FileOutput == "json" {
r.OutputFile.SafeWrite(bl.ToJson() + "\n")
} else if r.FileOutput == "csv" {
r.OutputFile.SafeWrite(bl.ToCSV())
} else if r.FileOutput == "full" {
r.OutputFile.SafeWrite(bl.String() + "\n")
} else {
r.OutputFile.SafeWrite(bl.ProbeOutput(strings.Split(r.FileOutput, ",")) + "\n")
}
r.OutputFile.SafeSync()
}
}
func (r *Runner) OutputHandler() {
go func() {
for {
select {
case bl, ok := <-r.outputCh:
if !ok {
return
}
if r.DumpFile != nil {
r.DumpFile.SafeWrite(bl.ToJson() + "\n")
r.DumpFile.SafeSync()
}
if bl.IsValid {
r.Output(bl)
if bl.Recu {
r.AddRecursive(bl)
}
} else {
if r.Color {
logs.Log.Debug(bl.ColorString())
} else {
logs.Log.Debug(bl.String())
}
}
r.outwg.Done()
}
}
}()
go func() {
for {
select {
case bl, ok := <-r.fuzzyCh:
if !ok {
return
}
r.Output(bl)
r.outwg.Done()
}
}
}()
}


@ -1,73 +0,0 @@
package core
import (
"fmt"
"github.com/chainreactors/logs"
"github.com/chainreactors/utils"
"github.com/chainreactors/words/rule"
"net/url"
)
type Task struct {
baseUrl string
depth int
rule []rule.Expression
origin *Origin
}
func NewTaskGenerator(port string) *TaskGenerator {
gen := &TaskGenerator{
ports: utils.ParsePortsString(port),
tasks: make(chan *Task),
In: make(chan *Task),
}
go func() {
for task := range gen.In {
gen.tasks <- task
}
close(gen.tasks)
}()
return gen
}
type TaskGenerator struct {
Name string
ports []string
tasks chan *Task
In chan *Task
}
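// Run parses baseurl, infers the scheme when missing, and emits one task per
// configured port, e.g. Run("example.com") with ports [80 443] yields
// "http://example.com:80" and "http://example.com:443".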
func (gen *TaskGenerator) Run(baseurl string) {
parsed, err := url.Parse(baseurl)
if err != nil {
logs.Log.Warnf("parse %s, %s ", baseurl, err.Error())
return
}
if parsed.Scheme == "" {
if parsed.Port() == "443" {
parsed.Scheme = "https"
} else {
parsed.Scheme = "http"
}
}
if len(gen.ports) == 0 {
gen.In <- &Task{baseUrl: parsed.String()}
return
}
for _, p := range gen.ports {
if parsed.Host == "" {
gen.In <- &Task{baseUrl: fmt.Sprintf("%s://%s:%s", parsed.Scheme, parsed.Path, p)}
} else {
gen.In <- &Task{baseUrl: fmt.Sprintf("%s://%s:%s/%s", parsed.Scheme, parsed.Host, p, parsed.Path)}
}
}
}
func (gen *TaskGenerator) Close() {
close(gen.tasks)
}


@ -1,37 +0,0 @@
package core
import (
"github.com/chainreactors/spray/pkg"
"github.com/chainreactors/words"
)
func NewOrigin(stat *pkg.Statistor) *Origin {
return &Origin{Statistor: stat}
}
type Origin struct {
*pkg.Statistor
sum int
}
func (o *Origin) InitWorder(fns []words.WordFunc) (*words.Worder, error) {
var worder *words.Worder
wl, err := pkg.LoadWordlist(o.Word, o.Dictionaries)
if err != nil {
return nil, err
}
worder = words.NewWorderWithList(wl)
worder.Fns = fns
rules, err := pkg.LoadRuleWithFiles(o.RuleFiles, o.RuleFilter)
if err != nil {
return nil, err
}
worder.Rules = rules
if len(rules) > 0 {
o.sum = len(rules) * len(wl)
} else {
o.sum = len(wl)
}
return worder, nil
}

79
go.mod

@ -1,67 +1,36 @@
module github.com/chainreactors/spray
go 1.20
go 1.17
require (
github.com/chainreactors/files v0.0.0-20240716182835-7884ee1e77f0
github.com/chainreactors/fingers v0.0.0-20240716172449-2fc3147b9c2a
github.com/chainreactors/logs v0.0.0-20241115105204-6132e39f5261
github.com/chainreactors/parsers v0.0.0-20250605044448-6bc270f12c0e
github.com/chainreactors/proxyclient v1.0.3-0.20250219180226-a25a0c9e6ac8
github.com/chainreactors/utils v0.0.0-20240805193040-ff3b97aa3c3f
github.com/chainreactors/words v0.0.0-20240910083848-19a289e8984b
github.com/charmbracelet/lipgloss v0.13.0
github.com/expr-lang/expr v1.16.9
github.com/gookit/config/v2 v2.2.5
github.com/jessevdk/go-flags v1.5.0
github.com/panjf2000/ants/v2 v2.9.1
github.com/valyala/fasthttp v1.53.0
github.com/vbauerster/mpb/v8 v8.7.3
golang.org/x/time v0.5.0
sigs.k8s.io/yaml v1.4.0
github.com/chainreactors/files v0.2.5-0.20221212083256-16ee4c1ae47e
github.com/chainreactors/go-metrics v0.0.0-20220926021830-24787b7a10f8
github.com/chainreactors/gogo/v2 v2.10.1
github.com/chainreactors/ipcs v0.0.13
github.com/chainreactors/logs v0.7.1-0.20221214153111-85f123ff6580
github.com/chainreactors/parsers v0.2.9-0.20221210155102-cc0814762410
github.com/chainreactors/words v0.3.2-0.20230105161651-7c1fc4c9605a
)
require (
dario.cat/mergo v1.0.0 // indirect
github.com/VividCortex/ewma v1.2.0 // indirect
github.com/acarl005/stripansi v0.0.0-20180116102854-5a71ef0e047d // indirect
github.com/andybalholm/brotli v1.1.0 // indirect
github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
github.com/charmbracelet/x/ansi v0.1.4 // indirect
github.com/facebookincubator/nvdtools v0.1.5 // indirect
github.com/fatih/color v1.17.0 // indirect
github.com/antonmedv/expr v1.9.0
github.com/gosuri/uiprogress v0.0.1
github.com/jessevdk/go-flags v1.5.0
github.com/panjf2000/ants/v2 v2.7.0
github.com/valyala/fasthttp v1.43.0
sigs.k8s.io/yaml v1.3.0
)
require (
github.com/andybalholm/brotli v1.0.4 // indirect
github.com/go-dedup/megophone v0.0.0-20170830025436-f01be21026f5 // indirect
github.com/go-dedup/simhash v0.0.0-20170904020510-9ecaca7b509c // indirect
github.com/go-dedup/text v0.0.0-20170907015346-8bb1b95e3cb7 // indirect
github.com/go-playground/validator/v10 v10.20.0 // indirect
github.com/goccy/go-yaml v1.11.3 // indirect
github.com/gookit/color v1.5.4 // indirect
github.com/gookit/goutil v0.6.15 // indirect
github.com/klauspost/compress v1.17.8 // indirect
github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-runewidth v0.0.15 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/muesli/termenv v0.15.2 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/riobard/go-bloom v0.0.0-20200614022211-cdc8013cb5b3 // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/rogpeppe/go-internal v1.12.0 // indirect
github.com/shadowsocks/go-shadowsocks2 v0.1.5 // indirect
github.com/twmb/murmur3 v1.1.8 // indirect
github.com/gosuri/uilive v0.0.4 // indirect
github.com/klauspost/compress v1.15.10 // indirect
github.com/mattn/go-isatty v0.0.16 // indirect
github.com/twmb/murmur3 v1.1.6 // indirect
github.com/valyala/bytebufferpool v1.0.0 // indirect
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
golang.org/x/crypto v0.33.0 // indirect
golang.org/x/exp v0.0.0-20240506185415-9bf2ced13842 // indirect
golang.org/x/net v0.25.0 // indirect
golang.org/x/sync v0.11.0 // indirect
golang.org/x/sys v0.30.0 // indirect
golang.org/x/term v0.29.0 // indirect
golang.org/x/text v0.22.0 // indirect
golang.org/x/xerrors v0.0.0-20231012003039-104605ab7028 // indirect
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
golang.org/x/sys v0.2.0 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
)
replace github.com/chainreactors/proxyclient => github.com/chainreactors/proxyclient v1.0.3

1018
go.sum

File diff suppressed because it is too large

117
internal/checkpool.go Normal file

@ -0,0 +1,117 @@
package internal
import (
"context"
"fmt"
"github.com/chainreactors/logs"
"github.com/chainreactors/spray/pkg"
"github.com/chainreactors/spray/pkg/ihttp"
"github.com/chainreactors/words"
"github.com/panjf2000/ants/v2"
"github.com/valyala/fasthttp"
"sync"
"time"
)
func NewCheckPool(ctx context.Context, config *pkg.Config) (*CheckPool, error) {
pctx, cancel := context.WithCancel(ctx)
pool := &CheckPool{
Config: config,
ctx: pctx,
cancel: cancel,
client: ihttp.NewClient(config.Thread, 2, config.ClientType),
wg: sync.WaitGroup{},
reqCount: 1,
failedCount: 1,
}
p, _ := ants.NewPoolWithFunc(config.Thread, func(i interface{}) {
unit := i.(*Unit)
req, err := pool.genReq(unit.path)
if err != nil {
logs.Log.Error(err.Error())
}
start := time.Now()
var bl *pkg.Baseline
resp, reqerr := pool.client.Do(pctx, req)
if pool.ClientType == ihttp.FAST {
defer fasthttp.ReleaseResponse(resp.FastResponse)
defer fasthttp.ReleaseRequest(req.FastRequest)
}
if reqerr != nil && reqerr != fasthttp.ErrBodyTooLarge {
pool.failedCount++
bl = &pkg.Baseline{UrlString: pool.BaseURL + unit.path, IsValid: false, ErrString: reqerr.Error(), Reason: ErrRequestFailed.Error()}
} else {
bl = pkg.NewBaseline(req.URI(), req.Host(), resp)
bl.Collect()
}
bl.Spended = time.Since(start).Milliseconds()
pool.OutputCh <- bl
pool.reqCount++
pool.wg.Done()
pool.bar.Done()
})
pool.pool = p
return pool, nil
}
type CheckPool struct {
*pkg.Config
client *ihttp.Client
pool *ants.PoolWithFunc
bar *pkg.Bar
ctx context.Context
cancel context.CancelFunc
reqCount int
failedCount int
worder *words.Worder
wg sync.WaitGroup
}
func (p *CheckPool) Close() {
p.bar.Close()
}
func (p *CheckPool) genReq(s string) (*ihttp.Request, error) {
if p.Mod == pkg.HostSpray {
return ihttp.BuildHostRequest(p.ClientType, p.BaseURL, s)
} else if p.Mod == pkg.PathSpray {
return ihttp.BuildPathRequest(p.ClientType, p.BaseURL, s)
}
return nil, fmt.Errorf("unknown mod")
}
func (p *CheckPool) Run(ctx context.Context, offset, limit int) {
p.worder.Run()
Loop:
for {
select {
case u, ok := <-p.worder.C:
if !ok {
break Loop
}
if p.reqCount < offset {
p.reqCount++
continue
}
if p.reqCount > limit {
break Loop
}
p.wg.Add(1)
_ = p.pool.Invoke(newUnit(u, WordSource))
case <-ctx.Done():
break Loop
case <-p.ctx.Done():
break Loop
}
}
p.wg.Wait()
p.Close()
}

29
internal/format.go Normal file

@ -0,0 +1,29 @@
package internal
import (
"bytes"
"encoding/json"
"github.com/chainreactors/logs"
"github.com/chainreactors/spray/pkg"
"io/ioutil"
)
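// Format reloads a dump file of JSON-lines results (one Baseline per line)
// and re-prints each entry in the standard single-line output format.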
func Format(filename string) {
content, err := ioutil.ReadFile(filename)
if err != nil {
return
}
var results []*pkg.Baseline
for _, line := range bytes.Split(bytes.TrimSpace(content), []byte("\n")) {
var result pkg.Baseline
err := json.Unmarshal(line, &result)
if err != nil {
logs.Log.Error(err.Error())
return
}
results = append(results, &result)
}
for _, result := range results {
logs.Log.Info(result.String())
}
}

509
internal/option.go Normal file

@ -0,0 +1,509 @@
package internal
import (
"fmt"
"github.com/antonmedv/expr"
"github.com/chainreactors/files"
"github.com/chainreactors/logs"
"github.com/chainreactors/spray/pkg"
"github.com/chainreactors/words/mask"
"github.com/chainreactors/words/rule"
"github.com/gosuri/uiprogress"
"io/ioutil"
"net/url"
"os"
"strconv"
"strings"
)
type Option struct {
InputOptions `group:"Input Options"`
FunctionOptions `group:"Function Options"`
OutputOptions `group:"Output Options"`
RequestOptions `group:"Request Options"`
ModeOptions `group:"Modify Options"`
MiscOptions `group:"Miscellaneous Options"`
}
type InputOptions struct {
ResumeFrom string `long:"resume"`
URL []string `short:"u" long:"url" description:"String, Multi, input baseurl, e.g.: http://google.com"`
URLFile string `short:"l" long:"list" description:"File, input filename"`
Raw string `long:"raw" description:"File, input raw request filename"`
Offset int `long:"offset" description:"Int, wordlist offset"`
Limit int `long:"limit" description:"Int, wordlist limit, start with offset. e.g.: --offset 1000 --limit 100"`
Dictionaries []string `short:"d" long:"dict" description:"Files, Multi,dict files, e.g.: -d 1.txt -d 2.txt"`
Word string `short:"w" long:"word" description:"String, word generate dsl, e.g.: -w test{?ld#4}"`
Rules []string `short:"r" long:"rules" description:"Files, Multi, rule files, e.g.: -r rule1.txt -r rule2.txt"`
AppendRule []string `long:"append-rule" description:"File, when found valid path , use append rule generator new word with current path"`
FilterRule string `long:"filter-rule" description:"String, filter rule, e.g.: --rule-filter '>8 <4'"`
}
type FunctionOptions struct {
Extensions string `short:"e" long:"extension" description:"String, add extensions (separated by commas), e.g.: -e jsp,jspx"`
ExcludeExtensions string `long:"exclude-extension" description:"String, exclude extensions (separated by commas), e.g.: --exclude-extension jsp,jspx"`
RemoveExtensions string `long:"remove-extension" description:"String, remove extensions (separated by commas), e.g.: --remove-extension jsp,jspx"`
Uppercase bool `short:"U" long:"uppercase" description:"Bool, upper wordlist, e.g.: --uppercase"`
Lowercase bool `short:"L" long:"lowercase" description:"Bool, lower wordlist, e.g.: --lowercase"`
Prefixes []string `long:"prefix" description:"Strings, Multi, add prefix, e.g.: --prefix aaa --prefix bbb"`
Suffixes []string `long:"suffix" description:"Strings, Multi, add suffix, e.g.: --suffix aaa --suffix bbb"`
Replaces map[string]string `long:"replace" description:"Strings, Multi, replace string, e.g.: --replace aaa:bbb --replace ccc:ddd"`
}
type OutputOptions struct {
Match string `long:"match" description:"String, custom match function, e.g.: --match current.Status != 200" json:"match,omitempty"`
Filter string `long:"filter" description:"String, custom filter function, e.g.: --filter current.Body contains 'hello'" json:"filter,omitempty"`
Extracts []string `long:"extract" description:"String, Multi, extract response, e.g.: --extract js --extract ip --extract version:(.*?)" json:"extracts,omitempty"`
OutputFile string `short:"f" long:"file" description:"String, output filename" json:"output_file,omitempty"`
Format string `short:"F" long:"format" description:"String, output format, e.g.: --format 1.json"`
FuzzyFile string `long:"fuzzy-file" description:"String, fuzzy output filename" json:"fuzzy_file,omitempty"`
DumpFile string `long:"dump-file" description:"String, dump all requests to the given file"`
Dump bool `long:"dump" description:"Bool, dump all requests"`
AutoFile bool `long:"auto-file" description:"Bool, auto generate output and fuzzy filenames"`
Fuzzy bool `long:"fuzzy" description:"Bool, enable fuzzy output" json:"fuzzy,omitempty"`
OutputProbe string `short:"o" long:"probe" description:"String, output probes (comma split), e.g.: -o url,status,length" json:"output_probe,omitempty"`
}
type RequestOptions struct {
Headers []string `long:"header" description:"String, Multi, custom headers, e.g.: --header 'Auth: example_auth'"`
UserAgent string `long:"user-agent" description:"String, custom user-agent, e.g.: --user-agent Custom"`
RandomUserAgent bool `long:"random-agent" description:"Bool, use random with default user-agent"`
Cookie []string `long:"cookie" description:"String, Multi, custom cookie"`
ReadAll bool `long:"read-all" description:"Bool, read all response body"`
MaxBodyLength int `long:"max-length" default:"100" description:"Int, max response body length (kb), default 100k, e.g. --max-length 1000"`
}
type ModeOptions struct {
Advance bool `short:"a" long:"advance" description:"Bool, enable crawl and active"`
Active bool `long:"active" description:"Bool, enable active finger detect"`
Crawl bool `long:"crawl" description:"Bool, enable crawl"`
Bak bool `long:"bak" description:"Bool, enable bak found"`
FileBak bool `long:"file-bak" description:"Bool, enable valid result bak found, equal --append-rule rule/filebak.txt"`
Common bool `long:"common" description:"Bool, enable common file found"`
Force bool `long:"force" description:"Bool, skip error break"`
CheckOnly bool `long:"check-only" description:"Bool, check only"`
Recursive string `long:"recursive" default:"current.IsDir()" description:"String, custom recursive rule, e.g.: --recursive current.IsDir()"`
Depth int `long:"depth" default:"0" description:"Int, recursive depth"`
CrawlDepth int `long:"crawl-depth" default:"3" description:"Int, crawl depth"`
CheckPeriod int `long:"check-period" default:"200" description:"Int, run a check every N requests"`
ErrPeriod int `long:"error-period" default:"10" description:"Int, run a check every N errors"`
BreakThreshold int `long:"error-threshold" default:"20" description:"Int, break when the error count exceeds the threshold"`
BlackStatus string `long:"black-status" default:"404,400,410" description:"Strings (comma split), custom black status"`
WhiteStatus string `long:"white-status" default:"200" description:"Strings (comma split), custom white status"`
FuzzyStatus string `long:"fuzzy-status" default:"403,500,501,502,503" description:"Strings (comma split), custom fuzzy status"`
SimhashDistance int `long:"distance" default:"5"`
}
type MiscOptions struct {
Deadline int `long:"deadline" default:"999999" description:"Int, deadline (seconds)"` // TODO: overall timeout, to fit cloud-function deadlines
Timeout int `long:"timeout" default:"2" description:"Int, timeout with request (seconds)"`
PoolSize int `short:"p" long:"pool" default:"5" description:"Int, Pool size"`
Threads int `short:"t" long:"thread" default:"20" description:"Int, number of threads per pool"`
Debug bool `long:"debug" description:"Bool, output debug info"`
Quiet bool `short:"q" long:"quiet" description:"Bool, Quiet"`
NoColor bool `long:"no-color" description:"Bool, no color"`
NoBar bool `long:"no-bar" description:"Bool, No progress bar"`
Mod string `short:"m" long:"mod" default:"path" choice:"path" choice:"host" description:"String, path/host spray"`
Client string `short:"c" long:"client" default:"auto" choice:"fast" choice:"standard" choice:"auto" description:"String, Client type"`
}
func (opt *Option) PrepareRunner() (*Runner, error) {
ok := opt.Validate()
if !ok {
return nil, fmt.Errorf("validate failed")
}
var err error
r := &Runner{
Progress: uiprogress.New(),
Threads: opt.Threads,
PoolSize: opt.PoolSize,
Mod: opt.Mod,
Timeout: opt.Timeout,
Deadline: opt.Deadline,
Headers: make(map[string]string),
Offset: opt.Offset,
Total: opt.Limit,
taskCh: make(chan *Task),
OutputCh: make(chan *pkg.Baseline, 100),
FuzzyCh: make(chan *pkg.Baseline, 100),
Fuzzy: opt.Fuzzy,
Force: opt.Force,
CheckOnly: opt.CheckOnly,
CheckPeriod: opt.CheckPeriod,
ErrPeriod: opt.ErrPeriod,
BreakThreshold: opt.BreakThreshold,
Crawl: opt.Crawl,
Active: opt.Active,
Bak: opt.Bak,
Common: opt.Common,
}
// log and bar
if !opt.NoColor {
logs.Log.Color = true
logs.DefaultColorMap[logs.Info] = logs.PurpleBold
logs.DefaultColorMap[logs.Important] = logs.Green
r.Color = true
}
if opt.Quiet {
logs.Log.Quiet = true
logs.Log.Color = false
r.Color = false
}
if !(opt.Quiet || opt.NoBar) {
r.Progress.Start()
logs.Log.Writer = r.Progress.Bypass()
}
// configuration
if opt.Force {
// with force mode enabled, disable the check mechanism and the auto-exit after accumulated errors
r.BreakThreshold = max
r.CheckPeriod = max
r.ErrPeriod = max
}
if opt.Advance {
r.Crawl = true
r.Active = true
r.Bak = true
r.Common = true
opt.AppendRule = append(opt.AppendRule, "filebak")
} else if opt.FileBak {
opt.AppendRule = append(opt.AppendRule, "filebak")
}
if opt.BlackStatus != "" {
for _, s := range strings.Split(opt.BlackStatus, ",") {
si, err := strconv.Atoi(s)
if err != nil {
return nil, err
}
BlackStatus = append(BlackStatus, si)
}
}
if opt.WhiteStatus != "" {
for _, s := range strings.Split(opt.WhiteStatus, ",") {
si, err := strconv.Atoi(s)
if err != nil {
return nil, err
}
WhiteStatus = append(WhiteStatus, si)
}
}
if opt.FuzzyStatus != "" {
for _, s := range strings.Split(opt.FuzzyStatus, ",") {
si, err := strconv.Atoi(s)
if err != nil {
return nil, err
}
FuzzyStatus = append(FuzzyStatus, si)
}
}
// prepare word
dicts := make([][]string, len(opt.Dictionaries))
for i, f := range opt.Dictionaries {
dicts[i], err = loadFileToSlice(f)
if err != nil {
return nil, err
}
if opt.ResumeFrom != "" {
dictCache[f] = dicts[i]
}
logs.Log.Importantf("Loaded %d words from %s", len(dicts[i]), f)
}
if len(opt.Dictionaries) == 0 && opt.Word == "" {
// used when only the advanced features are wanted, to avoid an error from the missing wordlist.
opt.Word = "/"
} else {
opt.Word = "{?"
for i := range dicts {
opt.Word += strconv.Itoa(i)
}
opt.Word += "}"
}
if opt.Suffixes != nil {
mask.SpecialWords["suffix"] = opt.Suffixes
opt.Word += "{@suffix}"
}
if opt.Prefixes != nil {
mask.SpecialWords["prefix"] = opt.Prefixes
opt.Word = "{@prefix}" + opt.Word
}
if opt.Extensions != "" {
exts := strings.Split(opt.Extensions, ",")
for i, e := range exts {
if !strings.HasPrefix(e, ".") {
exts[i] = "." + e
}
}
mask.SpecialWords["ext"] = exts
opt.Word += "{@ext}"
}
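// Illustrative example (not in the original source): with -d users.txt -d names.txt,
// --prefix /api, --suffix _v2 and -e php, the DSL assembled at this point is
// roughly "{@prefix}{?01}{@suffix}{@ext}", where {?01} draws words from both
// loaded dictionaries and the {@...} keywords were registered into
// mask.SpecialWords above.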
r.Wordlist, err = mask.Run(opt.Word, dicts, nil)
if err != nil {
return nil, err
}
if len(r.Wordlist) > 0 {
logs.Log.Importantf("Parsed %d words by %s", len(r.Wordlist), opt.Word)
}
if opt.Rules != nil {
rules, err := loadFileAndCombine(opt.Rules)
if err != nil {
return nil, err
}
r.Rules = rule.Compile(rules, opt.FilterRule)
} else if opt.FilterRule != "" {
// if filter rule is not empty, set rules to ":", force to open filter mode
r.Rules = rule.Compile(":", opt.FilterRule)
} else {
r.Rules = new(rule.Program)
}
if len(r.Rules.Expressions) > 0 {
r.Total = len(r.Wordlist) * len(r.Rules.Expressions)
} else {
r.Total = len(r.Wordlist)
}
pkg.DefaultStatistor = pkg.Statistor{
Word: opt.Word,
WordCount: len(r.Wordlist),
Dictionaries: opt.Dictionaries,
Offset: opt.Offset,
RuleFiles: opt.Rules,
RuleFilter: opt.FilterRule,
Total: r.Total,
}
if opt.AppendRule != nil {
content, err := loadFileAndCombine(opt.AppendRule)
if err != nil {
return nil, err
}
r.AppendRules = rule.Compile(content, "")
}
// prepare task
var tasks []*Task
var taskfrom string
if opt.ResumeFrom != "" {
stats, err := pkg.ReadStatistors(opt.ResumeFrom)
if err != nil {
return nil, err
}
taskfrom = "resume " + opt.ResumeFrom
for _, stat := range stats {
task := &Task{baseUrl: stat.BaseUrl, origin: stat}
tasks = append(tasks, task)
}
} else {
var file *os.File
var urls []string
if len(opt.URL) == 1 {
u, err := url.Parse(opt.URL[0])
if err != nil {
u, _ = url.Parse("http://" + opt.URL[0])
}
urls = append(urls, u.String())
tasks = append(tasks, &Task{baseUrl: opt.URL[0]})
taskfrom = u.Host
} else if len(opt.URL) > 1 {
for _, u := range opt.URL {
urls = append(urls, u)
tasks = append(tasks, &Task{baseUrl: u})
}
taskfrom = "cmd"
} else if opt.URLFile != "" {
file, err = os.Open(opt.URLFile)
if err != nil {
return nil, err
}
taskfrom = opt.URLFile
} else if pkg.HasStdin() {
file = os.Stdin
taskfrom = "stdin"
}
if file != nil {
content, err := ioutil.ReadAll(file)
if err != nil {
return nil, err
}
// assign to the outer urls (the original shadowed it, leaving r.URLList empty in check-only mode)
urls = strings.Split(strings.TrimSpace(string(content)), "\n")
for _, u := range urls {
tasks = append(tasks, &Task{baseUrl: strings.TrimSpace(u)})
}
}
if opt.CheckOnly {
r.URLList = urls
r.Total = len(r.URLList)
}
}
r.Tasks = tasks
logs.Log.Importantf("Loaded %d urls from %s", len(tasks), taskfrom)
if opt.Uppercase {
r.Fns = append(r.Fns, strings.ToUpper)
}
if opt.Lowercase {
r.Fns = append(r.Fns, strings.ToLower)
}
if opt.RemoveExtensions != "" {
rexts := strings.Split(opt.RemoveExtensions, ",")
r.Fns = append(r.Fns, func(s string) string {
if ext := parseExtension(s); pkg.StringsContains(rexts, ext) {
return strings.TrimSuffix(s, "."+ext)
}
return s
})
}
if opt.ExcludeExtensions != "" {
exexts := strings.Split(opt.ExcludeExtensions, ",")
r.Fns = append(r.Fns, func(s string) string {
if ext := parseExtension(s); pkg.StringsContains(exexts, ext) {
return ""
}
return s
})
}
if len(opt.Replaces) > 0 {
r.Fns = append(r.Fns, func(s string) string {
for k, v := range opt.Replaces {
s = strings.Replace(s, k, v, -1)
}
return s
})
}
logs.Log.Importantf("Loaded %d dictionaries and %d decorators", len(opt.Dictionaries), len(r.Fns))
if opt.Match != "" {
exp, err := expr.Compile(opt.Match)
if err != nil {
return nil, err
}
r.MatchExpr = exp
}
if opt.Filter != "" {
exp, err := expr.Compile(opt.Filter)
if err != nil {
return nil, err
}
r.FilterExpr = exp
}
if opt.Recursive != "current.IsDir()" {
maxRecursion = 1
exp, err := expr.Compile(opt.Recursive)
if err != nil {
return nil, err
}
r.RecursiveExpr = exp
}
if opt.Depth != 0 {
maxRecursion = opt.Depth
}
// prepare header
for _, h := range opt.Headers {
i := strings.Index(h, ":")
if i == -1 {
logs.Log.Warn("invalid header")
} else {
r.Headers[h[:i]] = strings.TrimSpace(h[i+1:])
}
}
if opt.UserAgent != "" {
r.Headers["User-Agent"] = opt.UserAgent
}
if opt.Cookie != nil {
r.Headers["Cookie"] = strings.Join(opt.Cookie, "; ")
}
if opt.OutputProbe != "" {
r.Probes = strings.Split(opt.OutputProbe, ",")
}
if opt.OutputFile != "" {
r.OutputFile, err = files.NewFile(opt.OutputFile, false, false, true)
if err != nil {
return nil, err
}
} else if opt.AutoFile {
r.OutputFile, err = files.NewFile("result.json", true, false, true)
if err != nil {
return nil, err
}
}
if opt.FuzzyFile != "" {
r.FuzzyFile, err = files.NewFile(opt.FuzzyFile, false, false, true)
if err != nil {
return nil, err
}
} else if opt.AutoFile {
r.FuzzyFile, err = files.NewFile("fuzzy.json", true, false, true)
if err != nil {
return nil, err
}
}
if opt.DumpFile != "" {
r.DumpFile, err = files.NewFile(opt.DumpFile, false, false, true)
if err != nil {
return nil, err
}
} else if opt.Dump {
r.DumpFile, err = files.NewFile("dump.json", true, false, true)
if err != nil {
return nil, err
}
}
if opt.ResumeFrom != "" {
r.StatFile, err = files.NewFile(opt.ResumeFrom, false, true, true)
} else {
r.StatFile, err = files.NewFile(taskfrom+".stat", false, true, true)
}
if err != nil {
return nil, err
}
r.StatFile.Mod = os.O_WRONLY | os.O_CREATE
err = r.StatFile.Init()
if err != nil {
return nil, err
}
return r, nil
}
func (opt *Option) Validate() bool {
if opt.Uppercase && opt.Lowercase {
logs.Log.Error("Cannot set -U and -L at the same time")
return false
}
if (opt.Offset != 0 || opt.Limit != 0) && opt.Depth > 0 {
// using offset/limit together with recursion would also cause confusion.
logs.Log.Error("--offset and --limit cannot be used with --depth at the same time")
return false
}
if opt.Depth > 0 && opt.ResumeFrom != "" {
// recursion conflicts with resume: the resumed word and rule are not taken from the command line
logs.Log.Error("--resume and --depth cannot be used at the same time")
return false
}
return true
}

699
internal/pool.go Normal file
View File

@ -0,0 +1,699 @@
package internal
import (
"context"
"fmt"
"github.com/antonmedv/expr"
"github.com/antonmedv/expr/vm"
"github.com/chainreactors/logs"
"github.com/chainreactors/spray/pkg"
"github.com/chainreactors/spray/pkg/ihttp"
"github.com/chainreactors/words"
"github.com/chainreactors/words/mask"
"github.com/chainreactors/words/rule"
"github.com/panjf2000/ants/v2"
"github.com/valyala/fasthttp"
"net/url"
"path"
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
)
var (
max = 2147483647
maxRedirect = 3
maxCrawl = 3
maxRecursion = 0
nilBaseline = &pkg.Baseline{}
)
func NewPool(ctx context.Context, config *pkg.Config) (*Pool, error) {
pctx, cancel := context.WithCancel(ctx)
pool := &Pool{
Config: config,
ctx: pctx,
cancel: cancel,
client: ihttp.NewClient(config.Thread, 2, config.ClientType),
baselines: make(map[int]*pkg.Baseline),
urls: make(map[string]int),
tempCh: make(chan *pkg.Baseline, config.Thread),
checkCh: make(chan int),
additionCh: make(chan *Unit, 100),
wg: sync.WaitGroup{},
initwg: sync.WaitGroup{},
reqCount: 1,
failedCount: 1,
}
p, _ := ants.NewPoolWithFunc(config.Thread, pool.Invoke)
pool.reqPool = p
// start an async result-processing goroutine so it does not interfere with the main request concurrency
go func() {
for bl := range pool.tempCh {
if bl.IsValid {
pool.addFuzzyBaseline(bl)
}
if _, ok := pool.Statistor.Counts[bl.Status]; ok {
pool.Statistor.Counts[bl.Status]++
} else {
pool.Statistor.Counts[bl.Status] = 1
}
var params map[string]interface{}
if pool.MatchExpr != nil || pool.FilterExpr != nil || pool.RecuExpr != nil {
params = map[string]interface{}{
"index": pool.index,
"random": pool.random,
"current": bl,
}
for _, status := range FuzzyStatus {
if bl, ok := pool.baselines[status]; ok {
params["bl"+strconv.Itoa(status)] = bl
} else {
params["bl"+strconv.Itoa(status)] = nilBaseline
}
}
}
var status bool
if pool.MatchExpr != nil {
if CompareWithExpr(pool.MatchExpr, params) {
status = true
}
} else {
if pool.BaseCompare(bl) {
status = true
}
}
if status {
pool.Statistor.FoundNumber++
if pool.FilterExpr != nil && CompareWithExpr(pool.FilterExpr, params) {
pool.Statistor.FilteredNumber++
bl.Reason = ErrCustomFilter.Error()
bl.IsValid = false
}
} else {
bl.IsValid = false
}
// recursion requires: bl is valid, mod is path-spray, and the current depth is below the recursion limit
if bl.IsValid {
pool.wg.Add(2)
pool.doCrawl(bl)
pool.doRule(bl)
if bl.RecuDepth < maxRecursion {
if CompareWithExpr(pool.RecuExpr, params) {
bl.Recu = true
}
}
}
pool.OutputCh <- bl
pool.wg.Done()
}
pool.analyzeDone = true
}()
return pool, nil
}
type Pool struct {
*pkg.Config
Statistor *pkg.Statistor
client *ihttp.Client
reqPool *ants.PoolWithFunc
bar *pkg.Bar
ctx context.Context
cancel context.CancelFunc
tempCh chan *pkg.Baseline // baselines waiting to be processed
checkCh chan int // dedicated check channel, avoids conflicts with redirect/crawl
additionCh chan *Unit
reqCount int
failedCount int
isFailed bool
failedBaselines []*pkg.Baseline
random *pkg.Baseline
index *pkg.Baseline
baselines map[int]*pkg.Baseline
urls map[string]int
analyzeDone bool
worder *words.Worder
locker sync.Mutex
wg sync.WaitGroup
initwg sync.WaitGroup // used for init only; to be replaced with a lock later
}
func (pool *Pool) Init() error {
// split into two steps to avoid thread-safety issues with the closure
pool.initwg.Add(2)
pool.reqPool.Invoke(newUnit("", InitIndexSource))
pool.reqPool.Invoke(newUnit(pkg.RandPath(), InitRandomSource))
pool.initwg.Wait()
if pool.index.ErrString != "" {
return fmt.Errorf(pool.index.String())
}
logs.Log.Info("[baseline.index] " + pool.index.Format([]string{"status", "length", "spend", "title", "frame", "redirect"}))
// verify basic reachability
if pool.random.ErrString != "" {
return fmt.Errorf(pool.random.String())
}
logs.Log.Info("[baseline.random] " + pool.random.Format([]string{"status", "length", "spend", "title", "frame", "redirect"}))
if pool.random.RedirectURL != "" {
// automatic protocol upgrade
// some sites redirect http to https; if the random path does this, upgrade the baseurl to https automatically
rurl, err := url.Parse(pool.random.RedirectURL)
if err == nil && rurl.Hostname() == pool.random.Url.Hostname() && pool.random.Url.Scheme == "http" && rurl.Scheme == "https" {
logs.Log.Infof("baseurl %s upgrade http to https", pool.BaseURL)
pool.BaseURL = strings.Replace(pool.BaseURL, "http", "https", 1)
}
}
return nil
}
func (pool *Pool) checkRedirect(redirectURL string) bool {
if pool.random.RedirectURL == "" {
// if the random baseline has no redirect URL, treat this response as valid
return true
}
if redirectURL == pool.random.RedirectURL {
// an identical RedirectURL is considered invalid data
return false
} else {
// a 3xx path whose RedirectURL differs from the baseline's counts as valid data
return true
}
}
func (pool *Pool) genReq(s string) (*ihttp.Request, error) {
if pool.Mod == pkg.HostSpray {
return ihttp.BuildHostRequest(pool.ClientType, pool.BaseURL, s)
} else if pool.Mod == pkg.PathSpray {
return ihttp.BuildPathRequest(pool.ClientType, pool.BaseURL, s)
}
return nil, fmt.Errorf("unknown mod")
}
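// Note (illustrative, not in the original source): in path-spray mode the word
// is joined onto BaseURL's path, while in host-spray mode the word becomes the
// Host header sent against a fixed BaseURL; see ihttp.BuildPathRequest and
// ihttp.BuildHostRequest later in this diff.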
func (pool *Pool) Run(ctx context.Context, offset, limit int) {
pool.worder.RunWithRules()
if pool.Active {
pool.wg.Add(1)
go pool.doActive()
}
if pool.Bak {
pool.wg.Add(1)
go pool.doBak()
}
if pool.Common {
pool.wg.Add(1)
go pool.doCommonFile()
}
closeCh := make(chan struct{})
var worderDone bool
wait := func() {
if !worderDone {
worderDone = true
pool.wg.Wait()
close(closeCh)
}
}
Loop:
for {
select {
case u, ok := <-pool.worder.C:
if !ok {
go wait()
continue
}
pool.Statistor.End++
if pool.reqCount < offset {
pool.reqCount++
continue
}
if pool.Statistor.End > limit {
go wait()
continue
}
pool.wg.Add(1)
pool.reqPool.Invoke(newUnit(u, WordSource))
case source := <-pool.checkCh:
pool.Statistor.CheckNumber++
if pool.Mod == pkg.HostSpray {
pool.reqPool.Invoke(newUnit(pkg.RandHost(), source))
} else if pool.Mod == pkg.PathSpray {
pool.reqPool.Invoke(newUnit(safePath(pool.BaseURL, pkg.RandPath()), source))
}
case unit, ok := <-pool.additionCh:
if !ok {
continue
}
pool.reqPool.Invoke(unit)
case <-closeCh:
break Loop
case <-ctx.Done():
break Loop
case <-pool.ctx.Done():
break Loop
}
}
pool.wg.Wait()
pool.Statistor.EndTime = time.Now().Unix()
pool.Close()
}
func (pool *Pool) Invoke(v interface{}) {
atomic.AddInt32(&pool.Statistor.ReqTotal, 1)
unit := v.(*Unit)
req, err := pool.genReq(unit.path)
if err != nil {
logs.Log.Error(err.Error())
return
}
req.SetHeaders(pool.Headers)
start := time.Now()
resp, reqerr := pool.client.Do(pool.ctx, req)
if pool.ClientType == ihttp.FAST {
defer fasthttp.ReleaseResponse(resp.FastResponse)
defer fasthttp.ReleaseRequest(req.FastRequest)
}
// compare and assorted error handling
var bl *pkg.Baseline
if reqerr != nil && reqerr != fasthttp.ErrBodyTooLarge {
pool.failedCount++
atomic.AddInt32(&pool.Statistor.FailedNumber, 1)
bl = &pkg.Baseline{UrlString: pool.BaseURL + unit.path, IsValid: false, ErrString: reqerr.Error(), Reason: ErrRequestFailed.Error()}
pool.failedBaselines = append(pool.failedBaselines, bl)
} else {
if unit.source <= 3 || unit.source == CrawlSource || unit.source == CommonFileSource {
bl = pkg.NewBaseline(req.URI(), req.Host(), resp)
} else {
if pool.MatchExpr != nil {
// if not a word source, or a custom match function is set, send everything into tempCh
bl = pkg.NewBaseline(req.URI(), req.Host(), resp)
} else if err = pool.PreCompare(resp); err == nil {
// pre-compare to skip useless data and reduce overhead
bl = pkg.NewBaseline(req.URI(), req.Host(), resp)
if err != ErrRedirect && bl.RedirectURL != "" {
if bl.RedirectURL != "" && !strings.HasPrefix(bl.RedirectURL, "http") {
bl.RedirectURL = "/" + strings.TrimLeft(bl.RedirectURL, "/")
bl.RedirectURL = pool.BaseURL + bl.RedirectURL
}
pool.wg.Add(1)
pool.doRedirect(bl, unit.depth)
}
} else {
bl = pkg.NewInvalidBaseline(req.URI(), req.Host(), resp, err.Error())
}
}
}
if ihttp.DefaultMaxBodySize != 0 && bl.BodyLength > ihttp.DefaultMaxBodySize {
bl.ExceedLength = true
}
bl.Source = unit.source
bl.ReqDepth = unit.depth
bl.Spended = time.Since(start).Milliseconds()
switch unit.source {
case InitRandomSource:
bl.Collect()
pool.locker.Lock()
pool.random = bl
pool.addFuzzyBaseline(bl)
pool.locker.Unlock()
pool.initwg.Done()
case InitIndexSource:
bl.Collect()
pool.locker.Lock()
pool.index = bl
pool.locker.Unlock()
pool.wg.Add(1)
pool.doCrawl(bl)
if bl.Status == 200 || (bl.Status/100) == 3 {
pool.OutputCh <- bl
}
pool.initwg.Done()
case CheckSource:
if bl.ErrString != "" {
logs.Log.Warnf("[check.error] %s maybe ip had banned, break (%d/%d), error: %s", pool.BaseURL, pool.failedCount, pool.BreakThreshold, bl.ErrString)
} else if i := pool.random.Compare(bl); i < 1 {
if i == 0 {
if pool.Fuzzy {
logs.Log.Warn("[check.fuzzy] maybe trigger risk control, " + bl.String())
}
} else {
pool.failedCount += 2
logs.Log.Warn("[check.failed] maybe trigger risk control, " + bl.String())
pool.failedBaselines = append(pool.failedBaselines, bl)
}
} else {
pool.resetFailed() // if subsequent requests succeed, reset the error counter
logs.Log.Debug("[check.pass] " + bl.String())
}
case WordSource:
// run the expensive deep comparison asynchronously
pool.tempCh <- bl
pool.reqCount++
if pool.reqCount%pool.CheckPeriod == 0 {
pool.reqCount++
pool.doCheck()
} else if pool.failedCount%pool.ErrPeriod == 0 {
pool.failedCount++
pool.doCheck()
}
pool.bar.Done()
case RedirectSource:
bl.FrontURL = unit.frontUrl
pool.tempCh <- bl
default:
pool.tempCh <- bl
}
}
func (pool *Pool) PreCompare(resp *ihttp.Response) error {
status := resp.StatusCode()
if pkg.IntsContains(WhiteStatus, status) {
// whitelisted status codes pass immediately
return nil
}
if pool.random != nil && pool.random.Status != 200 && pool.random.Status == status {
return ErrSameStatus
}
if pkg.IntsContains(BlackStatus, status) {
return ErrBadStatus
}
if pkg.IntsContains(WAFStatus, status) {
return ErrWaf
}
if !pool.checkRedirect(resp.GetHeader("Location")) {
return ErrRedirect
}
return nil
}
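// Illustrative walkthrough (not in the original source), using the defaults in
// runner.go and option.go: a 200 passes via WhiteStatus; a 404 is dropped by the
// default --black-status list; 493/418 are treated as WAF responses; and a 3xx
// whose Location matches the random baseline's redirect is discarded as ErrRedirect.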
func (pool *Pool) BaseCompare(bl *pkg.Baseline) bool {
if !bl.IsValid {
return false
}
var status = -1
base, ok := pool.baselines[bl.Status] // pick the baseline with the matching status code to compare against
if !ok {
if pool.random.Status == bl.Status {
// when other's status matches the random baseline, fall back to the random baseline
ok = true
base = pool.random
} else if pool.index.Status == bl.Status {
// when other's status matches the index baseline, fall back to the index baseline
ok = true
base = pool.index
}
}
if ok {
if status = base.Compare(bl); status == 1 {
bl.Reason = ErrCompareFailed.Error()
return false
}
}
bl.Collect()
//if !pool.IgnoreWaf {
// // in some cases waf signatures may be global; with --ignore-waf set, waf fingerprint detection is skipped
// for _, f := range bl.Frameworks {
// if f.HasTag("waf") {
// pool.Statistor.WafedNumber++
// bl.Reason = ErrWaf.Error()
// return false
// }
// }
//}
if ok && status == 0 && base.FuzzyCompare(bl) {
pool.Statistor.FuzzyNumber++
bl.Reason = ErrFuzzyCompareFailed.Error()
pool.putToFuzzy(bl)
return false
}
return true
}
func CompareWithExpr(exp *vm.Program, params map[string]interface{}) bool {
res, err := expr.Run(exp, params)
if err != nil {
logs.Log.Warn(err.Error())
}
return res == true
}
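// Usage sketch (illustrative, not part of the original source): --match and
// --filter expressions are compiled once in option.go and evaluated here per
// response, against the env built in NewPool's analysis goroutine
// (bl is a *pkg.Baseline):
//
//	prog, err := expr.Compile(`current.Status == 200 && current.BodyLength > 100`)
//	if err == nil {
//		matched := CompareWithExpr(prog, map[string]interface{}{"current": bl})
//		_ = matched // true when the baseline satisfies the expression
//	}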
func (pool *Pool) doRedirect(bl *pkg.Baseline, depth int) {
defer pool.wg.Done()
if depth >= maxRedirect {
return
}
if uu, err := url.Parse(bl.RedirectURL); err == nil && uu.Hostname() == pool.index.Url.Hostname() {
pool.wg.Add(1)
go pool.addAddition(&Unit{
path: uu.Path,
source: RedirectSource,
frontUrl: bl.UrlString,
depth: depth + 1,
})
}
}
func (pool *Pool) doCrawl(bl *pkg.Baseline) {
if !pool.Crawl {
pool.wg.Done()
return
}
bl.CollectURL()
if bl.URLs == nil {
pool.wg.Done()
return
}
go func() {
defer pool.wg.Done()
for _, u := range bl.URLs {
if strings.HasPrefix(u, "//") {
parsed, err := url.Parse(u)
if err != nil {
continue
}
if parsed.Host != bl.Url.Host {
continue
}
u = parsed.Path
} else if strings.HasPrefix(u, "/") {
// 绝对目录拼接
// 不需要进行处理, 用来跳过下面的判断
} else if strings.HasPrefix(u, "./") {
// "./"相对目录拼接
if bl.Dir {
u = pkg.URLJoin(bl.Url.Path, u[2:])
} else {
u = pkg.URLJoin(path.Dir(bl.Url.Path), u[2:])
}
} else if strings.HasPrefix(u, "../") {
u = path.Join(path.Dir(bl.Url.Path), u)
} else if !strings.HasPrefix(u, "http") {
// relative path joining
if bl.Dir {
u = pkg.URLJoin(bl.Url.Path, u)
} else {
u = pkg.URLJoin(path.Dir(bl.Url.Path), u)
}
} else {
parsed, err := url.Parse(u)
if err != nil {
continue
}
if parsed.Host != bl.Url.Host {
continue
}
// keep only the path portion of same-host absolute urls (mirrors the "//" branch above)
u = parsed.Path
}
// dedupe via the urls map under the lock (the original read the map unlocked); only new urls are crawled further
pool.locker.Lock()
seen := pool.urls[u] > 0
pool.urls[u]++
pool.locker.Unlock()
if !seen && bl.ReqDepth < maxCrawl {
pool.wg.Add(1)
pool.addAddition(&Unit{
path: u[1:],
source: CrawlSource,
depth: bl.ReqDepth + 1,
})
}
}
}()
}
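// Illustrative mapping (not in the original source) for a baseline at
// /app/index.html on example.com, following the branches above:
//
//	//example.com/x    -> /x           (protocol-relative, same host only)
//	/static/a.js       -> /static/a.js (absolute path kept as-is)
//	./b.css            -> /app/b.css   (relative to the current directory)
//	../c.js            -> /c.js
//	d.png              -> /app/d.png
//	http://other.com/e -> skipped (different host)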
func (pool *Pool) doRule(bl *pkg.Baseline) {
if pool.AppendRule == nil {
pool.wg.Done()
return
}
if bl.Source == RuleSource || bl.Dir {
pool.wg.Done()
return
}
go func() {
defer pool.wg.Done()
for u := range rule.RunAsStream(pool.AppendRule.Expressions, path.Base(bl.Path)) {
pool.wg.Add(1)
pool.addAddition(&Unit{
path: path.Join(path.Dir(bl.Path), u),
source: RuleSource,
})
}
}()
}
func (pool *Pool) doActive() {
defer pool.wg.Done()
for _, u := range pkg.ActivePath {
pool.wg.Add(1)
pool.addAddition(&Unit{
path: safePath(pool.BaseURL, u),
source: ActiveSource,
})
}
}
func (pool *Pool) doBak() {
defer pool.wg.Done()
u, err := url.Parse(pool.BaseURL)
if err != nil {
return
}
worder, err := words.NewWorderWithDsl("{?0}.{@bak_ext}", [][]string{pkg.BakGenerator(u.Host)}, nil)
if err != nil {
return
}
worder.Run()
for w := range worder.C {
pool.wg.Add(1)
pool.addAddition(&Unit{
path: safePath(pool.BaseURL, w),
source: BakSource,
})
}
worder, err = words.NewWorderWithDsl("{@bak_name}.{@bak_ext}", nil, nil)
if err != nil {
return
}
worder.Run()
for w := range worder.C {
pool.wg.Add(1)
pool.addAddition(&Unit{
path: safePath(pool.BaseURL, w),
source: BakSource,
})
}
}
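// Illustrative expansion (not in the original source): for a BaseURL of
// http://www.example.com, pkg.BakGenerator derives host-based stems such as
// "example" and "www.example.com", so "{?0}.{@bak_ext}" produces candidates
// like example.zip, while the second worder pairs generic backup names with
// the same extensions, e.g. backup.rar; the exact lists live in the embedded
// mask.SpecialWords dictionaries.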
func (pool *Pool) doCommonFile() {
defer pool.wg.Done()
for _, u := range mask.SpecialWords["common_file"] {
pool.wg.Add(1)
pool.addAddition(&Unit{
path: safePath(pool.BaseURL, u),
source: CommonFileSource,
})
}
}
func (pool *Pool) doCheck() {
if pool.failedCount > pool.BreakThreshold {
// when the error count exceeds the threshold, end the task
pool.recover()
pool.cancel()
pool.isFailed = true
return
}
// both modes send the same signal; the check unit itself is built in Run
pool.checkCh <- CheckSource
}
func (pool *Pool) addAddition(u *Unit) {
pool.additionCh <- u
}
func (pool *Pool) addFuzzyBaseline(bl *pkg.Baseline) {
if _, ok := pool.baselines[bl.Status]; !ok && pkg.IntsContains(FuzzyStatus, bl.Status) {
bl.Collect()
pool.wg.Add(1)
pool.doCrawl(bl)
pool.baselines[bl.Status] = bl
logs.Log.Infof("[baseline.%dinit] %s", bl.Status, bl.Format([]string{"status", "length", "spend", "title", "frame", "redirect"}))
}
}
func (pool *Pool) putToInvalid(bl *pkg.Baseline, reason string) {
bl.IsValid = false
pool.OutputCh <- bl
}
func (pool *Pool) putToFuzzy(bl *pkg.Baseline) {
bl.IsFuzzy = true
pool.FuzzyCh <- bl
}
func (pool *Pool) resetFailed() {
pool.failedCount = 1
pool.failedBaselines = nil
}
func (pool *Pool) recover() {
logs.Log.Errorf("%s ,failed request exceeds the threshold , task will exit. Breakpoint %d", pool.BaseURL, pool.reqCount)
for i, bl := range pool.failedBaselines {
logs.Log.Errorf("[failed.%d] %s", i, bl.String())
}
}
func (pool *Pool) Close() {
// close tempCh first, then wait until the analysis goroutine has drained it
// (the original waited while analyzeDone was true, which never blocked)
close(pool.tempCh)
for !pool.analyzeDone {
time.Sleep(100 * time.Millisecond)
}
close(pool.additionCh)
pool.bar.Close()
}

422
internal/runner.go Normal file
View File

@ -0,0 +1,422 @@
package internal
import (
"context"
"fmt"
"github.com/antonmedv/expr/vm"
"github.com/chainreactors/files"
"github.com/chainreactors/logs"
"github.com/chainreactors/spray/pkg"
"github.com/chainreactors/spray/pkg/ihttp"
"github.com/chainreactors/words"
"github.com/chainreactors/words/rule"
"github.com/gosuri/uiprogress"
"github.com/panjf2000/ants/v2"
"sync"
"time"
)
var (
WhiteStatus = []int{200}
BlackStatus = []int{400, 410}
FuzzyStatus = []int{403, 404, 500, 501, 502, 503}
WAFStatus = []int{493, 418}
)
var (
dictCache = make(map[string][]string)
wordlistCache = make(map[string][]string)
ruleCache = make(map[string][]rule.Expression)
)
type Runner struct {
taskCh chan *Task
poolwg sync.WaitGroup
bar *uiprogress.Bar
finished int
Tasks []*Task
URLList []string
Wordlist []string
Rules *rule.Program
AppendRules *rule.Program
Headers map[string]string
Fns []func(string) string
FilterExpr *vm.Program
MatchExpr *vm.Program
RecursiveExpr *vm.Program
RecuDepth int
Threads int
PoolSize int
Pools *ants.PoolWithFunc
PoolName map[string]bool
Timeout int
Mod string
Probes []string
OutputCh chan *pkg.Baseline
FuzzyCh chan *pkg.Baseline
Fuzzy bool
OutputFile *files.File
FuzzyFile *files.File
DumpFile *files.File
StatFile *files.File
Progress *uiprogress.Progress
Offset int
Limit int
Total int
Deadline int
CheckPeriod int
ErrPeriod int
BreakThreshold int
Color bool
CheckOnly bool
Force bool
IgnoreWaf bool
Crawl bool
Active bool
Bak bool
Common bool
}
func (r *Runner) PrepareConfig() *pkg.Config {
config := &pkg.Config{
Thread: r.Threads,
Timeout: r.Timeout,
Headers: r.Headers,
Mod: pkg.ModMap[r.Mod],
OutputCh: r.OutputCh,
FuzzyCh: r.FuzzyCh,
Fuzzy: r.Fuzzy,
CheckPeriod: r.CheckPeriod,
ErrPeriod: r.ErrPeriod,
BreakThreshold: r.BreakThreshold,
MatchExpr: r.MatchExpr,
FilterExpr: r.FilterExpr,
RecuExpr: r.RecursiveExpr,
AppendRule: r.AppendRules,
IgnoreWaf: r.IgnoreWaf,
Crawl: r.Crawl,
Active: r.Active,
Bak: r.Bak,
Common: r.Common,
}
if config.Mod == pkg.PathSpray {
config.ClientType = ihttp.FAST
} else if config.Mod == pkg.HostSpray {
config.ClientType = ihttp.STANDARD
}
return config
}
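// Note (illustrative, not in the original source): the client type follows the
// mode because path spraying pins every request to one base URL, where the
// lighter fasthttp client (FAST) is used, while host spraying rewrites the Host
// header per request and falls back to the standard net/http client (STANDARD).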
func (r *Runner) Prepare(ctx context.Context) error {
var err error
if r.CheckOnly {
// check-only mode, similar to httpx
r.Pools, err = ants.NewPoolWithFunc(1, func(i interface{}) {
config := r.PrepareConfig()
pool, err := NewCheckPool(ctx, config)
if err != nil {
logs.Log.Error(err.Error())
pool.cancel()
r.poolwg.Done()
return
}
pool.worder = words.NewWorder(r.URLList)
pool.worder.Fns = r.Fns
pool.bar = pkg.NewBar("check", r.Total-r.Offset, r.Progress)
pool.Run(ctx, r.Offset, r.Total)
r.poolwg.Done()
})
} else {
go func() {
for _, t := range r.Tasks {
r.taskCh <- t
}
close(r.taskCh)
}()
if len(r.Tasks) > 0 {
r.bar = r.Progress.AddBar(len(r.Tasks))
r.bar.PrependCompleted()
r.bar.PrependFunc(func(b *uiprogress.Bar) string {
return fmt.Sprintf("total progressive: %d/%d ", r.finished, len(r.Tasks))
})
r.bar.AppendElapsed()
}
r.Pools, err = ants.NewPoolWithFunc(r.PoolSize, func(i interface{}) {
t := i.(*Task)
if t.origin != nil && t.origin.End == t.origin.Total {
r.StatFile.SafeWrite(t.origin.Json())
r.Done()
return
}
config := r.PrepareConfig()
config.BaseURL = t.baseUrl
pool, err := NewPool(ctx, config)
if err != nil {
logs.Log.Error(err.Error())
pool.cancel()
r.Done()
return
}
if t.origin != nil && len(r.Wordlist) == 0 {
// for tasks restored from a resume file, word, dict and rule are set automatically, at lower priority than the command line
pool.Statistor = pkg.NewStatistorFromStat(t.origin)
wl, err := loadWordlist(t.origin.Word, t.origin.Dictionaries)
if err != nil {
logs.Log.Error(err.Error())
r.Done()
return
}
pool.worder = words.NewWorder(wl)
pool.worder.Fns = r.Fns
rules, err := loadRuleWithFiles(t.origin.RuleFiles, t.origin.RuleFilter)
if err != nil {
logs.Log.Error(err.Error())
r.Done()
return
}
pool.worder.Rules = rules
if len(rules) > 0 {
pool.Statistor.Total = len(rules) * len(wl)
} else {
pool.Statistor.Total = len(wl)
}
} else {
pool.Statistor = pkg.NewStatistor(t.baseUrl)
pool.worder = words.NewWorder(r.Wordlist)
pool.worder.Fns = r.Fns
pool.worder.Rules = r.Rules.Expressions
}
var limit int
if pool.Statistor.Total > r.Limit && r.Limit != 0 {
limit = r.Limit
} else {
limit = pool.Statistor.Total
}
pool.bar = pkg.NewBar(config.BaseURL, limit-pool.Statistor.Offset, r.Progress)
err = pool.Init()
if err != nil {
logs.Log.Error(err.Error())
if !r.Force {
// without force enabled, a failed init shuts down the pool
pool.cancel()
r.Done()
return
}
}
pool.Run(ctx, pool.Statistor.Offset, limit)
if pool.isFailed && len(pool.failedBaselines) > 0 {
// when exiting due to accumulated errors, End points at the first error, so resume does not skip many targets
pool.Statistor.End = pool.failedBaselines[0].Number
}
if r.Color {
logs.Log.Important(pool.Statistor.ColorString())
logs.Log.Important(pool.Statistor.ColorDetail())
} else {
logs.Log.Important(pool.Statistor.String())
logs.Log.Important(pool.Statistor.Detail())
}
if r.StatFile != nil {
r.StatFile.SafeWrite(pool.Statistor.Json())
r.StatFile.SafeSync()
}
r.Done()
})
}
if err != nil {
return err
}
r.Outputting()
return nil
}
func (r *Runner) AddPool(task *Task) {
if _, ok := r.PoolName[task.baseUrl]; ok {
logs.Log.Importantf("already added pool, skip %s", task.baseUrl)
return
}
task.depth++
r.poolwg.Add(1)
r.Pools.Invoke(task)
}
func (r *Runner) Run(ctx context.Context) {
Loop:
for {
select {
case <-ctx.Done():
for t := range r.taskCh {
stat := pkg.NewStatistor(t.baseUrl)
r.StatFile.SafeWrite(stat.Json())
}
logs.Log.Importantf("save all stat to %s", r.StatFile.Filename)
break Loop
case t, ok := <-r.taskCh:
if !ok {
break Loop
}
r.AddPool(t)
}
}
r.poolwg.Wait()
//time.Sleep(100 * time.Millisecond) // wait 100ms for all data to be processed
for {
if len(r.OutputCh) == 0 {
close(r.OutputCh)
break
}
}
for {
if len(r.FuzzyCh) == 0 {
close(r.FuzzyCh)
break
}
}
time.Sleep(100 * time.Millisecond) // wait 100ms for remaining output to flush
}
func (r *Runner) RunWithCheck(ctx context.Context) {
stopCh := make(chan struct{})
r.poolwg.Add(1)
err := r.Pools.Invoke(struct{}{})
if err != nil {
return
}
go func() {
r.poolwg.Wait()
stopCh <- struct{}{}
}()
Loop:
for {
select {
case <-ctx.Done():
logs.Log.Error("cancel with deadline")
break Loop
case <-stopCh:
break Loop
}
}
for {
if len(r.OutputCh) == 0 {
close(r.OutputCh)
break
}
}
time.Sleep(100 * time.Millisecond) // wait 100ms for remaining output to flush
}
func (r *Runner) Done() {
r.bar.Incr()
r.finished++
r.poolwg.Done()
}
func (r *Runner) Outputting() {
debugPrint := func(bl *pkg.Baseline) {
if r.Color {
logs.Log.Debug(bl.ColorString())
} else {
logs.Log.Debug(bl.String())
}
}
go func() {
var saveFunc func(*pkg.Baseline)
if r.OutputFile != nil {
saveFunc = func(bl *pkg.Baseline) {
r.OutputFile.SafeWrite(bl.Jsonify() + "\n")
r.OutputFile.SafeSync()
}
} else {
if len(r.Probes) > 0 {
if r.Color {
saveFunc = func(bl *pkg.Baseline) {
logs.Log.Console(logs.GreenBold("[+] " + bl.Format(r.Probes) + "\n"))
}
} else {
saveFunc = func(bl *pkg.Baseline) {
logs.Log.Console("[+] " + bl.Format(r.Probes) + "\n")
}
}
} else {
if r.Color {
saveFunc = func(bl *pkg.Baseline) {
logs.Log.Console(logs.GreenBold("[+] " + bl.ColorString() + "\n"))
}
} else {
saveFunc = func(bl *pkg.Baseline) {
logs.Log.Console("[+] " + bl.String() + "\n")
}
}
}
}
for {
select {
case bl, ok := <-r.OutputCh:
if !ok {
return
}
if r.DumpFile != nil {
r.DumpFile.SafeWrite(bl.Jsonify() + "\n")
r.DumpFile.SafeSync()
}
if bl.IsValid {
saveFunc(bl)
if bl.Recu {
r.AddPool(&Task{baseUrl: bl.UrlString, depth: bl.RecuDepth + 1})
}
} else {
debugPrint(bl)
}
}
}
}()
go func() {
var fuzzySaveFunc func(*pkg.Baseline)
if r.FuzzyFile != nil {
fuzzySaveFunc = func(bl *pkg.Baseline) {
r.FuzzyFile.SafeWrite(bl.Jsonify() + "\n")
}
} else {
if r.Color {
fuzzySaveFunc = func(bl *pkg.Baseline) {
logs.Log.Console(logs.GreenBold("[fuzzy] " + bl.ColorString() + "\n"))
}
} else {
fuzzySaveFunc = func(bl *pkg.Baseline) {
logs.Log.Console("[fuzzy] " + bl.String() + "\n")
}
}
}
for {
select {
case bl, ok := <-r.FuzzyCh:
if !ok {
return
}
if r.Fuzzy {
fuzzySaveFunc(bl)
} else {
debugPrint(bl)
}
}
}
}()
}

77
internal/types.go Normal file
View File

@ -0,0 +1,77 @@
package internal
import (
"github.com/chainreactors/spray/pkg"
"github.com/chainreactors/words/rule"
)
type ErrorType uint
const (
ErrBadStatus ErrorType = iota
ErrSameStatus
ErrRequestFailed
ErrWaf
ErrRedirect
ErrCompareFailed
ErrFuzzyCompareFailed
ErrCustomCompareFailed
ErrCustomFilter
)
func (e ErrorType) Error() string {
switch e {
case ErrBadStatus:
return "blacklist status"
case ErrSameStatus:
return "same status with random baseline"
case ErrRequestFailed:
return "request failed"
case ErrWaf:
return "maybe banned by waf"
case ErrRedirect:
return "duplicate redirect url"
case ErrCompareFailed:
return "compare failed"
case ErrFuzzyCompareFailed:
return "fuzzy compare failed"
case ErrCustomCompareFailed:
return "custom compare failed"
case ErrCustomFilter:
return "custom filtered"
default:
return "unknown error"
}
}
const (
CheckSource = iota + 1
InitRandomSource
InitIndexSource
RedirectSource
CrawlSource
ActiveSource
WordSource
WafSource
RuleSource
BakSource
CommonFileSource
)
func newUnit(path string, source int) *Unit {
return &Unit{path: path, source: source}
}
type Unit struct {
path string
source int
frontUrl string
depth int // redirect depth
}
type Task struct {
baseUrl string
depth int
rule []rule.Expression
origin *pkg.Statistor
}

120
internal/utils.go Normal file
View File

@ -0,0 +1,120 @@
package internal
import (
"bytes"
"github.com/chainreactors/spray/pkg"
"github.com/chainreactors/words/mask"
"github.com/chainreactors/words/rule"
"io/ioutil"
"strings"
)
func parseExtension(s string) string {
if i := strings.Index(s, "."); i != -1 {
return s[i+1:]
}
return ""
}
func loadFileToSlice(filename string) ([]string, error) {
var ss []string
content, err := ioutil.ReadFile(filename)
if err != nil {
return nil, err
}
ss = strings.Split(strings.TrimSpace(string(content)), "\n")
// normalize CRLF/LF differences between windows and linux
for i, word := range ss {
ss[i] = strings.TrimSpace(word)
}
return ss, nil
}
func loadFileAndCombine(filename []string) (string, error) {
var bs bytes.Buffer
for _, f := range filename {
if data, ok := pkg.Rules[f]; ok {
bs.WriteString(strings.TrimSpace(data))
bs.WriteString("\n")
} else {
content, err := ioutil.ReadFile(f)
if err != nil {
return "", err
}
bs.Write(bytes.TrimSpace(content))
bs.WriteString("\n")
}
}
return bs.String(), nil
}
func loadFileWithCache(filename string) ([]string, error) {
if dict, ok := dictCache[filename]; ok {
return dict, nil
}
dict, err := loadFileToSlice(filename)
if err != nil {
return nil, err
}
dictCache[filename] = dict
return dict, nil
}
func loadDictionaries(filenames []string) ([][]string, error) {
dicts := make([][]string, len(filenames))
for i, name := range filenames {
dict, err := loadFileWithCache(name)
if err != nil {
return nil, err
}
dicts[i] = dict
}
return dicts, nil
}
func loadWordlist(word string, dictNames []string) ([]string, error) {
// the cache key must match between read and write, including the dict names
key := word + strings.Join(dictNames, ",")
if wl, ok := wordlistCache[key]; ok {
return wl, nil
}
dicts, err := loadDictionaries(dictNames)
if err != nil {
return nil, err
}
wl, err := mask.Run(word, dicts, nil)
if err != nil {
return nil, err
}
wordlistCache[key] = wl
return wl, nil
}
func loadRuleWithFiles(ruleFiles []string, filter string) ([]rule.Expression, error) {
key := strings.Join(ruleFiles, ",")
if rules, ok := ruleCache[key]; ok {
return rules, nil
}
var buf bytes.Buffer
for _, filename := range ruleFiles {
content, err := ioutil.ReadFile(filename)
if err != nil {
return nil, err
}
buf.Write(content)
buf.WriteString("\n")
}
// populate the cache so the key computed above is actually used
ruleCache[key] = rule.Compile(buf.String(), filter).Expressions
return ruleCache[key], nil
}
func safePath(url, path string) string {
urlSlash := strings.HasSuffix(url, "/")
pathSlash := strings.HasPrefix(path, "/")
if !urlSlash && !pathSlash {
return "/" + path
} else if urlSlash && pathSlash {
return path[1:]
} else {
return path
}
}
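// Illustrative cases (not in the original source); safePath only normalizes the
// slash between the two parts, the caller still concatenates them:
//
//	safePath("http://a.com", "admin")   -> "/admin" (slash added)
//	safePath("http://a.com/", "/admin") -> "admin"  (duplicate slash stripped)
//	safePath("http://a.com/", "admin")  -> "admin"  (trailing slash already joins)
//	safePath("http://a.com", "/admin")  -> "/admin"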

View File

@ -2,60 +2,44 @@ package pkg
import (
"fmt"
"github.com/vbauerster/mpb/v8"
"github.com/vbauerster/mpb/v8/decor"
"time"
"github.com/chainreactors/go-metrics"
"github.com/gosuri/uiprogress"
)
func NewBar(u string, total int, stat *Statistor, p *mpb.Progress) *Bar {
if p == nil {
return &Bar{
url: u,
}
}
bar := p.AddBar(int64(total),
mpb.BarFillerClearOnComplete(),
mpb.BarRemoveOnComplete(),
mpb.PrependDecorators(
decor.Name(u, decor.WC{W: len(u) + 1, C: decor.DindentRight}), // decorator parameters adjusted here
decor.NewAverageSpeed(0, "% .0f/s ", time.Now()),
decor.Counters(0, "%d/%d"),
decor.Any(func(s decor.Statistics) string {
return fmt.Sprintf(" found: %d", stat.FoundNumber)
}),
),
mpb.AppendDecorators(
decor.Percentage(),
decor.Elapsed(decor.ET_STYLE_GO, decor.WC{W: 4}),
),
)
return &Bar{
func NewBar(u string, total int, progress *uiprogress.Progress) *Bar {
bar := &Bar{
Bar: progress.AddBar(total),
url: u,
bar: bar,
//m: m,
m: metrics.NewMeter(),
}
metrics.Register(bar.url, bar.m)
bar.PrependCompleted()
bar.PrependFunc(func(b *uiprogress.Bar) string {
return fmt.Sprintf("%f/s %d/%d", bar.m.Rate1(), bar.m.Count(), bar.Bar.Total)
})
bar.PrependFunc(func(b *uiprogress.Bar) string {
return u
})
bar.AppendElapsed()
return bar
}
type Bar struct {
url string
bar *mpb.Bar
//m metrics.Meter
url string
total int
close bool
*uiprogress.Bar
m metrics.Meter
}
func (bar *Bar) Done() {
//bar.m.Mark(1)
if bar.bar == nil {
return
}
bar.bar.Increment()
bar.m.Mark(1)
bar.Incr()
}
func (bar *Bar) Close() {
//metrics.Unregister(bar.url)
// mark the progress bar as finished
if bar.bar == nil {
return
}
bar.bar.Abort(true)
metrics.Unregister(bar.url)
bar.close = true
}

452
pkg/baseline.go Normal file
View File

@ -0,0 +1,452 @@
package pkg
import (
"bytes"
"encoding/json"
"github.com/chainreactors/gogo/v2/pkg/fingers"
"github.com/chainreactors/gogo/v2/pkg/utils"
"github.com/chainreactors/logs"
"github.com/chainreactors/parsers"
"github.com/chainreactors/spray/pkg/ihttp"
"net/url"
"strconv"
"strings"
)
func GetSourceName(s int) string {
switch s {
case 1:
return "check"
case 2:
return "random"
case 3:
return "index"
case 4:
return "redirect"
case 5:
return "crawl"
case 6:
return "active"
case 7:
return "word"
case 8:
return "waf"
case 9:
return "rule"
case 10:
return "bak"
case 11:
return "common"
default:
return "unknown"
}
}
func NewBaseline(u, host string, resp *ihttp.Response) *Baseline {
bl := &Baseline{
UrlString: u,
Status: resp.StatusCode(),
IsValid: true,
}
uu, err := url.Parse(u)
if err == nil {
bl.Path = uu.Path
bl.Url = uu
}
bl.Dir = bl.IsDir()
if resp.ClientType == ihttp.STANDARD {
bl.Host = host
}
header := resp.Header()
bl.Header = make([]byte, len(header))
copy(bl.Header, header)
bl.HeaderLength = len(bl.Header)
body := resp.Body()
bl.Body = make([]byte, len(body))
copy(bl.Body, body)
bl.BodyLength = resp.ContentLength()
if bl.BodyLength == -1 {
bl.BodyLength = len(bl.Body)
}
if t, ok := ContentTypeMap[resp.ContentType()]; ok {
bl.ContentType = t
bl.Title = t + " data"
} else {
bl.ContentType = "other"
}
bl.Raw = append(bl.Header, bl.Body...)
bl.RedirectURL = resp.GetHeader("Location")
return bl
}
func NewInvalidBaseline(u, host string, resp *ihttp.Response, reason string) *Baseline {
bl := &Baseline{
UrlString: u,
Status: resp.StatusCode(),
IsValid: false,
Reason: reason,
}
uu, err := url.Parse(u)
if err == nil {
bl.Path = uu.Path
bl.Url = uu
}
bl.Dir = bl.IsDir()
if resp.ClientType == ihttp.STANDARD {
bl.Host = host
}
// read the body even for invalid data, otherwise keep-alive does not take effect
resp.Body()
bl.BodyLength = resp.ContentLength()
bl.RedirectURL = string(resp.GetHeader("Location"))
return bl
}
type Baseline struct {
Number int `json:"number"`
Url *url.URL `json:"-"`
UrlString string `json:"url"`
Path string `json:"path"`
Dir bool `json:"isdir"`
Host string `json:"host"`
Body []byte `json:"-"`
BodyLength int `json:"body_length"`
ExceedLength bool `json:"-"`
Header []byte `json:"-"`
Raw []byte `json:"-"`
HeaderLength int `json:"header_length"`
RedirectURL string `json:"redirect_url,omitempty"`
FrontURL string `json:"front_url,omitempty"`
Status int `json:"status"`
Spended int64 `json:"spend"` // 耗时, 毫秒
ContentType string `json:"content_type"`
Title string `json:"title"`
Frameworks Frameworks `json:"frameworks"`
Extracteds Extracteds `json:"extracts"`
ErrString string `json:"error"`
Reason string `json:"reason"`
IsValid bool `json:"valid"`
IsFuzzy bool `json:"fuzzy"`
Source int `json:"source"`
ReqDepth int `json:"depth"`
Distance uint8 `json:"distance"`
Recu bool `json:"-"`
RecuDepth int `json:"-"`
URLs []string `json:"-"`
*parsers.Hashes `json:"hashes"`
}
func (bl *Baseline) IsDir() bool {
return strings.HasSuffix(bl.Path, "/")
}
// Collect gathers in-depth information from the response
func (bl *Baseline) Collect() {
bl.Frameworks = FingerDetect(string(bl.Raw))
if len(bl.Body) > 0 {
if bl.ContentType == "html" {
bl.Title = utils.AsciiEncode(parsers.MatchTitle(string(bl.Body)))
} else if bl.ContentType == "ico" {
if name, ok := Md5Fingers[parsers.Md5Hash(bl.Body)]; ok {
bl.Frameworks = append(bl.Frameworks, &parsers.Framework{Name: name})
} else if name, ok := Mmh3Fingers[parsers.Mmh3Hash32(bl.Body)]; ok {
bl.Frameworks = append(bl.Frameworks, &parsers.Framework{Name: name})
}
}
}
bl.Hashes = parsers.NewHashes(bl.Raw)
bl.Extracteds = Extractors.Extract(string(bl.Raw))
}
func (bl *Baseline) CollectURL() {
if len(bl.Body) == 0 {
return
}
for _, reg := range JSRegexps {
urls := reg.FindAllStringSubmatch(string(bl.Body), -1)
for _, u := range urls {
if !filterJs(u[1]) {
bl.URLs = append(bl.URLs, formatURL(u[1]))
}
}
}
for _, reg := range URLRegexps {
urls := reg.FindAllStringSubmatch(string(bl.Body), -1)
for _, u := range urls {
if !filterUrl(u[1]) {
bl.URLs = append(bl.URLs, formatURL(u[1]))
}
}
}
if bl.URLs != nil {
bl.Extracteds = append(bl.Extracteds, &fingers.Extracted{
Name: "crawl",
ExtractResult: RemoveDuplication(bl.URLs),
})
}
}
// Compare reports how similar two baselines are:
// 1 if fully equal, 0 if possibly equal, -1 if not equal
func (bl *Baseline) Compare(other *Baseline) int {
if other.RedirectURL != "" && bl.RedirectURL == other.RedirectURL {
// a non-empty redirect url identical to the baseline's means the same page
return 1
}
if bl.BodyLength == other.BodyLength {
// equal body length: inspect the content to decide
if bytes.Equal(bl.Body, other.Body) {
// equal length and equal content: fully identical
return 1
} else {
// equal length but different content: possibly a random value such as a csrf token
return 0
}
} else if i := bl.BodyLength - other.BodyLength; (i < 16 && i > 0) || (i > -16 && i < 0) {
// a length difference of fewer than 16 bytes may come from a random value such as a csrf token; fuzzy-compare it
return 0
} else {
// a length difference of 16 bytes or more very likely means a real difference
if strings.Contains(string(other.Body), other.Path) {
// the body echoes the requested path itself, which may skew the comparison
return 0
} else {
// otherwise treat it as a different page
return -1
}
}
}
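// Worked example (illustrative, not in the original source): two pages whose
// bodies differ only by a csrf token have equal lengths but different bytes, so
// Compare returns 0 and FuzzyCompare below decides via simhash; a page whose
// length differs by 16 bytes or more and does not echo its own path returns -1
// and is kept as a distinct result.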
var Distance uint8 = 5 // smaller is more similar; 0 means fully identical.
func (bl *Baseline) FuzzyCompare(other *Baseline) bool {
// RawSimhash is used here to guarantee enough input; a very short body would skew the simhash badly
if other.Distance = parsers.SimhashCompare(other.RawSimhash, bl.RawSimhash); other.Distance < Distance {
return true
}
return false
}
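// Illustrative note (not in the original source): with the package default
// Distance of 5, a simhash delta of 0-4 bits marks the response as a fuzzy
// duplicate of its baseline, while 5 or more bits keeps it as a distinct hit.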
func (bl *Baseline) Get(key string) string {
switch key {
case "url":
return bl.UrlString
case "host":
return bl.Host
case "content_type", "type":
return bl.ContentType
case "title":
return bl.Title
case "redirect":
return bl.RedirectURL
case "md5":
if bl.Hashes != nil {
return bl.Hashes.BodyMd5
} else {
return ""
}
case "simhash":
if bl.Hashes != nil {
return bl.Hashes.BodySimhash
} else {
return ""
}
case "mmh3":
if bl.Hashes != nil {
return bl.Hashes.BodySimhash
} else {
return ""
}
case "stat", "status":
return strconv.Itoa(bl.Status)
case "spend":
return strconv.Itoa(int(bl.Spended)) + "ms"
case "length":
return strconv.Itoa(bl.BodyLength)
case "sim", "distance":
return "sim:" + strconv.Itoa(int(bl.Distance))
case "source":
return GetSourceName(bl.Source)
case "extract":
return bl.Extracteds.String()
case "frame", "framework":
return bl.Frameworks.String()
case "full":
return bl.String()
default:
return ""
}
}
func (bl *Baseline) Additional(key string) string {
if key == "frame" || key == "extract" {
return bl.Get(key)
} else if v := bl.Get(key); v != "" {
return " [" + v + "]"
} else {
return ""
}
}
func (bl *Baseline) Format(probes []string) string {
var line strings.Builder
if bl.FrontURL != "" {
line.WriteString("\t")
line.WriteString(bl.FrontURL)
line.WriteString(" -> ")
}
line.WriteString(bl.UrlString)
if bl.Host != "" {
line.WriteString(" (" + bl.Host + ")")
}
if bl.Reason != "" {
line.WriteString(" ,")
line.WriteString(bl.Reason)
}
if bl.ErrString != "" {
line.WriteString(" ,err: ")
line.WriteString(bl.ErrString)
return line.String()
}
for _, p := range probes {
line.WriteString(" ")
line.WriteString(bl.Additional(p))
}
return line.String()
}
func (bl *Baseline) ColorString() string {
var line strings.Builder
line.WriteString(logs.GreenLine("[" + GetSourceName(bl.Source) + "]"))
if bl.FrontURL != "" {
line.WriteString(logs.CyanLine(bl.FrontURL))
line.WriteString(" --> ")
}
line.WriteString(" ")
line.WriteString(logs.GreenLine(bl.UrlString))
if bl.Host != "" {
line.WriteString(" (" + bl.Host + ")")
}
if bl.Reason != "" {
line.WriteString(" [reason: ")
line.WriteString(logs.YellowBold(bl.Reason))
line.WriteString("]")
}
if bl.ErrString != "" {
line.WriteString(" [err: ")
line.WriteString(logs.RedBold(bl.ErrString))
line.WriteString("]")
return line.String()
}
line.WriteString(" - ")
line.WriteString(logs.GreenBold(strconv.Itoa(bl.Status)))
line.WriteString(" - ")
line.WriteString(logs.YellowBold(strconv.Itoa(bl.BodyLength)))
if bl.ExceedLength {
line.WriteString(logs.Red("(exceed)"))
}
line.WriteString(" - ")
line.WriteString(logs.YellowBold(strconv.Itoa(int(bl.Spended)) + "ms"))
line.WriteString(logs.GreenLine(bl.Additional("title")))
if bl.Distance != 0 {
line.WriteString(logs.GreenLine(bl.Additional("sim")))
}
line.WriteString(logs.Cyan(bl.Frameworks.String()))
line.WriteString(logs.Cyan(bl.Extracteds.String()))
if bl.RedirectURL != "" {
line.WriteString(" --> ")
line.WriteString(logs.CyanLine(bl.RedirectURL))
line.WriteString(" ")
}
if len(bl.Extracteds) > 0 {
for _, e := range bl.Extracteds {
line.WriteString("\n " + e.Name + ": \n\t")
line.WriteString(logs.GreenLine(strings.Join(e.ExtractResult, "\n\t")))
}
}
return line.String()
}
func (bl *Baseline) String() string {
var line strings.Builder
line.WriteString(logs.GreenLine("[" + GetSourceName(bl.Source) + "]"))
if bl.FrontURL != "" {
line.WriteString(bl.FrontURL)
line.WriteString(" --> ")
}
line.WriteString(" ")
line.WriteString(bl.UrlString)
if bl.Host != "" {
line.WriteString(" (" + bl.Host + ")")
}
if bl.Reason != "" {
line.WriteString(" [reason: ")
line.WriteString(bl.Reason)
line.WriteString("]")
}
if bl.ErrString != "" {
line.WriteString(" [err: ")
line.WriteString(bl.ErrString)
line.WriteString("]")
return line.String()
}
line.WriteString(" - ")
line.WriteString(strconv.Itoa(bl.Status))
line.WriteString(" - ")
line.WriteString(strconv.Itoa(bl.BodyLength))
if bl.ExceedLength {
line.WriteString("(exceed)")
}
line.WriteString(" - ")
line.WriteString(strconv.Itoa(int(bl.Spended)) + "ms")
line.WriteString(bl.Additional("title"))
if bl.Distance != 0 {
line.WriteString(logs.GreenLine(bl.Additional("sim")))
}
line.WriteString(bl.Frameworks.String())
line.WriteString(bl.Extracteds.String())
if bl.RedirectURL != "" {
line.WriteString(" --> ")
line.WriteString(bl.RedirectURL)
line.WriteString(" ")
}
if len(bl.Extracteds) > 0 {
for _, e := range bl.Extracteds {
line.WriteString("\n " + e.Name + ": \n\t")
line.WriteString(strings.Join(e.ExtractResult, "\n\t"))
}
}
return line.String()
}
func (bl *Baseline) Jsonify() string {
bs, err := json.Marshal(bl)
if err != nil {
return ""
}
return string(bs)
}

46
pkg/config.go Normal file
View File

@ -0,0 +1,46 @@
package pkg
import (
"github.com/antonmedv/expr/vm"
"github.com/chainreactors/words/rule"
)
type SprayMod int
const (
PathSpray SprayMod = iota + 1
HostSpray
ParamSpray
CustomSpray
)
var ModMap = map[string]SprayMod{
"path": PathSpray,
"host": HostSpray,
}
type Config struct {
BaseURL string
Thread int
Wordlist []string
Timeout int
CheckPeriod int
ErrPeriod int
BreakThreshold int
Method string
Mod SprayMod
Headers map[string]string
ClientType int
MatchExpr *vm.Program
FilterExpr *vm.Program
RecuExpr *vm.Program
AppendRule *rule.Program
OutputCh chan *Baseline
FuzzyCh chan *Baseline
Fuzzy bool
IgnoreWaf bool
Crawl bool
Active bool
Bak bool
Common bool
}

View File

@ -1,41 +0,0 @@
package pkg
type ErrorType uint
const (
NoErr ErrorType = iota
ErrBadStatus
ErrSameStatus
ErrRequestFailed
ErrWaf
ErrRedirect
ErrCompareFailed
ErrCustomCompareFailed
ErrCustomFilter
ErrFuzzyCompareFailed
ErrFuzzyRedirect
ErrFuzzyNotUnique
ErrUrlError
ErrResponseError
)
var ErrMap = map[ErrorType]string{
NoErr: "",
ErrBadStatus: "blacklist status",
ErrSameStatus: "same status with random baseline",
ErrRequestFailed: "request failed",
ErrWaf: "maybe banned by waf",
ErrRedirect: "duplicate redirect url",
ErrCompareFailed: "compare failed",
ErrCustomCompareFailed: "custom compare failed",
ErrCustomFilter: "custom filtered",
ErrFuzzyCompareFailed: "fuzzy compare failed",
ErrFuzzyRedirect: "fuzzy redirect",
ErrFuzzyNotUnique: "not unique",
ErrUrlError: "url parse error",
ErrResponseError: "response parse error",
}
func (e ErrorType) Error() string {
return ErrMap[e]
}

View File

@ -1,17 +0,0 @@
package pkg
import (
"bytes"
"github.com/chainreactors/fingers/common"
)
// gogo fingers engine
func FingersDetect(content []byte) common.Frameworks {
frames, _ := FingerEngine.Fingers().HTTPMatch(bytes.ToLower(content), "")
return frames
}
func EngineDetect(content []byte) common.Frameworks {
frames, _ := FingerEngine.DetectContent(content)
return frames
}

103
pkg/ihttp/client.go Normal file
View File

@ -0,0 +1,103 @@
package ihttp
import (
"context"
"crypto/tls"
"fmt"
"github.com/valyala/fasthttp"
"net/http"
"time"
)
var (
DefaultMaxBodySize = 1024 * 100 // 100k
)
const (
FAST = iota
STANDARD
)
func NewClient(thread int, timeout int, clientType int) *Client {
if clientType == FAST {
return &Client{
fastClient: &fasthttp.Client{
TLSConfig: &tls.Config{
Renegotiation: tls.RenegotiateOnceAsClient,
InsecureSkipVerify: true,
},
MaxConnsPerHost: thread * 3 / 2,
MaxIdleConnDuration: time.Duration(timeout) * time.Second,
MaxConnWaitTimeout: time.Duration(timeout) * time.Second,
ReadTimeout: time.Duration(timeout) * time.Second,
WriteTimeout: time.Duration(timeout) * time.Second,
ReadBufferSize: 16384, // 16k
MaxResponseBodySize: DefaultMaxBodySize,
NoDefaultUserAgentHeader: true,
DisablePathNormalizing: true,
DisableHeaderNamesNormalizing: true,
},
timeout: time.Duration(timeout) * time.Second,
clientType: clientType,
}
} else {
return &Client{
standardClient: &http.Client{
Transport: &http.Transport{
//Proxy: Proxy,
//TLSHandshakeTimeout : delay * time.Second,
TLSClientConfig: &tls.Config{
Renegotiation: tls.RenegotiateOnceAsClient,
InsecureSkipVerify: true,
},
MaxConnsPerHost: thread * 3 / 2,
IdleConnTimeout: time.Duration(timeout) * time.Second,
ReadBufferSize: 16384, // 16k
},
Timeout: time.Second * time.Duration(timeout),
CheckRedirect: func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
},
},
timeout: time.Duration(timeout) * time.Second,
clientType: clientType,
}
}
}
type Client struct {
fastClient *fasthttp.Client
standardClient *http.Client
clientType int
timeout time.Duration
}
func (c *Client) TransToCheck() {
if c.fastClient != nil {
c.fastClient.MaxConnsPerHost = 1
}
}
func (c *Client) FastDo(ctx context.Context, req *fasthttp.Request) (*fasthttp.Response, error) {
resp := fasthttp.AcquireResponse()
err := c.fastClient.Do(req, resp)
return resp, err
}
func (c *Client) StandardDo(ctx context.Context, req *http.Request) (*http.Response, error) {
return c.standardClient.Do(req)
}
func (c *Client) Do(ctx context.Context, req *Request) (*Response, error) {
if c.fastClient != nil {
resp, err := c.FastDo(ctx, req.FastRequest)
return &Response{FastResponse: resp, ClientType: FAST}, err
} else if c.standardClient != nil {
resp, err := c.StandardDo(ctx, req.StandardRequest)
return &Response{StandardResponse: resp, ClientType: STANDARD}, err
} else {
return nil, fmt.Errorf("not found client")
}
}
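// Usage sketch (illustrative, not part of the original source), combining the
// helpers in this package:
//
//	client := NewClient(50, 2, FAST)
//	req, _ := BuildPathRequest(FAST, "http://example.com", "admin")
//	resp, err := client.Do(context.Background(), req)
//	if err == nil {
//		fmt.Println(resp.StatusCode()) // remember to release fasthttp objects when done
//	}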

View File

@ -1,26 +1,31 @@
package ihttp
import (
"context"
"github.com/chainreactors/spray/pkg"
"github.com/valyala/fasthttp"
"net/http"
"strings"
)
func BuildRequest(ctx context.Context, clientType int, base, path, host, method string) (*Request, error) {
func BuildPathRequest(clientType int, base, path string) (*Request, error) {
if clientType == FAST {
req := fasthttp.AcquireRequest()
req.Header.SetMethod(method)
req.SetRequestURI(base + path)
if host != "" {
req.SetHost(host)
}
req.SetRequestURI(safeUrlJoin(base, path))
return &Request{FastRequest: req, ClientType: FAST}, nil
} else {
req, err := http.NewRequestWithContext(ctx, method, base+path, nil)
if host != "" {
req.Host = host
}
req, err := http.NewRequest("GET", safeUrlJoin(base, path), nil)
return &Request{StandardRequest: req, ClientType: STANDARD}, err
}
}
func BuildHostRequest(clientType int, base, host string) (*Request, error) {
if clientType == FAST {
req := fasthttp.AcquireRequest()
req.SetRequestURI(base)
req.SetHost(host)
return &Request{FastRequest: req, ClientType: FAST}, nil
} else {
req, err := http.NewRequest("GET", base, nil)
req.Host = host
return &Request{StandardRequest: req, ClientType: STANDARD}, err
}
}
@ -31,18 +36,14 @@ type Request struct {
ClientType int
}
func (r *Request) SetHeaders(header http.Header, RandomUA bool) {
if RandomUA {
r.SetHeader("User-Agent", pkg.RandomUA())
}
func (r *Request) SetHeaders(header map[string]string) {
if r.StandardRequest != nil {
r.StandardRequest.Header = header
for k, v := range header {
r.StandardRequest.Header.Set(k, v)
}
} else if r.FastRequest != nil {
for k, v := range header {
for _, i := range v {
r.FastRequest.Header.Set(k, i)
}
r.FastRequest.Header.Set(k, v)
}
}
}
@ -74,3 +75,15 @@ func (r *Request) Host() string {
return ""
}
}
func safeUrlJoin(base, uri string) string {
if uri == "" {
// if the path is empty, request the base URL as-is
return base
}
if !strings.HasSuffix(base, "/") && !strings.HasPrefix(uri, "/") {
return base + "/" + uri
} else {
return base + uri
}
}
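The join rules above are easiest to read as a table; a small in-package test sketch (hypothetical file, not part of the diff history) that pins them down:

package ihttp

import "testing"

// safeUrlJoin only inserts a "/" when neither side provides one,
// and never deduplicates an existing double slash.
func TestSafeUrlJoin(t *testing.T) {
	cases := []struct{ base, uri, want string }{
		{"http://a.com", "admin", "http://a.com/admin"},    // slash inserted
		{"http://a.com/", "admin", "http://a.com/admin"},   // base already slashed
		{"http://a.com", "/admin", "http://a.com/admin"},   // uri already slashed
		{"http://a.com/", "/admin", "http://a.com//admin"}, // both slashed: kept
		{"http://a.com", "", "http://a.com"},               // empty uri: base as-is
	}
	for _, c := range cases {
		if got := safeUrlJoin(c.base, c.uri); got != c.want {
			t.Errorf("safeUrlJoin(%q, %q) = %q, want %q", c.base, c.uri, got, c.want)
		}
	}
}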

View File

@ -1,8 +1,8 @@
package ihttp
import (
"bytes"
"github.com/chainreactors/logs"
"github.com/chainreactors/utils/httputils"
"github.com/valyala/fasthttp"
"io"
"net/http"
@ -29,7 +29,7 @@ func (r *Response) Body() []byte {
if r.FastResponse != nil {
return r.FastResponse.Body()
} else if r.StandardResponse != nil {
if r.StandardResponse.ContentLength == -1 {
if DefaultMaxBodySize == 0 {
body, err := io.ReadAll(r.StandardResponse.Body)
if err != nil {
return nil
@ -37,10 +37,10 @@ func (r *Response) Body() []byte {
return body
} else {
var body []byte
if r.StandardResponse.ContentLength > 0 && CheckBodySize(r.StandardResponse.ContentLength) {
if r.StandardResponse.ContentLength > 0 && r.StandardResponse.ContentLength < int64(DefaultMaxBodySize) {
body = make([]byte, r.StandardResponse.ContentLength)
} else {
return nil
body = make([]byte, DefaultMaxBodySize)
}
n, err := io.ReadFull(r.StandardResponse.Body, body)
@ -49,11 +49,10 @@ func (r *Response) Body() []byte {
return body
} else if err == io.ErrUnexpectedEOF {
return body[:n]
} else if err == io.EOF {
return nil
} else {
logs.Log.Error("readfull failed, " + err.Error())
logs.Log.Error("readfull failed" + err.Error())
return nil
}
}
return nil
@ -62,11 +61,11 @@ func (r *Response) Body() []byte {
}
}
func (r *Response) ContentLength() int64 {
func (r *Response) ContentLength() int {
if r.FastResponse != nil {
return int64(r.FastResponse.Header.ContentLength())
return r.FastResponse.Header.ContentLength()
} else if r.StandardResponse != nil {
return r.StandardResponse.ContentLength
return int(r.StandardResponse.ContentLength)
} else {
return 0
}
@ -93,7 +92,13 @@ func (r *Response) Header() []byte {
if r.FastResponse != nil {
return r.FastResponse.Header.Header()
} else if r.StandardResponse != nil {
return append(httputils.ReadRawHeader(r.StandardResponse), []byte("\r\n")...)
var header bytes.Buffer
for k, v := range r.StandardResponse.Header {
for _, i := range v {
header.WriteString(k + ": " + i + "\r\n")
}
}
return header.Bytes()
} else {
return nil
}
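The capped read above is subtle: io.ReadFull reports ErrUnexpectedEOF for bodies shorter than the buffer, which is the normal case here. A self-contained sketch of the same pattern, independent of this package:

package main

import (
	"fmt"
	"io"
	"strings"
)

const maxBodySize = 1024 * 100 // mirrors DefaultMaxBodySize

// readCapped reads at most maxBodySize bytes, sizing the buffer from
// Content-Length when it is known and within the cap.
func readCapped(r io.Reader, contentLength int64) []byte {
	var buf []byte
	if contentLength > 0 && contentLength < int64(maxBodySize) {
		buf = make([]byte, contentLength)
	} else {
		buf = make([]byte, maxBodySize)
	}
	n, err := io.ReadFull(r, buf)
	if err == nil {
		return buf
	} else if err == io.ErrUnexpectedEOF {
		return buf[:n] // body shorter than the buffer: keep what arrived
	}
	return nil // io.EOF (empty body) or a real read error
}

func main() {
	fmt.Printf("%q\n", readCapped(strings.NewReader("hello"), -1)) // "hello"
}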

View File

@ -1,136 +0,0 @@
package pkg
import (
"fmt"
"github.com/chainreactors/fingers"
"github.com/chainreactors/parsers"
"github.com/chainreactors/utils"
"github.com/chainreactors/utils/iutils"
"github.com/chainreactors/words/mask"
"os"
yaml "sigs.k8s.io/yaml/goyaml.v3"
"strings"
)
func LoadPorts() error {
var err error
var ports []*utils.PortConfig
err = yaml.Unmarshal(LoadConfig("port"), &ports)
if err != nil {
return err
}
utils.PrePort = utils.NewPortPreset(ports)
return nil
}
func LoadFingers() error {
var err error
FingerEngine, err = fingers.NewEngine()
if err != nil {
return err
}
for _, f := range FingerEngine.Fingers().HTTPFingers {
for _, rule := range f.Rules {
if rule.SendDataStr != "" {
ActivePath = append(ActivePath, rule.SendDataStr)
}
}
}
for _, f := range FingerEngine.FingerPrintHub().FingerPrints {
if f.Path != "/" {
ActivePath = append(ActivePath, f.Path)
}
}
return nil
}
func LoadTemplates() error {
var err error
// load rule
err = yaml.Unmarshal(LoadConfig("spray_rule"), &Rules)
if err != nil {
return err
}
// load default words
var dicts map[string]string
err = yaml.Unmarshal(LoadConfig("spray_dict"), &dicts)
if err != nil {
return err
}
for name, wordlist := range dicts {
dict := strings.Split(strings.TrimSpace(wordlist), "\n")
for i, d := range dict {
dict[i] = strings.TrimSpace(d)
}
Dicts[strings.TrimSuffix(name, ".txt")] = dict
}
// load mask
var keywords map[string]interface{}
err = yaml.Unmarshal(LoadConfig("spray_common"), &keywords)
if err != nil {
return err
}
for k, v := range keywords {
t := make([]string, len(v.([]interface{})))
for i, vv := range v.([]interface{}) {
t[i] = iutils.ToString(vv)
}
mask.SpecialWords[k] = t
}
var extracts []*parsers.Extractor
err = yaml.Unmarshal(LoadConfig("extract"), &extracts)
if err != nil {
return err
}
for _, extract := range extracts {
extract.Compile()
ExtractRegexps[extract.Name] = []*parsers.Extractor{extract}
for _, tag := range extract.Tags {
if _, ok := ExtractRegexps[tag]; !ok {
ExtractRegexps[tag] = []*parsers.Extractor{extract}
} else {
ExtractRegexps[tag] = append(ExtractRegexps[tag], extract)
}
}
}
return nil
}
func LoadExtractorConfig(filename string) ([]*parsers.Extractor, error) {
var extracts []*parsers.Extractor
content, err := os.ReadFile(filename)
if err != nil {
return nil, err
}
err = yaml.Unmarshal(content, &extracts)
if err != nil {
return nil, err
}
for _, extract := range extracts {
extract.Compile()
}
return extracts, nil
}
func Load() error {
err := LoadPorts()
if err != nil {
return fmt.Errorf("load ports, %w", err)
}
err = LoadTemplates()
if err != nil {
return fmt.Errorf("load templates, %w", err)
}
return nil
}

View File

@ -1,26 +0,0 @@
package pkg
import "strings"
var (
SkipChar = "%SKIP%"
EXTChar = "%EXT%"
)
func ParseEXTPlaceholderFunc(exts []string) func(string) []string {
return func(s string) []string {
ss := make([]string, len(exts))
var n int
for i, e := range exts {
if strings.Contains(s, EXTChar) {
n++
ss[i] = strings.Replace(s, EXTChar, e, -1)
}
}
if n == 0 {
return []string{s}
} else {
return ss
}
}
}
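A standalone sketch of the %EXT% expansion contract (the names below are illustrative, not this package's API):

package main

import (
	"fmt"
	"strings"
)

const extChar = "%EXT%" // mirrors EXTChar above

// expandEXT behaves like ParseEXTPlaceholderFunc: a word containing
// %EXT% expands into one word per extension, others pass through.
func expandEXT(exts []string) func(string) []string {
	return func(s string) []string {
		if !strings.Contains(s, extChar) {
			return []string{s}
		}
		out := make([]string, len(exts))
		for i, e := range exts {
			out[i] = strings.ReplaceAll(s, extChar, e)
		}
		return out
	}
}

func main() {
	f := expandEXT([]string{"php", "jsp"})
	fmt.Println(f("index.%EXT%")) // [index.php index.jsp]
	fmt.Println(f("robots.txt"))  // [robots.txt]
}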

View File

@ -5,7 +5,6 @@ import (
"encoding/json"
"fmt"
"github.com/chainreactors/logs"
"github.com/chainreactors/parsers"
"io/ioutil"
"strconv"
"strings"
@ -18,7 +17,6 @@ func NewStatistor(url string) *Statistor {
stat := DefaultStatistor
stat.StartTime = time.Now().Unix()
stat.Counts = make(map[int]int)
stat.Sources = make(map[parsers.SpraySource]int)
stat.BaseUrl = url
return &stat
}
@ -32,48 +30,36 @@ func NewStatistorFromStat(origin *Statistor) *Statistor {
RuleFiles: origin.RuleFiles,
RuleFilter: origin.RuleFilter,
Counts: make(map[int]int),
Sources: map[parsers.SpraySource]int{},
StartTime: time.Now().Unix(),
}
}
type Statistor struct {
BaseUrl string `json:"url"`
Error string `json:"error"`
Counts map[int]int `json:"counts"`
Sources map[parsers.SpraySource]int `json:"sources"`
FailedNumber int32 `json:"failed"`
ReqTotal int32 `json:"req_total"`
CheckNumber int `json:"check"`
FoundNumber int `json:"found"`
FilteredNumber int `json:"filtered"`
FuzzyNumber int `json:"fuzzy"`
WafedNumber int `json:"wafed"`
End int `json:"end"`
Skipped int `json:"skipped"`
Offset int `json:"offset"`
Total int `json:"total"`
StartTime int64 `json:"start_time"`
EndTime int64 `json:"end_time"`
WordCount int `json:"word_count"`
Word string `json:"word"`
Dictionaries []string `json:"dictionaries"`
RuleFiles []string `json:"rule_files"`
RuleFilter string `json:"rule_filter"`
BaseUrl string `json:"url"`
Counts map[int]int `json:"counts"`
FailedNumber int32 `json:"failed"`
ReqTotal int32 `json:"req_total"`
CheckNumber int `json:"check"`
FoundNumber int `json:"found"`
FilteredNumber int `json:"filtered"`
FuzzyNumber int `json:"fuzzy"`
WafedNumber int `json:"wafed"`
End int `json:"end"`
Offset int `json:"offset"`
Total int `json:"total"`
StartTime int64 `json:"start_time"`
EndTime int64 `json:"end_time"`
WordCount int `json:"word_count"`
Word string `json:"word"`
Dictionaries []string `json:"dictionaries"`
RuleFiles []string `json:"rule_files"`
RuleFilter string `json:"rule_filter"`
}
func (stat *Statistor) ColorString() string {
var s strings.Builder
s.WriteString(fmt.Sprintf("[stat] %s took %d s, request total: %s, finish: %s/%s(%s skipped), found: %s, check: %s, failed: %s",
logs.GreenLine(stat.BaseUrl),
stat.EndTime-stat.StartTime,
logs.YellowBold(strconv.Itoa(int(stat.ReqTotal))),
logs.YellowBold(strconv.Itoa(stat.End)),
logs.YellowBold(strconv.Itoa(stat.Total)),
logs.YellowLine(strconv.Itoa(stat.Skipped)),
logs.YellowBold(strconv.Itoa(stat.FoundNumber)),
logs.YellowBold(strconv.Itoa(stat.CheckNumber)),
logs.YellowBold(strconv.Itoa(int(stat.FailedNumber)))))
s.WriteString(fmt.Sprintf("[stat] %s took %d s, request total: %s, finish: %s/%s, found: %s, check: %s, failed: %s", logs.GreenLine(stat.BaseUrl), stat.EndTime-stat.StartTime, logs.YellowBold(strconv.Itoa(int(stat.ReqTotal))), logs.YellowBold(strconv.Itoa(stat.End)), logs.YellowBold(strconv.Itoa(stat.Total)), logs.YellowBold(strconv.Itoa(stat.FoundNumber)), logs.YellowBold(strconv.Itoa(stat.CheckNumber)), logs.YellowBold(strconv.Itoa(int(stat.FailedNumber)))))
if stat.FuzzyNumber != 0 {
s.WriteString(", fuzzy: " + logs.Yellow(strconv.Itoa(stat.FuzzyNumber)))
@ -88,16 +74,7 @@ func (stat *Statistor) ColorString() string {
}
func (stat *Statistor) String() string {
var s strings.Builder
s.WriteString(fmt.Sprintf("[stat] %s took %d s, request total: %d, finish: %d/%d(%d skipped), found: %d, check: %d, failed: %d",
stat.BaseUrl,
stat.EndTime-stat.StartTime,
stat.ReqTotal,
stat.End,
stat.Total,
stat.Skipped,
stat.FoundNumber,
stat.CheckNumber,
stat.FailedNumber))
s.WriteString(fmt.Sprintf("[stat] %s took %d s, request total: %d, finish: %d/%d, found: %d, check: %d, failed: %d", stat.BaseUrl, stat.EndTime-stat.StartTime, stat.ReqTotal, stat.End, stat.Total, stat.FoundNumber, stat.CheckNumber, stat.FailedNumber))
if stat.FuzzyNumber != 0 {
s.WriteString(", fuzzy: " + strconv.Itoa(stat.FuzzyNumber))
@ -111,10 +88,7 @@ func (stat *Statistor) String() string {
return s.String()
}
func (stat *Statistor) CountString() string {
if len(stat.Counts) == 0 {
return ""
}
func (stat *Statistor) Detail() string {
var s strings.Builder
s.WriteString("[stat] ")
s.WriteString(stat.BaseUrl)
@ -127,42 +101,15 @@ func (stat *Statistor) CountString() string {
return s.String()
}
func (stat *Statistor) SourceString() string {
if len(stat.Sources) == 0 {
return ""
}
func (stat *Statistor) ColorDetail() string {
var s strings.Builder
s.WriteString("[stat] ")
s.WriteString(stat.BaseUrl)
for k, v := range stat.Sources {
s.WriteString(fmt.Sprintf(" %s: %d,", k.Name(), v))
}
return s.String()
}
func (stat *Statistor) ColorCountString() string {
if len(stat.Counts) == 0 {
return ""
}
var s strings.Builder
s.WriteString(fmt.Sprintf("[stat] %s ", stat.BaseUrl))
for k, v := range stat.Counts {
if k == 0 {
continue
}
s.WriteString(fmt.Sprintf(" %s: %s,", logs.Cyan(strconv.Itoa(k)), logs.YellowBold(strconv.Itoa(v))))
}
return s.String()
}
func (stat *Statistor) ColorSourceString() string {
if len(stat.Sources) == 0 {
return ""
}
var s strings.Builder
s.WriteString(fmt.Sprintf("[stat] %s ", stat.BaseUrl))
for k, v := range stat.Sources {
s.WriteString(fmt.Sprintf(" %s: %s,", logs.Cyan(k.Name()), logs.YellowBold(strconv.Itoa(v))))
s.WriteString(fmt.Sprintf(" %s: %s,", logs.YellowBold(strconv.Itoa(k)), logs.YellowBold(strconv.Itoa(v))))
}
return s.String()
}
@ -189,7 +136,6 @@ func ReadStatistors(filename string) (Statistors, error) {
}
stats = append(stats, &stat)
}
return stats, nil
}

pkg/types.go Normal file
View File

@ -0,0 +1,29 @@
package pkg
import (
"github.com/chainreactors/gogo/v2/pkg/fingers"
"github.com/chainreactors/parsers"
"strings"
)
type Frameworks []*parsers.Framework
func (fs Frameworks) String() string {
frameworkStrs := make([]string, len(fs))
for i, f := range fs {
frameworkStrs[i] = " [" + f.String() + "]"
}
return strings.Join(frameworkStrs, " ") + " "
}
type Extracteds []*fingers.Extracted
func (es Extracteds) String() string {
var s strings.Builder
for _, e := range es {
s.WriteString("[ " + e.ToString() + " ]")
}
return s.String() + " "
}
var Extractors = make(fingers.Extractors)

View File

@ -1,57 +1,40 @@
package pkg
import (
"bufio"
"bytes"
"github.com/chainreactors/files"
"github.com/chainreactors/fingers"
"github.com/chainreactors/logs"
"github.com/chainreactors/parsers"
"github.com/chainreactors/utils/iutils"
"encoding/json"
"github.com/chainreactors/gogo/v2/pkg/fingers"
"github.com/chainreactors/gogo/v2/pkg/utils"
"github.com/chainreactors/ipcs"
"github.com/chainreactors/words/mask"
"github.com/chainreactors/words/rule"
"github.com/expr-lang/expr"
"github.com/expr-lang/expr/vm"
"io/ioutil"
"math/rand"
"net/http"
"net/url"
"os"
"path"
"path/filepath"
"strconv"
"regexp"
"strings"
"time"
"unsafe"
)
var (
LogVerbose = logs.Warn - 2
LogFuzz = logs.Warn - 1
DefaultWhiteStatus = []int{200} // cmd input
DefaultBlackStatus = []int{400, 410} // cmd input
DefaultFuzzyStatus = []int{500, 501, 502, 503, 301, 302, 404} // cmd input
DefaultUniqueStatus = []int{403, 200, 404} // 403s with the same unique hash hit the same ACL; 200s with the same hash are the same default page
WhiteStatus = []int{} // cmd input, 200
BlackStatus = []int{} // cmd input, 400,410
FuzzyStatus = []int{} // cmd input, 500,501,502,503
WAFStatus = []int{493, 418, 1020, 406, 429, 406, 412}
UniqueStatus = []int{} // 403s with the same unique hash hit the same ACL; 200s with the same hash are the same default page
Md5Fingers map[string]string = make(map[string]string)
Mmh3Fingers map[string]string = make(map[string]string)
Rules map[string]string = make(map[string]string)
ActivePath []string
Fingers fingers.Fingers
JSRegexps []*regexp.Regexp = []*regexp.Regexp{
regexp.MustCompile(`.(https{0,1}:[^\s^'^,^^"^”^>^<^;^(^)^|^*^\[]{2,250}?[^=^*^\s^'^^"^”^>^<^:^;^*^|^(^)^\[]{3}[.]js)`),
regexp.MustCompile(`["'‘“]\s{0,6}(/{0,1}[^\s^,^'^^"^”^|^>^<^:^;^*^(^\)^\[]{2,250}?[^=^*^\s^'^^|^"^”^>^<^:^;^*^(^)^\[]{3}[.]js)`),
regexp.MustCompile(`=\s{0,6}["'’”]{0,1}\s{0,6}(/{0,1}[^\s^'^,^^"^”^>^<^;^(^)^|^*^\[]{2,250}?[^=^,^*^\s^'^^"^”^>^|^<^:^;^*^(^)^\[]{3}[.]js)`),
}
URLRegexps []*regexp.Regexp = []*regexp.Regexp{
regexp.MustCompile(`["'‘“]\s{0,6}(https{0,1}:[^\s^,^'^^"^”^>^<^),^(]{2,250}?)\s{0,6}["'‘“]`),
regexp.MustCompile(`=\s{0,6}(https{0,1}:[^\s^'^,^^"^”^>^<^;^(^)^|^*^\[]{2,250})`),
regexp.MustCompile(`["']([\w/]{2,250}?\.\w{2,4}?)["']`),
regexp.MustCompile(`["'‘“]\s{0,6}([#,.]{0,2}/[^\s^'^,^^"^”^>^<^;^(^)^|^*^\[]{2,250}?)\s{0,6}["'‘“]`),
regexp.MustCompile(`href\s{0,6}=\s{0,6}["'‘“]{0,1}\s{0,6}([^\s^'^,^^"^”^>^<^;^(^)^|^*^\[]{2,250})|action\s{0,6}=\s{0,6}["'‘“]{0,1}\s{0,6}([^\s^'^^"^“^>^<^)^(]{2,250})`),
}
// plugins
EnableAllFingerEngine = false
)
var (
Rules map[string]string = make(map[string]string)
Dicts map[string][]string = make(map[string][]string)
wordlistCache = make(map[string][]string)
ruleCache = make(map[string][]rule.Expression)
BadExt = []string{".js", ".css", ".scss", ".,", ".jpeg", ".jpg", ".png", ".gif", ".svg", ".vue", ".ts", ".swf", ".pdf", ".mp4", ".zip", ".rar"}
BadURL = []string{";", "}", "\\n", "webpack://", "{", "www.w3.org", ".src", ".url", ".att", ".href", "location.href", "javascript:", "location:", ".createObject", ":location", ".path"}
ExtractRegexps = make(parsers.Extractors)
Extractors = make(parsers.Extractors)
FingerEngine *fingers.Engine
ActivePath []string
ContentTypeMap = map[string]string{
"application/javascript": "js",
"application/json": "json",
@ -75,30 +58,52 @@ var (
"video/avi": "avi",
"image/x-icon": "ico",
}
// from feroxbuster
randomUserAgent = []string{
"Mozilla/5.0 (Linux; Android 8.0.0; SM-G960F Build/R16NW) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.84 Mobile Safari/537.36",
"Mozilla/5.0 (iPhone; CPU iPhone OS 12_0 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1",
"Mozilla/5.0 (Windows Phone 10.0; Android 6.0.1; Microsoft; RM-1152) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Mobile Safari/537.36 Edge/15.15254",
"Mozilla/5.0 (Linux; Android 7.0; Pixel C Build/NRD90M; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/52.0.2743.98 Safari/537.36",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.246",
"Mozilla/5.0 (X11; CrOS x86_64 8172.45.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.64 Safari/537.36",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/601.3.9 (KHTML, like Gecko) Version/9.0.2 Safari/601.3.9",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.111 Safari/537.36",
"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:15.0) Gecko/20100101 Firefox/15.0.1",
"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
"Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)",
"Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)",
}
uacount = len(randomUserAgent)
DefaultUserAgent = randomUserAgent[rand.Intn(uacount)]
)
type BS []byte
func StringsContains(s []string, e string) bool {
for _, v := range s {
if v == e {
return true
}
}
return false
}
func (b BS) String() string {
return string(b)
func IntsContains(s []int, e int) bool {
for _, v := range s {
if v == e {
return true
}
}
return false
}
func RemoveDuplication(arr []string) []string {
set := make(map[string]struct{}, len(arr))
j := 0
for _, v := range arr {
_, ok := set[v]
if ok {
continue
}
set[v] = struct{}{}
arr[j] = v
j++
}
return arr[:j]
}
func HasStdin() bool {
stat, err := os.Stdin.Stat()
if err != nil {
return false
}
isPipedFromChrDev := (stat.Mode() & os.ModeCharDevice) == 0
isPipedFromFIFO := (stat.Mode() & os.ModeNamedPipe) != 0
return isPipedFromChrDev || isPipedFromFIFO
}
const letters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
@ -151,7 +156,81 @@ func RandHost() string {
return *(*string)(unsafe.Pointer(&b))
}
func FilterJs(u string) bool {
func LoadTemplates() error {
var err error
// load fingers
Fingers, err = fingers.LoadFingers(LoadConfig("http"))
if err != nil {
return err
}
for _, finger := range Fingers {
err := finger.Compile(ipcs.ParsePorts)
if err != nil {
return err
}
}
for _, f := range Fingers {
for _, rule := range f.Rules {
if rule.SendDataStr != "" {
ActivePath = append(ActivePath, rule.SendDataStr)
}
if rule.Favicon != nil {
for _, mmh3 := range rule.Favicon.Mmh3 {
Mmh3Fingers[mmh3] = f.Name
}
for _, md5 := range rule.Favicon.Md5 {
Md5Fingers[md5] = f.Name
}
}
}
}
// load rule
var data map[string]interface{}
err = json.Unmarshal(LoadConfig("rule"), &data)
if err != nil {
return err
}
for k, v := range data {
Rules[k] = v.(string)
}
// load mask
var keywords map[string]interface{}
err = json.Unmarshal(LoadConfig("mask"), &keywords)
if err != nil {
return err
}
for k, v := range keywords {
t := make([]string, len(v.([]interface{})))
for i, vv := range v.([]interface{}) {
t[i] = utils.ToString(vv)
}
mask.SpecialWords[k] = t
}
return nil
}
func FingerDetect(content string) Frameworks {
var frames Frameworks
for _, finger := range Fingers {
frame, _, ok := fingers.FingerMatcher(finger, content, 0, nil)
if ok {
frames = append(frames, frame)
}
}
return frames
}
var (
BadExt = []string{".js", ".css", ".scss", ".,", ".jpeg", ".jpg", ".png", ".gif", ".svg", ".vue", ".ts", ".swf", ".pdf"}
BadURL = []string{";", "}", "{", "www.w3.org", "example.com", ".src", ".url", ".att", ".href", "location.href", "javascript:", "location:", ".createObject", ":location", ".path", "*#__PURE__*"}
)
func filterJs(u string) bool {
if commonFilter(u) {
return true
}
@ -159,7 +238,7 @@ func FilterJs(u string) bool {
return false
}
func FilterUrl(u string) bool {
func filterUrl(u string) bool {
if commonFilter(u) {
return true
}
@ -178,20 +257,8 @@ func FilterUrl(u string) bool {
return false
}
func CleanURL(u string) string {
func formatURL(u string) string {
// drop the fragment and query string: saves url.Parse work and avoids surprises from parameters
u = strings.Trim(u, "\"")
u = strings.Trim(u, "'")
if strings.Contains(u, "2f") || strings.Contains(u, "2F") {
u = strings.ReplaceAll(u, "\\u002F", "/")
u = strings.ReplaceAll(u, "\\u002f", "/")
u = strings.ReplaceAll(u, "%252F", "/")
u = strings.ReplaceAll(u, "%252f", "/")
u = strings.ReplaceAll(u, "%2f", "/")
u = strings.ReplaceAll(u, "%2F", "/")
}
u = strings.TrimRight(u, "\\")
if i := strings.Index(u, "?"); i != -1 {
return u[:i]
}
@ -206,411 +273,35 @@ func commonFilter(u string) bool {
return true
}
for _, bad := range BadURL {
if strings.Contains(u, bad) {
for _, scoop := range BadURL {
if strings.Contains(u, scoop) {
return true
}
}
return false
}
func URLJoin(base, uri string) string {
baseSlash := strings.HasSuffix(base, "/")
uriSlash := strings.HasPrefix(uri, "/")
if (baseSlash && !uriSlash) || (!baseSlash && uriSlash) {
return base + uri
} else if baseSlash && uriSlash {
return base + uri[1:]
} else {
return base + "/" + uri
}
}
func BakGenerator(domain string) []string {
var possibilities []string
for first := range domain {
for last := range domain[first:] {
p := domain[first : first+last+1]
if !iutils.StringsContains(possibilities, p) {
if !StringsContains(possibilities, p) {
possibilities = append(possibilities, p)
}
}
}
return possibilities
}
var MbTable = []uint16{
0x0000, 0xC0C1, 0xC181, 0x0140, 0xC301, 0x03C0, 0x0280, 0xC241,
0xC601, 0x06C0, 0x0780, 0xC741, 0x0500, 0xC5C1, 0xC481, 0x0440,
0xCC01, 0x0CC0, 0x0D80, 0xCD41, 0x0F00, 0xCFC1, 0xCE81, 0x0E40,
0x0A00, 0xCAC1, 0xCB81, 0x0B40, 0xC901, 0x09C0, 0x0880, 0xC841,
0xD801, 0x18C0, 0x1980, 0xD941, 0x1B00, 0xDBC1, 0xDA81, 0x1A40,
0x1E00, 0xDEC1, 0xDF81, 0x1F40, 0xDD01, 0x1DC0, 0x1C80, 0xDC41,
0x1400, 0xD4C1, 0xD581, 0x1540, 0xD701, 0x17C0, 0x1680, 0xD641,
0xD201, 0x12C0, 0x1380, 0xD341, 0x1100, 0xD1C1, 0xD081, 0x1040,
0xF001, 0x30C0, 0x3180, 0xF141, 0x3300, 0xF3C1, 0xF281, 0x3240,
0x3600, 0xF6C1, 0xF781, 0x3740, 0xF501, 0x35C0, 0x3480, 0xF441,
0x3C00, 0xFCC1, 0xFD81, 0x3D40, 0xFF01, 0x3FC0, 0x3E80, 0xFE41,
0xFA01, 0x3AC0, 0x3B80, 0xFB41, 0x3900, 0xF9C1, 0xF881, 0x3840,
0x2800, 0xE8C1, 0xE981, 0x2940, 0xEB01, 0x2BC0, 0x2A80, 0xEA41,
0xEE01, 0x2EC0, 0x2F80, 0xEF41, 0x2D00, 0xEDC1, 0xEC81, 0x2C40,
0xE401, 0x24C0, 0x2580, 0xE541, 0x2700, 0xE7C1, 0xE681, 0x2640,
0x2200, 0xE2C1, 0xE381, 0x2340, 0xE101, 0x21C0, 0x2080, 0xE041,
0xA001, 0x60C0, 0x6180, 0xA141, 0x6300, 0xA3C1, 0xA281, 0x6240,
0x6600, 0xA6C1, 0xA781, 0x6740, 0xA501, 0x65C0, 0x6480, 0xA441,
0x6C00, 0xACC1, 0xAD81, 0x6D40, 0xAF01, 0x6FC0, 0x6E80, 0xAE41,
0xAA01, 0x6AC0, 0x6B80, 0xAB41, 0x6900, 0xA9C1, 0xA881, 0x6840,
0x7800, 0xB8C1, 0xB981, 0x7940, 0xBB01, 0x7BC0, 0x7A80, 0xBA41,
0xBE01, 0x7EC0, 0x7F80, 0xBF41, 0x7D00, 0xBDC1, 0xBC81, 0x7C40,
0xB401, 0x74C0, 0x7580, 0xB541, 0x7700, 0xB7C1, 0xB681, 0x7640,
0x7200, 0xB2C1, 0xB381, 0x7340, 0xB101, 0x71C0, 0x7080, 0xB041,
0x5000, 0x90C1, 0x9181, 0x5140, 0x9301, 0x53C0, 0x5280, 0x9241,
0x9601, 0x56C0, 0x5780, 0x9741, 0x5500, 0x95C1, 0x9481, 0x5440,
0x9C01, 0x5CC0, 0x5D80, 0x9D41, 0x5F00, 0x9FC1, 0x9E81, 0x5E40,
0x5A00, 0x9AC1, 0x9B81, 0x5B40, 0x9901, 0x59C0, 0x5880, 0x9841,
0x8801, 0x48C0, 0x4980, 0x8941, 0x4B00, 0x8BC1, 0x8A81, 0x4A40,
0x4E00, 0x8EC1, 0x8F81, 0x4F40, 0x8D01, 0x4DC0, 0x4C80, 0x8C41,
0x4400, 0x84C1, 0x8581, 0x4540, 0x8701, 0x47C0, 0x4680, 0x8641,
0x8201, 0x42C0, 0x4380, 0x8341, 0x4100, 0x81C1, 0x8081, 0x4040}
func CRC16Hash(data []byte) uint16 {
var crc16 uint16
crc16 = 0xffff
for _, v := range data {
n := uint8(uint16(v) ^ crc16)
crc16 >>= 8
crc16 ^= MbTable[n]
}
return crc16
}
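A quick sketch of how these two helpers behave (a sketch against the master-side code, assuming the `github.com/chainreactors/spray/pkg` import path):

package main

import (
	"fmt"

	"github.com/chainreactors/spray/pkg" // assumed import path
)

func main() {
	// BakGenerator enumerates every unique substring of the name,
	// useful as backup-archive candidates: [a ab abc b bc c]
	fmt.Println(pkg.BakGenerator("abc"))

	// CRC16Hash is a cheap 16-bit checksum, e.g. for bucketing
	// near-identical response bodies.
	fmt.Printf("%04x\n", pkg.CRC16Hash([]byte("<html>404</html>")))
}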
func SafePath(dir, u string) string {
hasSlash := strings.HasPrefix(u, "/")
if hasSlash {
return dir + u[1:]
} else {
return dir + u
}
}
func RelaPath(base, u string) string {
// join relative paths; path.Join is avoided because runs like "////" may be meaningful routes that must not be collapsed.
// "" /a /a
// "" a /a
// / "" /
// /a/ b /a/b
// /a/ /b /a/b
// /a b /b
// /a /b /b
if u == "" {
return base
}
pathSlash := strings.HasPrefix(u, "/")
if base == "" {
if pathSlash {
return u[1:]
} else {
return "/" + u
}
} else if strings.HasSuffix(base, "/") {
if pathSlash {
return base + u[1:]
} else {
return base + u
}
} else {
if pathSlash {
return Dir(base) + u[1:]
} else {
return Dir(base) + u
}
}
}
func Dir(u string) string {
// safely take the directory part; repeated "//" is not collapsed, and this is not for getting the parent directory
// /a /
// /a/ /a/
// a/ a/
// aaa /
if strings.HasSuffix(u, "/") {
return u
} else if i := strings.LastIndex(u, "/"); i == -1 {
return "/"
} else {
return u[:i+1]
}
}
func FormatURL(base, u string) string {
if strings.HasPrefix(u, "http") {
parsed, err := url.Parse(u)
if err != nil {
return ""
}
return parsed.Path
} else if strings.HasPrefix(u, "//") {
parsed, err := url.Parse(u)
if err != nil {
return ""
}
return parsed.Path
} else if strings.HasPrefix(u, "/") {
// absolute path: no processing needed,
// returned as-is to skip the checks below
return u
} else if strings.HasPrefix(u, "./") {
// "./"相对目录拼接
return RelaPath(base, u[2:])
} else if strings.HasPrefix(u, "../") {
return path.Join(Dir(base), u)
} else {
// plain relative path join
return RelaPath(base, u)
}
}
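The comment tables above translate into the following behavior (a sketch against the master-side helpers, same assumed import path):

package main

import (
	"fmt"

	"github.com/chainreactors/spray/pkg" // assumed import path
)

func main() {
	fmt.Println(pkg.Dir("/a/b"))                       // /a/
	fmt.Println(pkg.RelaPath("/a/", "b"))              // /a/b
	fmt.Println(pkg.RelaPath("/a", "b"))               // /b (sibling of /a)
	fmt.Println(pkg.FormatURL("/a/", "./c"))           // /a/c
	fmt.Println(pkg.FormatURL("/a/", "../c"))          // /c
	fmt.Println(pkg.FormatURL("", "http://x.com/p/q")) // /p/q
}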
func BaseURL(u *url.URL) string {
return u.Scheme + "://" + u.Host
}
func RandomUA() string {
return randomUserAgent[rand.Intn(uacount)]
}
func CompareWithExpr(exp *vm.Program, params map[string]interface{}) bool {
res, err := expr.Run(exp, params)
if err != nil {
logs.Log.Warn(err.Error())
}
return res == true
}
func MatchWithGlobs(u string, globs []string) bool {
for _, glob := range globs {
ok, err := filepath.Match(glob, u)
if err == nil && ok {
return true
}
}
return false
}
func ParseRawResponse(raw []byte) (*http.Response, error) {
reader := bytes.NewReader(raw)
// parse the raw bytes with http.ReadResponse
resp, err := http.ReadResponse(bufio.NewReader(reader), nil)
if err != nil {
return nil, err
}
// do not close the body here: the caller still needs to read it
return resp, nil
}
func GetPresetWordList(key []string) []string {
var wordlist []string
for _, k := range key {
if v, ok := mask.SpecialWords[k]; ok {
wordlist = append(wordlist, v...)
}
}
return wordlist
}
func ParseExtension(s string) string {
if i := strings.Index(s, "."); i != -1 {
return s[i+1:]
}
return ""
}
// ParseStatus parses the input string and updates the preset status filters.
func ParseStatus(preset []int, changed string) []int {
if changed == "" {
return preset
}
parseToken := func(s string) (int, bool) {
s = strings.TrimSpace(s)
if strings.HasSuffix(s, "*") {
prefix := s[:len(s)-1]
if t, err := strconv.Atoi(prefix); err == nil {
return t, true // isPrefix = true
}
} else if t, err := strconv.Atoi(s); err == nil {
return t, false // isPrefix = false
}
return 0, false
}
if strings.HasPrefix(changed, "+") {
for _, s := range strings.Split(changed[1:], ",") {
if t, _ := parseToken(s); t != 0 {
preset = append(preset, t)
}
}
} else if strings.HasPrefix(changed, "!") {
for _, s := range strings.Split(changed[1:], ",") {
if t, _ := parseToken(s); t != 0 {
newPreset := preset[:0]
for _, val := range preset {
if val != t {
newPreset = append(newPreset, val)
}
}
preset = newPreset
}
}
} else {
preset = []int{}
for _, s := range strings.Split(changed, ",") {
if t, _ := parseToken(s); t != 0 {
preset = append(preset, t)
}
}
}
return UniqueInts(preset)
}
func UniqueInts(input []int) []int {
seen := make(map[int]bool)
result := make([]int, 0, len(input))
for _, val := range input {
if !seen[val] {
seen[val] = true
result = append(result, val)
}
}
return result
}
// StatusContain checks if a status matches any of the preset filters.
// Preset values < 100 are treated as prefix filters (e.g. 5 = 5xx, 51 = 51x).
func StatusContain(preset []int, status int) bool {
if len(preset) == 0 {
return true
}
for _, s := range preset {
if s < 10 {
if status/100 == s {
return true
}
} else if s < 100 {
if status/10 == s {
return true
}
} else if s == status {
return true
}
}
return false
}
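How the two halves combine in practice (sketch, same assumed import path):

package main

import (
	"fmt"

	"github.com/chainreactors/spray/pkg" // assumed import path
)

func main() {
	// "+" appends to the preset; "3*" is a prefix token meaning all 3xx.
	preset := pkg.ParseStatus([]int{200}, "+403,3*")
	fmt.Println(preset) // [200 403 3]

	fmt.Println(pkg.StatusContain(preset, 302)) // true (3xx prefix)
	fmt.Println(pkg.StatusContain(preset, 403)) // true (exact match)
	fmt.Println(pkg.StatusContain(preset, 500)) // false
}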
func LoadFileToSlice(filename string) ([]string, error) {
var ss []string
if dicts, ok := Dicts[filename]; ok {
if files.IsExist(filename) {
logs.Log.Warnf("load and overwrite %s from preset", filename)
}
return dicts, nil
}
content, err := ioutil.ReadFile(filename)
if err != nil {
return nil, err
}
ss = strings.Split(strings.TrimSpace(string(content)), "\n")
// normalize CRLF differences between Windows and Linux
for i, word := range ss {
ss[i] = strings.TrimSpace(word)
}
return ss, nil
}
func LoadRuleAndCombine(filename []string) (string, error) {
var bs bytes.Buffer
for _, f := range filename {
if data, ok := Rules[f]; ok {
bs.WriteString(strings.TrimSpace(data))
bs.WriteString("\n")
} else {
content, err := ioutil.ReadFile(f)
if err != nil {
return "", err
}
bs.Write(bytes.TrimSpace(content))
bs.WriteString("\n")
}
}
return bs.String(), nil
}
func loadFileWithCache(filename string) ([]string, error) {
if dict, ok := Dicts[filename]; ok {
return dict, nil
}
dict, err := LoadFileToSlice(filename)
if err != nil {
return nil, err
}
Dicts[filename] = dict
return dict, nil
}
func loadDictionaries(filenames []string) ([][]string, error) {
dicts := make([][]string, len(filenames))
for i, name := range filenames {
dict, err := loadFileWithCache(name)
if err != nil {
return nil, err
}
dicts[i] = dict
}
return dicts, nil
}
func LoadWordlist(word string, dictNames []string) ([]string, error) {
cacheKey := word + strings.Join(dictNames, ",")
if wl, ok := wordlistCache[cacheKey]; ok {
return wl, nil
}
dicts, err := loadDictionaries(dictNames)
if err != nil {
return nil, err
}
wl, err := mask.Run(word, dicts, nil)
if err != nil {
return nil, err
}
// cache under the same composite key used for the lookup above
wordlistCache[cacheKey] = wl
return wl, nil
}
func LoadRuleWithFiles(ruleFiles []string, filter string) ([]rule.Expression, error) {
cacheKey := strings.Join(ruleFiles, ",")
if rules, ok := ruleCache[cacheKey]; ok {
return rules, nil
}
var rules bytes.Buffer
for _, filename := range ruleFiles {
content, err := ioutil.ReadFile(filename)
if err != nil {
return nil, err
}
rules.Write(content)
rules.WriteString("\n")
}
expressions := rule.Compile(rules.String(), filter).Expressions
// populate the cache so repeated calls skip re-reading the files
ruleCache[cacheKey] = expressions
return expressions, nil
}
func WrapWordsFunc(f func(string) string) func(string) []string {
return func(s string) []string {
return []string{f(s)}
}
}
func SafeFilename(filename string) string {
filename = strings.ReplaceAll(filename, "http://", "")
filename = strings.ReplaceAll(filename, "https://", "")
filename = strings.ReplaceAll(filename, ":", "_")
filename = strings.ReplaceAll(filename, "/", "_")
return filename
}

View File

@ -1,20 +1,7 @@
//go:generate go run templates/templates_gen.go -t templates -o pkg/templates.go -need spray
//go:generate go run templates/templates_gen.go -t templates -o pkg/templates.go -need http,rule,mask
package main
import (
"github.com/chainreactors/spray/cmd"
"github.com/gookit/config/v2"
"github.com/gookit/config/v2/yaml"
//_ "net/http/pprof"
)
func init() {
config.WithOptions(func(opt *config.Options) {
opt.DecoderConfig.TagName = "config"
opt.ParseDefault = true
})
config.AddDriver(yaml.Driver)
}
import "github.com/chainreactors/spray/cmd"
func main() {
//f, _ := os.Create("cpu.txt")

@ -1 +0,0 @@
Subproject commit fe95f1f22d18b6cf2046b004191f5bd745f1c578