Compare commits

...

55 Commits

Author SHA1 Message Date
MadDogOwner
ffa03bfda1
feat(cloudreve_v4): add Cloudreve V4 driver (#8470 closes #8328 #8467)
* feat(cloudreve_v4): add Cloudreve V4 driver implementation

* fix(cloudreve_v4): update request handling to prevent token refresh loop

* feat(onedrive): implement retry logic for upload failures

* feat(cloudreve): implement retry logic for upload failures

* feat(cloudreve_v4): support cloud sorting

* fix(cloudreve_v4): improve token handling in Init method

* feat(cloudreve_v4): support share

* feat(cloudreve): support reference

* feat(cloudreve_v4): support version upload

* fix(cloudreve_v4): add SetBody in upLocal

* fix(cloudreve_v4): update URL structure in Link and FileUrlResp
2025-05-24 13:38:43 +08:00
Andy Hsu
630cf30af5 feat(115_open): implement rate limiting for API requests 2025-05-11 13:39:32 +08:00
Andy Hsu
bc5117fa4f fix(115_open): add delay in MakeDir function to handle rate limiting 2025-05-02 16:53:39 +08:00
yoclo
11e7284824
fix: prevent guest user from updating profile (#8447) 2025-04-29 23:14:16 +08:00
MadDogOwner
b2b91a9281
feat(doubao): add get_download_info API and download_api option (#8428) 2025-04-27 20:00:25 +08:00
MadDogOwner
f541489d7d
fix(netease_music): change ListResp size fields from string to int64 (#8417) 2025-04-27 19:59:30 +08:00
bigQY
6d9c554f6f
feat: add UseLargeThumbnail for 139 (#8424) 2025-04-27 19:58:45 +08:00
Mmx
e532ab31ef
fix: remove auth middleware for authn login (#8407) 2025-04-27 19:58:09 +08:00
Mmx
bf0705ec17
fix: shebang of entrypoint.sh (#8408) 2025-04-27 19:56:34 +08:00
gdm257
17b42b9fa4
fix(mega): use newest file for same filename (#8422 close #8344)
Mega supports duplicate filenames, but alist does not.
In the `List()` method, the driver would return multiple files with the same name.
That made alist use the oldest version of a file for listing/downloading.
So it is necessary to filter out older same-name files within a folder.
After this fix, all CRUD operations work normally.

Refs #8344
2025-04-27 19:56:04 +08:00
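A minimal sketch of the keep-newest filter this commit describes, assuming a simple name/mod-time shape for listed entries (illustrative, not the actual mega driver code):

```go
// Illustrative sketch only: filter a listing so that, for each filename,
// only the most recently modified entry survives.
package mega

import "time"

type file struct {
	name    string
	modTime time.Time
}

// filterNewest keeps the newest entry per filename, so listing and
// downloading always see the latest version of a duplicated name.
func filterNewest(files []file) []file {
	newest := make(map[string]file, len(files))
	for _, f := range files {
		if cur, ok := newest[f.name]; !ok || f.modTime.After(cur.modTime) {
			newest[f.name] = f
		}
	}
	out := make([]file, 0, len(newest))
	for _, f := range newest {
		out = append(out, f)
	}
	return out
}
```

Keying a map by filename makes the filter a single O(n) pass over the listing, and the newest duplicate wins regardless of the order Mega returns entries in.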
Sam- Pan(潘绍森)
41bdab49aa
fix(139): incorrect host (#8368)
* fix: correct new personal cloud path for 139Driver

* Update drivers/139/driver.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix bug

---------

Co-authored-by: panshaosen <19802021493@139.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: j2rong4cn <253551464@qq.com>
2025-04-19 14:29:12 +08:00
Lin Tianchuan
8f89c55aca
perf(local): avoid duplicate parsing of VideoThumbPos (#7812)
* feat(local): support percent for video thumbnail

The percentage determines the point in the video (as a percentage of the total duration) at which the thumbnail will be generated.

* feat(local): support both time and percent for video thumbnail

* refactor(local): avoid duplicate parsing of VideoThumbPos
2025-04-19 14:27:13 +08:00
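A hedged sketch of the parse-once idea behind this commit: the VideoThumbPos setting is parsed a single time into a small struct instead of being re-parsed on every thumbnail request. The type and helper names here are assumptions, not the local driver's actual code:

```go
// Illustrative sketch: parse VideoThumbPos once at driver init.
package local

import (
	"strconv"
	"strings"
)

// thumbPos holds a parsed VideoThumbPos value: either an absolute
// position in seconds, or a percentage of the total duration.
type thumbPos struct {
	isPercent bool
	value     float64 // seconds, or 0-100 if isPercent
}

func parseThumbPos(s string) (thumbPos, error) {
	if strings.HasSuffix(s, "%") {
		p, err := strconv.ParseFloat(strings.TrimSuffix(s, "%"), 64)
		return thumbPos{isPercent: true, value: p}, err
	}
	sec, err := strconv.ParseFloat(s, 64)
	return thumbPos{value: sec}, err
}

// seconds resolves the position against a concrete video duration.
func (t thumbPos) seconds(duration float64) float64 {
	if t.isPercent {
		return duration * t.value / 100
	}
	return t.value
}
```

With this, a VideoThumbPos of "20%" on a 300-second video resolves to 60 s, while "15" always means 15 s.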
wxnq
b449312da8
fix(docker_release): avoid duplicate occupation in docker image (#8393 close #8388)
* fix(ci): modify the method of adding permissions

* fix(build): modify the method of adding permissions(to keep up with ci)
2025-04-19 14:26:19 +08:00
MadDogOwner
52d4e8ec47
fix(lanzou): remove JavaScript comments from response data (#8386)
* feat(lanzou): add RemoveJSComment function to clean JavaScript comments from HTML

* feat(lanzou): remove comments from share page data in getFilesByShareUrl function

* fix(lanzou): optimize RemoveJSComment function to improve comment removal logic
2025-04-19 14:24:43 +08:00
New Future
28e5b5759e
feat(azure_blob): implement GetRootId interface in Addition struct (#8389)
fix failed get dir
2025-04-19 14:23:48 +08:00
asdfghjkl
477c43971f
feat(doubao_share): support doubao_share link (#8376)
Co-authored-by: anobodys <anobodys@gmail.com>
2025-04-19 14:22:43 +08:00
Yifan Gao
0a9921fa79
fix(aliyundrive_open): resolve file duplication issues and improve path handling (#8358)
* fix(aliyundrive_open): resolve file duplication issues and improve path handling

1. Fix file duplication by implementing a new removeDuplicateFiles method that cleans up duplicate files after operations
2. Change Move operation to use "ignore" for check_name_mode instead of "refuse" to allow moves when destination has same filename
3. Set Copy operation to handle duplicates by removing them after successful copy
4. Improve path handling for all file operations (Move, Rename, Put, MakeDir) by properly maintaining the full path of objects
5. Implement GetRoot interface for proper root object initialization with correct path
6. Add proper path management in List operation to ensure objects have correct paths
7. Fix path handling in error cases and improve logging of failures

* refactor(aliyundrive_open): change error logging to warnings for duplicate file removal

Updated the Move, Rename, and Copy methods to log warnings instead of errors when duplicate file removal fails, as the primary operations have already completed successfully. This improves the clarity of logs without affecting the functionality.

* Update drivers/aliyundrive_open/util.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-19 14:22:12 +08:00
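Sketched in outline, the cleanup from point 1 could look like the following; the list/remove helpers and field names are hypothetical, not the driver's real API:

```go
// Illustrative sketch, not the real aliyundrive_open driver code.
package aliyundriveopen

import (
	"context"

	log "github.com/sirupsen/logrus"
)

type entry struct{ ID, Name string }

type driverStub struct {
	list   func(ctx context.Context, dirID string) ([]entry, error)   // hypothetical helper
	remove func(ctx context.Context, fileID string) error             // hypothetical helper
}

// removeDuplicateFiles deletes every entry in dirID that shares name but
// is not keepID. Failures are only logged as warnings, matching the
// commit's note that the primary operation already succeeded.
func (d *driverStub) removeDuplicateFiles(ctx context.Context, dirID, name, keepID string) error {
	files, err := d.list(ctx, dirID)
	if err != nil {
		return err
	}
	for _, f := range files {
		if f.Name == name && f.ID != keepID {
			if err := d.remove(ctx, f.ID); err != nil {
				log.Warnf("remove duplicate %s failed: %v", f.ID, err)
			}
		}
	}
	return nil
}
```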
Lee CQ
88abb323cb
feat(url-tree): implement the Put interface to support adding links directly to the UrlTree on the web side (#8312)
* feat(url-tree): support PUT

* feat(url-tree): when updating the UrlTree, the path and content need to be split #8303

* fix: stdpath.Join call

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Andy Hsu <i@nn.ci>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-12 17:27:56 +08:00
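A minimal sketch of the path/content split the second bullet refers to, assuming the standard slash-separated paths that stdpath handles; the real UrlTree storage format may differ:

```go
// Illustrative sketch: separate a full object path into the tree node
// path and the leaf entry name, so URL content attaches to the right node.
package urltree

import stdpath "path"

func splitPath(full string) (dir, name string) {
	dir, name = stdpath.Split(full)
	return stdpath.Clean(dir), name
}
```

For example, splitPath("/movies/a.mp4") yields ("/movies", "a.mp4").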
asdfghjkl
f0b1aeaf8d
feat(doubao): support upload (#8302 close #8335)
* feat(doubao): support upload

* fix(doubao): fix file list cursor

* fix: handle strconv.Atoi err

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: anobodys <anobodys@gmail.com>
Co-authored-by: Andy Hsu <i@nn.ci>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-12 17:12:40 +08:00
Yifan Gao
c8470b9a2a
fix(fs): remove old target object from cache before updating (#8352) 2025-04-12 17:09:46 +08:00
Dgs
d0ee90cd11
fix(thunder): fix login issue (#8342 close #8288) 2025-04-12 17:05:58 +08:00
Dgs
544a7ea022
fix(pikpak&pikpak_share): fix WebPackageName (#8305) 2025-04-12 17:03:58 +08:00
j2rong4cn
4f5cabc725
feat: add h2c for http server (#8294)
* feat: add h2c for http server

* chore(config): add EnableH2c option
2025-04-12 17:02:51 +08:00
j2rong4cn
a2f266277c
fix(net): unexpected write (#8291 close #8281) 2025-04-12 17:01:52 +08:00
jerry
a4bfbf8a83
fix(ipfs): fix problems (#8252)
* fix: 🐛 (ipfs): fix the list error caused by an improper path join function

Use more standard path joining, fixing access failures for paths containing Chinese characters or symbols.

* refactor: naming conventions

* remove redundant conditional checks

* fix: refactor the code with the withresult method and add a get method to improve performance

* fix: allow the get method to fetch directories

Remove redundant checks.

* fix: allow copy, rename and move to overwrite

* fix: fix directories being deleted by the move method

* refactor: tidy up the code that returns Path

* fix: fix ipfs paths being inaccessible due to the get method

* fix: fix the get method's incorrect path handling

Fix the get method and remove an accidentally added directory.

* fix: fix path join

Use path.Join instead of filepath.Join to avoid OS-specific separator problems.

* fix: rm filepath ref

---------

Co-authored-by: Andy Hsu <i@nn.ci>
2025-04-12 17:01:30 +08:00
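The last two fixes turn on a standard-library distinction: path.Join always joins with forward slashes, while filepath.Join uses the host OS separator, which breaks slash-only IPFS API paths on Windows. A small, runnable illustration:

```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

func main() {
	// path.Join is for slash-separated, URL-style paths - always "/".
	fmt.Println(path.Join("/ipfs", "目录", "file.txt")) // /ipfs/目录/file.txt
	// filepath.Join follows the host OS: on Windows it yields
	// \ipfs\目录\file.txt, which the IPFS HTTP API cannot resolve.
	fmt.Println(filepath.Join("/ipfs", "目录", "file.txt"))
}
```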
j2rong4cn
ddffacf07b
perf: optimize IO read/write usage (#8243)
* perf: optimize IO read/write usage

* .

* Update drivers/139/driver.go

Co-authored-by: MadDogOwner <xiaoran@xrgzs.top>

---------

Co-authored-by: MadDogOwner <xiaoran@xrgzs.top>
2025-04-12 16:55:31 +08:00
xiaoQQya
3375c26c41
perf(quark_uc&quark_uc_tv): native proxy multithreading (#8287)
* perf(quark_uc): native proxy multithreading

* perf(quark_uc_tv): native proxy multithreading

* chore(fs): file query result add id
2025-04-03 20:50:29 +08:00
asdfghjkl
ab68faef44
fix(baidu_netdisk): add another video crack api (#8275)
Co-authored-by: anobodys <anobodys@gmail.com>
2025-04-03 20:44:49 +08:00
New Future
2e21df0661
feat(driver): add Azure Blob Storage driver (#8261)
* add azure-blob driver

* fix nested folders copy

* feat(driver): add Azure Blob Storage driver

Implement the Azure Blob Storage driver, supporting the following features:
- Initialize the connection with shared-key authentication
- List directories and files
- Generate temporary SAS URLs for file access
- Create directories
- Move and rename files/folders
- Copy files/folders
- Delete files/folders
- Upload files with progress tracking

This driver lets users seamlessly access and manage data in Azure Blob Storage through the AList platform.

* feat(driver): update help doc for Azure Blob

* doc(readme): add new driver

* Update drivers/azure_blob/driver.go

fix(azure): fix name check

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update README.md

doc(readme): fix the link

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix(azure): fix log and link

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-03 20:43:21 +08:00
MadDogOwner
af18cb138b
feat(139): add option ReportRealSize (#8244 close #8141)
* feat(139): handle family upload errors

* feat(139): add option `ReportRealSize`

* Update drivers/139/driver.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-03 20:41:59 +08:00
j2rong4cn
31c55a2adf
fix(archive): unable to preview (#8248)
* fix(archive): unable to preview

* fix bug
2025-04-03 20:41:05 +08:00
MadDogOwner
465dd1703d
feat(cloudreve): s3 policy support (#8245)
* feat(cloudreve): s3 policy support

* fix(cloudreve): correct potential off-by-one error in `etags` initialization
2025-04-03 20:40:19 +08:00
j2rong4cn
a6304285b6
fix: revert "refactor(net): pass request header" (#8269)
5be50e77d9
2025-04-03 20:35:52 +08:00
YangXu
affd0cecd1
fix(pikpak&pikpak_share): update algorithms (#8278) 2025-04-03 20:35:14 +08:00
MadDogOwner
37640221c0
fix(doubao): update file size type to int64 (#8289) 2025-04-03 20:34:27 +08:00
Andy Hsu
e4bd223d1c fix(deps): update 115-sdk-go to v0.1.5 2025-04-03 20:29:53 +08:00
jerry
0cde4e73d6
feat(ipfs): better ipfs support (#8225)
* feat: better ipfs support

Fixed MFS CRUD and added IPNS support.

* Update driver.go

clean up
2025-03-27 23:25:23 +08:00
Ljcbaby
7b62dcb88c
fix(baidu_netdisk): duplicate retry (#8210 redo #7972, link #8180) 2025-03-27 23:22:55 +08:00
never lee
c38dc6df7c
fix(115_open): support multipart upload (#8229)
Co-authored-by: neverlee <neverlea@formail.com>
2025-03-27 23:22:08 +08:00
MadDogOwner
5668e4a4ea
feat(doubao): add Doubao driver (#8232 closes #8020 #8206)
* feat(doubao): implement List()

* feat(doubao): implement Link()

* feat(doubao): implement MakeDir()

* refactor(doubao): add type Object to store key

* feat(doubao): implement Move()

* feat(doubao): implement Rename()

* feat(doubao): implement Remove()
2025-03-27 23:21:42 +08:00
KirCute
1335f80362
feat(archive): support multipart archives (#8184 close #8015)
* feat(archive): multipart support & sevenzip tool

* feat(archive): rardecode tool

* feat(archive): support decompress multi-selected

* fix(archive): decompress response filter internal

* feat(archive): support multipart zip

* fix: more applicable AcceptedMultipartExtensions interface
2025-03-27 23:20:44 +08:00
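A hedged sketch of what matching multipart-archive volumes can look like; the patterns below are common naming conventions for RAR/7z/split-zip volumes, not necessarily the exact AcceptedMultipartExtensions implementation added here:

```go
// Illustrative sketch: recognize one volume of a multipart archive by name.
package archive

import "regexp"

var multipartVolume = []*regexp.Regexp{
	regexp.MustCompile(`(?i)\.part\d+\.rar$`), // name.part1.rar, name.part2.rar, ...
	regexp.MustCompile(`(?i)\.7z\.\d{3}$`),    // name.7z.001, name.7z.002, ...
	regexp.MustCompile(`(?i)\.z\d{2}$`),       // name.z01, name.z02, ... (split zip)
}

// isMultipartVolume reports whether a filename looks like one volume
// of a multipart archive.
func isMultipartVolume(name string) bool {
	for _, p := range multipartVolume {
		if p.MatchString(name) {
			return true
		}
	}
	return false
}
```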
KirCute
704d3854df
feat(alist_v3): support forward archive requests (#8230)
* feat(alist_v3): support forward archive requests

* fix: encode all inner path
2025-03-27 23:18:34 +08:00
MadDogOwner
44cc71d354
fix(cloudreve): enable SetContentLength for uploading to local policy (#8228 close #8174)
* fix(cloudreve): upload failure to return error msg instead of deletion success

* fix(cloudreve): enable SetContentLength for uploading to local policy

* refactor(cloudreve): move local policy upload logic to utils for better error handling

* refactor(cloudreve): unified upload code style

* refactor(cloudreve): improve user agent handling
2025-03-27 23:18:15 +08:00
KirCute
9a9aee9ac6
feat(alias): support writing to non-ambiguous paths (#8216)
* feat(alias): support writing to non-ambiguous paths

* feat(alias): support extract concurrency

* fix(alias): extract url no pass query
2025-03-27 23:17:45 +08:00
KirCute
4fcc3a187e
fix(traffic): duplicate semaphore release when uploading (#8211 close #8180) 2025-03-27 23:15:47 +08:00
Ljcbaby
10a76c701d
fix(db): support postgres trust/peer mode (#8198 close #8066) 2025-03-27 23:15:04 +08:00
KirCute
6e13923225
fix(sftp-server): postgre cannot store control characters (#8188 close #8186) 2025-03-27 23:14:36 +08:00
Andy Hsu
32890da29f fix(115_open): upgrade 115-sdk-go dependency to v0.1.4 2025-03-21 19:06:09 +08:00
Andy Hsu
758554a40f fix(115_open): upgrade 115-sdk-go dependency to v0.1.3 (close #8169) 2025-03-19 21:47:42 +08:00
Andy Hsu
4563aea47e fix(115_open): rename delay to take effect (close #8156) 2025-03-18 22:25:04 +08:00
Andy Hsu
35d6f3b8fc fix(115_open): upgrade sdk (close #8151) 2025-03-18 22:21:50 +08:00
j2rong4cn
b4e6ab12d9
refactor: FilterReadMeScripts (#8154 close #8150)
* refactor: FilterReadMeScripts

* .
2025-03-18 22:02:33 +08:00
Andy Hsu
3499c4db87
feat: 115 open driver (#8139)
* wip: 115 open

* chore(go.mod): update 115-sdk-go dependency version

* feat(115_open): implement directory management and file operations

* chore(go.mod): update 115-sdk-go dependency to v0.1.1 and adjust callback handling in driver

* chore: rename driver
2025-03-17 00:52:09 +08:00
hshpy
d20f41d687
fix: missing handling of RangeReadCloser (#8146) 2025-03-16 22:14:44 +08:00
Andy Hsu
d16ba65f42 fix(lang): initialize configuration in LangCmd before generating language JSON file 2025-03-16 16:37:33 +08:00
127 changed files with 8608 additions and 1209 deletions

View File

@ -32,10 +32,9 @@ RUN apk update && \
/opt/aria2/.aria2/tracker.sh ; \
rm -rf /var/cache/apk/*
COPY --from=builder /app/bin/alist ./
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /opt/alist/alist && \
chmod +x /entrypoint.sh && /entrypoint.sh version
COPY --chmod=755 --from=builder /app/bin/alist ./
COPY --chmod=755 entrypoint.sh /entrypoint.sh
RUN /entrypoint.sh version
ENV PUID=0 PGID=0 UMASK=022 RUN_ARIA2=${INSTALL_ARIA2}
VOLUME /opt/alist/data/

View File

@ -24,10 +24,9 @@ RUN apk update && \
/opt/aria2/.aria2/tracker.sh ; \
rm -rf /var/cache/apk/*
COPY /build/${TARGETPLATFORM}/alist ./
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /opt/alist/alist && \
chmod +x /entrypoint.sh && /entrypoint.sh version
COPY --chmod=755 /build/${TARGETPLATFORM}/alist ./
COPY --chmod=755 entrypoint.sh /entrypoint.sh
RUN /entrypoint.sh version
ENV PUID=0 PGID=0 UMASK=022 RUN_ARIA2=${INSTALL_ARIA2}
VOLUME /opt/alist/data/

View File

@ -77,6 +77,7 @@ English | [中文](./README_cn.md) | [日本語](./README_ja.md) | [Contributing
- [x] [Dropbox](https://www.dropbox.com/)
- [x] [FeijiPan](https://www.feijipan.com/)
- [x] [dogecloud](https://www.dogecloud.com/product/oss)
- [x] [Azure Blob Storage](https://azure.microsoft.com/products/storage/blobs)
- [x] Easy to deploy and out-of-the-box
- [x] File preview (PDF, markdown, code, plain text, ...)
- [x] Image preview in gallery mode

View File

@ -12,6 +12,7 @@ import (
"strings"
_ "github.com/alist-org/alist/v3/drivers"
"github.com/alist-org/alist/v3/internal/bootstrap"
"github.com/alist-org/alist/v3/internal/bootstrap/data"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/op"
@ -137,6 +138,7 @@ var LangCmd = &cobra.Command{
Use: "lang",
Short: "Generate language json file",
Run: func(cmd *cobra.Command, args []string) {
bootstrap.InitConfig()
err := os.MkdirAll("lang", 0777)
if err != nil {
utils.Log.Fatalf("failed create folder: %s", err.Error())

View File

@ -4,9 +4,6 @@ import (
"context"
"errors"
"fmt"
ftpserver "github.com/KirCute/ftpserverlib-pasvportmap"
"github.com/KirCute/sftpd-alist"
"github.com/alist-org/alist/v3/internal/fs"
"net"
"net/http"
"os"
@ -16,14 +13,19 @@ import (
"syscall"
"time"
ftpserver "github.com/KirCute/ftpserverlib-pasvportmap"
"github.com/KirCute/sftpd-alist"
"github.com/alist-org/alist/v3/cmd/flags"
"github.com/alist-org/alist/v3/internal/bootstrap"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/fs"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/alist-org/alist/v3/server"
"github.com/gin-gonic/gin"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"golang.org/x/net/http2"
"golang.org/x/net/http2/h2c"
)
// ServerCmd represents the server command
@ -47,11 +49,15 @@ the address is defined in config file`,
r := gin.New()
r.Use(gin.LoggerWithWriter(log.StandardLogger().Out), gin.RecoveryWithWriter(log.StandardLogger().Out))
server.Init(r)
var httpHandler http.Handler = r
if conf.Conf.Scheme.EnableH2c {
httpHandler = h2c.NewHandler(r, &http2.Server{})
}
var httpSrv, httpsSrv, unixSrv *http.Server
if conf.Conf.Scheme.HttpPort != -1 {
httpBase := fmt.Sprintf("%s:%d", conf.Conf.Scheme.Address, conf.Conf.Scheme.HttpPort)
utils.Log.Infof("start HTTP server @ %s", httpBase)
httpSrv = &http.Server{Addr: httpBase, Handler: r}
httpSrv = &http.Server{Addr: httpBase, Handler: httpHandler}
go func() {
err := httpSrv.ListenAndServe()
if err != nil && !errors.Is(err, http.ErrServerClosed) {
@ -72,7 +78,7 @@ the address is defined in config file`,
}
if conf.Conf.Scheme.UnixFile != "" {
utils.Log.Infof("start unix server @ %s", conf.Conf.Scheme.UnixFile)
unixSrv = &http.Server{Handler: r}
unixSrv = &http.Server{Handler: httpHandler}
go func() {
listener, err := net.Listen("unix", conf.Conf.Scheme.UnixFile)
if err != nil {

View File

@ -405,7 +405,7 @@ func (d *Pan115) UploadByMultipart(ctx context.Context, params *driver115.Upload
if _, err = tmpF.ReadAt(buf, chunk.Offset); err != nil && !errors.Is(err, io.EOF) {
continue
}
if part, err = bucket.UploadPart(imur, driver.NewLimitedUploadStream(ctx, bytes.NewBuffer(buf)),
if part, err = bucket.UploadPart(imur, driver.NewLimitedUploadStream(ctx, bytes.NewReader(buf)),
chunk.Size, chunk.Number, driver115.OssOption(params, ossToken)...); err == nil {
break
}

View File: drivers/115_open/driver.go (new file, 335 lines)

@ -0,0 +1,335 @@
package _115_open
import (
"context"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/cmd/flags"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/utils"
sdk "github.com/xhofe/115-sdk-go"
"golang.org/x/time/rate"
)
type Open115 struct {
model.Storage
Addition
client *sdk.Client
limiter *rate.Limiter
}
func (d *Open115) Config() driver.Config {
return config
}
func (d *Open115) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Open115) Init(ctx context.Context) error {
d.client = sdk.New(sdk.WithRefreshToken(d.Addition.RefreshToken),
sdk.WithAccessToken(d.Addition.AccessToken),
sdk.WithOnRefreshToken(func(s1, s2 string) {
d.Addition.AccessToken = s1
d.Addition.RefreshToken = s2
op.MustSaveDriverStorage(d)
}))
if flags.Debug || flags.Dev {
d.client.SetDebug(true)
}
_, err := d.client.UserInfo(ctx)
if err != nil {
return err
}
if d.Addition.LimitRate > 0 {
d.limiter = rate.NewLimiter(rate.Limit(d.Addition.LimitRate), 1)
}
return nil
}
func (d *Open115) WaitLimit(ctx context.Context) error {
if d.limiter != nil {
return d.limiter.Wait(ctx)
}
return nil
}
func (d *Open115) Drop(ctx context.Context) error {
return nil
}
func (d *Open115) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var res []model.Obj
pageSize := int64(200)
offset := int64(0)
for {
if err := d.WaitLimit(ctx); err != nil {
return nil, err
}
resp, err := d.client.GetFiles(ctx, &sdk.GetFilesReq{
CID: dir.GetID(),
Limit: pageSize,
Offset: offset,
ASC: d.Addition.OrderDirection == "asc",
O: d.Addition.OrderBy,
// Cur: 1,
ShowDir: true,
})
if err != nil {
return nil, err
}
res = append(res, utils.MustSliceConvert(resp.Data, func(src sdk.GetFilesResp_File) model.Obj {
obj := Obj(src)
return &obj
})...)
if len(res) >= int(resp.Count) {
break
}
offset += pageSize
}
return res, nil
}
func (d *Open115) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if err := d.WaitLimit(ctx); err != nil {
return nil, err
}
var ua string
if args.Header != nil {
ua = args.Header.Get("User-Agent")
}
if ua == "" {
ua = base.UserAgent
}
obj, ok := file.(*Obj)
if !ok {
return nil, fmt.Errorf("can't convert obj")
}
pc := obj.Pc
resp, err := d.client.DownURL(ctx, pc, ua)
if err != nil {
return nil, err
}
u, ok := resp[obj.GetID()]
if !ok {
return nil, fmt.Errorf("can't get link")
}
return &model.Link{
URL: u.URL.URL,
Header: http.Header{
"User-Agent": []string{ua},
},
}, nil
}
func (d *Open115) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
if err := d.WaitLimit(ctx); err != nil {
return nil, err
}
resp, err := d.client.Mkdir(ctx, parentDir.GetID(), dirName)
if err != nil {
return nil, err
}
return &Obj{
Fid: resp.FileID,
Pid: parentDir.GetID(),
Fn: dirName,
Fc: "0",
Upt: time.Now().Unix(),
Uet: time.Now().Unix(),
UpPt: time.Now().Unix(),
}, nil
}
func (d *Open115) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
if err := d.WaitLimit(ctx); err != nil {
return nil, err
}
_, err := d.client.Move(ctx, &sdk.MoveReq{
FileIDs: srcObj.GetID(),
ToCid: dstDir.GetID(),
})
if err != nil {
return nil, err
}
return srcObj, nil
}
func (d *Open115) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
if err := d.WaitLimit(ctx); err != nil {
return nil, err
}
_, err := d.client.UpdateFile(ctx, &sdk.UpdateFileReq{
FileID: srcObj.GetID(),
FileNma: newName,
})
if err != nil {
return nil, err
}
obj, ok := srcObj.(*Obj)
if ok {
obj.Fn = newName
}
return srcObj, nil
}
func (d *Open115) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
if err := d.WaitLimit(ctx); err != nil {
return nil, err
}
_, err := d.client.Copy(ctx, &sdk.CopyReq{
PID: dstDir.GetID(),
FileID: srcObj.GetID(),
NoDupli: "1",
})
if err != nil {
return nil, err
}
return srcObj, nil
}
func (d *Open115) Remove(ctx context.Context, obj model.Obj) error {
if err := d.WaitLimit(ctx); err != nil {
return err
}
_obj, ok := obj.(*Obj)
if !ok {
return fmt.Errorf("can't convert obj")
}
_, err := d.client.DelFile(ctx, &sdk.DelFileReq{
FileIDs: _obj.GetID(),
ParentID: _obj.Pid,
})
if err != nil {
return err
}
return nil
}
func (d *Open115) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
if err := d.WaitLimit(ctx); err != nil {
return err
}
tempF, err := file.CacheFullInTempFile()
if err != nil {
return err
}
// cal full sha1
sha1, err := utils.HashReader(utils.SHA1, tempF)
if err != nil {
return err
}
_, err = tempF.Seek(0, io.SeekStart)
if err != nil {
return err
}
// pre 128k sha1
sha1128k, err := utils.HashReader(utils.SHA1, io.LimitReader(tempF, 128*1024))
if err != nil {
return err
}
_, err = tempF.Seek(0, io.SeekStart)
if err != nil {
return err
}
// 1. Init
resp, err := d.client.UploadInit(ctx, &sdk.UploadInitReq{
FileName: file.GetName(),
FileSize: file.GetSize(),
Target: dstDir.GetID(),
FileID: strings.ToUpper(sha1),
PreID: strings.ToUpper(sha1128k),
})
if err != nil {
return err
}
if resp.Status == 2 {
return nil
}
// 2. two way verify
if utils.SliceContains([]int{6, 7, 8}, resp.Status) {
signCheck := strings.Split(resp.SignCheck, "-") // "sign_check": "2392148-2392298" means the SHA1 of the bytes from 2392148 to 2392298 (inclusive)
start, err := strconv.ParseInt(signCheck[0], 10, 64)
if err != nil {
return err
}
end, err := strconv.ParseInt(signCheck[1], 10, 64)
if err != nil {
return err
}
_, err = tempF.Seek(start, io.SeekStart)
if err != nil {
return err
}
signVal, err := utils.HashReader(utils.SHA1, io.LimitReader(tempF, end-start+1))
if err != nil {
return err
}
_, err = tempF.Seek(0, io.SeekStart)
if err != nil {
return err
}
resp, err = d.client.UploadInit(ctx, &sdk.UploadInitReq{
FileName: file.GetName(),
FileSize: file.GetSize(),
Target: dstDir.GetID(),
FileID: strings.ToUpper(sha1),
PreID: strings.ToUpper(sha1128k),
SignKey: resp.SignKey,
SignVal: strings.ToUpper(signVal),
})
if err != nil {
return err
}
if resp.Status == 2 {
return nil
}
}
// 3. get upload token
tokenResp, err := d.client.UploadGetToken(ctx)
if err != nil {
return err
}
// 4. upload
err = d.multpartUpload(ctx, tempF, file, up, tokenResp, resp)
if err != nil {
return err
}
return nil
}
// func (d *Open115) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
// // TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
// return nil, errs.NotImplement
// }
// func (d *Open115) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
// // TODO list args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
// return nil, errs.NotImplement
// }
// func (d *Open115) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// // TODO return link of file args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
// return nil, errs.NotImplement
// }
// func (d *Open115) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
// // TODO extract args.InnerPath path in the archive srcObj to the dstDir location, optional
// // a folder with the same name as the archive file needs to be created to store the extracted results if args.PutIntoNewDir
// // return errs.NotImplement to use an internal archive tool
// return nil, errs.NotImplement
// }
//func (d *Template) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*Open115)(nil)

View File: drivers/115_open/meta.go (new file, 37 lines)

@ -0,0 +1,37 @@
package _115_open
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
// Usually one of two
driver.RootID
// define other
RefreshToken string `json:"refresh_token" required:"true"`
OrderBy string `json:"order_by" type:"select" options:"file_name,file_size,user_utime,file_type"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc"`
LimitRate float64 `json:"limit_rate" type:"float" default:"1" help:"limit all api request rate ([limit]r/1s)"`
AccessToken string
}
var config = driver.Config{
Name: "115 Open",
LocalSort: false,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
NeedMs: false,
DefaultRoot: "0",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: false,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Open115{}
})
}

View File: drivers/115_open/types.go (new file, 59 lines)

@ -0,0 +1,59 @@
package _115_open
import (
"time"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
sdk "github.com/xhofe/115-sdk-go"
)
type Obj sdk.GetFilesResp_File
// Thumb implements model.Thumb.
func (o *Obj) Thumb() string {
return o.Thumbnail
}
// CreateTime implements model.Obj.
func (o *Obj) CreateTime() time.Time {
return time.Unix(o.UpPt, 0)
}
// GetHash implements model.Obj.
func (o *Obj) GetHash() utils.HashInfo {
return utils.NewHashInfo(utils.SHA1, o.Sha1)
}
// GetID implements model.Obj.
func (o *Obj) GetID() string {
return o.Fid
}
// GetName implements model.Obj.
func (o *Obj) GetName() string {
return o.Fn
}
// GetPath implements model.Obj.
func (o *Obj) GetPath() string {
return ""
}
// GetSize implements model.Obj.
func (o *Obj) GetSize() int64 {
return o.FS
}
// IsDir implements model.Obj.
func (o *Obj) IsDir() bool {
return o.Fc == "0"
}
// ModTime implements model.Obj.
func (o *Obj) ModTime() time.Time {
return time.Unix(o.Upt, 0)
}
var _ model.Obj = (*Obj)(nil)
var _ model.Thumb = (*Obj)(nil)

View File: drivers/115_open/upload.go (new file, 140 lines)

@ -0,0 +1,140 @@
package _115_open
import (
"context"
"encoding/base64"
"io"
"time"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/aliyun/aliyun-oss-go-sdk/oss"
"github.com/avast/retry-go"
sdk "github.com/xhofe/115-sdk-go"
)
func calPartSize(fileSize int64) int64 {
var partSize int64 = 20 * utils.MB
if fileSize > partSize {
if fileSize > 1*utils.TB { // file Size over 1TB
partSize = 5 * utils.GB // file part size 5GB
} else if fileSize > 768*utils.GB { // over 768GB
partSize = 109951163 // ≈ 104.8576MB, splits 1TB into 10,000 parts
} else if fileSize > 512*utils.GB { // over 512GB
partSize = 82463373 // ≈ 78.6432MB
} else if fileSize > 384*utils.GB { // over 384GB
partSize = 54975582 // ≈ 52.4288MB
} else if fileSize > 256*utils.GB { // over 256GB
partSize = 41231687 // ≈ 39.3216MB
} else if fileSize > 128*utils.GB { // over 128GB
partSize = 27487791 // ≈ 26.2144MB
}
}
return partSize
}
func (d *Open115) singleUpload(ctx context.Context, tempF model.File, tokenResp *sdk.UploadGetTokenResp, initResp *sdk.UploadInitResp) error {
ossClient, err := oss.New(tokenResp.Endpoint, tokenResp.AccessKeyId, tokenResp.AccessKeySecret, oss.SecurityToken(tokenResp.SecurityToken))
if err != nil {
return err
}
bucket, err := ossClient.Bucket(initResp.Bucket)
if err != nil {
return err
}
err = bucket.PutObject(initResp.Object, tempF,
oss.Callback(base64.StdEncoding.EncodeToString([]byte(initResp.Callback.Value.Callback))),
oss.CallbackVar(base64.StdEncoding.EncodeToString([]byte(initResp.Callback.Value.CallbackVar))),
)
return err
}
// type CallbackResult struct {
// State bool `json:"state"`
// Code int `json:"code"`
// Message string `json:"message"`
// Data struct {
// PickCode string `json:"pick_code"`
// FileName string `json:"file_name"`
// FileSize int64 `json:"file_size"`
// FileID string `json:"file_id"`
// ThumbURL string `json:"thumb_url"`
// Sha1 string `json:"sha1"`
// Aid int `json:"aid"`
// Cid string `json:"cid"`
// } `json:"data"`
// }
func (d *Open115) multpartUpload(ctx context.Context, tempF model.File, stream model.FileStreamer, up driver.UpdateProgress, tokenResp *sdk.UploadGetTokenResp, initResp *sdk.UploadInitResp) error {
fileSize := stream.GetSize()
chunkSize := calPartSize(fileSize)
ossClient, err := oss.New(tokenResp.Endpoint, tokenResp.AccessKeyId, tokenResp.AccessKeySecret, oss.SecurityToken(tokenResp.SecurityToken))
if err != nil {
return err
}
bucket, err := ossClient.Bucket(initResp.Bucket)
if err != nil {
return err
}
imur, err := bucket.InitiateMultipartUpload(initResp.Object, oss.Sequential())
if err != nil {
return err
}
partNum := (stream.GetSize() + chunkSize - 1) / chunkSize
parts := make([]oss.UploadPart, partNum)
offset := int64(0)
for i := int64(1); i <= partNum; i++ {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
partSize := chunkSize
if i == partNum {
partSize = fileSize - (i-1)*chunkSize
}
rd := utils.NewMultiReadable(io.LimitReader(stream, partSize))
err = retry.Do(func() error {
_ = rd.Reset()
rateLimitedRd := driver.NewLimitedUploadStream(ctx, rd)
part, err := bucket.UploadPart(imur, rateLimitedRd, partSize, int(i))
if err != nil {
return err
}
parts[i-1] = part
return nil
},
retry.Attempts(3),
retry.DelayType(retry.BackOffDelay),
retry.Delay(time.Second))
if err != nil {
return err
}
if i == partNum {
offset = fileSize
} else {
offset += partSize
}
up(float64(offset) / float64(fileSize))
}
// callbackRespBytes := make([]byte, 1024)
_, err = bucket.CompleteMultipartUpload(
imur,
parts,
oss.Callback(base64.StdEncoding.EncodeToString([]byte(initResp.Callback.Value.Callback))),
oss.CallbackVar(base64.StdEncoding.EncodeToString([]byte(initResp.Callback.Value.CallbackVar))),
// oss.CallbackResult(&callbackRespBytes),
)
if err != nil {
return err
}
return nil
}

View File: drivers/115_open/util.go (new file, 3 lines)

@ -0,0 +1,3 @@
package _115_open
// do others that not defined in Driver interface

View File

@ -2,11 +2,8 @@ package _123
import (
"context"
"crypto/md5"
"encoding/base64"
"encoding/hex"
"fmt"
"io"
"net/http"
"net/url"
"sync"
@ -18,6 +15,7 @@ import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials"
@ -187,25 +185,12 @@ func (d *Pan123) Remove(ctx context.Context, obj model.Obj) error {
func (d *Pan123) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
etag := file.GetHash().GetHash(utils.MD5)
var err error
if len(etag) < utils.MD5.Width {
// const DEFAULT int64 = 10485760
h := md5.New()
// need to calculate md5 of the full content
tempFile, err := file.CacheFullInTempFile()
_, etag, err = stream.CacheFullInTempFileAndHash(file, utils.MD5)
if err != nil {
return err
}
defer func() {
_ = tempFile.Close()
}()
if _, err = utils.CopyWithBuffer(h, tempFile); err != nil {
return err
}
_, err = tempFile.Seek(0, io.SeekStart)
if err != nil {
return err
}
etag = hex.EncodeToString(h.Sum(nil))
}
data := base.Json{
"driveId": 0,

View File

@ -4,7 +4,6 @@ import (
"context"
"fmt"
"io"
"math"
"net/http"
"strconv"
@ -70,27 +69,33 @@ func (d *Pan123) completeS3(ctx context.Context, upReq *UploadResp, file model.F
}
func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.FileStreamer, up driver.UpdateProgress) error {
chunkSize := int64(1024 * 1024 * 16)
tmpF, err := file.CacheFullInTempFile()
if err != nil {
return err
}
// fetch s3 pre signed urls
chunkCount := int(math.Ceil(float64(file.GetSize()) / float64(chunkSize)))
size := file.GetSize()
chunkSize := min(size, 16*utils.MB)
chunkCount := int(size / chunkSize)
lastChunkSize := size % chunkSize
if lastChunkSize > 0 {
chunkCount++
} else {
lastChunkSize = chunkSize
}
// only 1 batch is allowed
isMultipart := chunkCount > 1
batchSize := 1
getS3UploadUrl := d.getS3Auth
if isMultipart {
if chunkCount > 1 {
batchSize = 10
getS3UploadUrl = d.getS3PreSignedUrls
}
limited := driver.NewLimitedUploadStream(ctx, file)
for i := 1; i <= chunkCount; i += batchSize {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
start := i
end := i + batchSize
if end > chunkCount+1 {
end = chunkCount + 1
}
end := min(i+batchSize, chunkCount+1)
s3PreSignedUrls, err := getS3UploadUrl(ctx, upReq, start, end)
if err != nil {
return err
@ -102,9 +107,9 @@ func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.Fi
}
curSize := chunkSize
if j == chunkCount {
curSize = file.GetSize() - (int64(chunkCount)-1)*chunkSize
curSize = lastChunkSize
}
err = d.uploadS3Chunk(ctx, upReq, s3PreSignedUrls, j, end, io.LimitReader(limited, chunkSize), curSize, false, getS3UploadUrl)
err = d.uploadS3Chunk(ctx, upReq, s3PreSignedUrls, j, end, io.NewSectionReader(tmpF, chunkSize*int64(j-1), curSize), curSize, false, getS3UploadUrl)
if err != nil {
return err
}
@ -115,12 +120,12 @@ func (d *Pan123) newUpload(ctx context.Context, upReq *UploadResp, file model.Fi
return d.completeS3(ctx, upReq, file, chunkCount > 1)
}
func (d *Pan123) uploadS3Chunk(ctx context.Context, upReq *UploadResp, s3PreSignedUrls *S3PreSignedURLs, cur, end int, reader io.Reader, curSize int64, retry bool, getS3UploadUrl func(ctx context.Context, upReq *UploadResp, start int, end int) (*S3PreSignedURLs, error)) error {
func (d *Pan123) uploadS3Chunk(ctx context.Context, upReq *UploadResp, s3PreSignedUrls *S3PreSignedURLs, cur, end int, reader *io.SectionReader, curSize int64, retry bool, getS3UploadUrl func(ctx context.Context, upReq *UploadResp, start int, end int) (*S3PreSignedURLs, error)) error {
uploadUrl := s3PreSignedUrls.Data.PreSignedUrls[strconv.Itoa(cur)]
if uploadUrl == "" {
return fmt.Errorf("upload url is empty, s3PreSignedUrls: %+v", s3PreSignedUrls)
}
req, err := http.NewRequest("PUT", uploadUrl, reader)
req, err := http.NewRequest("PUT", uploadUrl, driver.NewLimitedUploadStream(ctx, reader))
if err != nil {
return err
}
@ -143,6 +148,7 @@ func (d *Pan123) uploadS3Chunk(ctx context.Context, upReq *UploadResp, s3PreSign
}
s3PreSignedUrls.Data.PreSignedUrls = newS3PreSignedUrls.Data.PreSignedUrls
// retry
reader.Seek(0, io.SeekStart)
return d.uploadS3Chunk(ctx, upReq, s3PreSignedUrls, cur, end, reader, curSize, true, getS3UploadUrl)
}
if res.StatusCode != http.StatusOK {

View File

@ -2,19 +2,19 @@ package _139
import (
"context"
"encoding/base64"
"encoding/xml"
"fmt"
"io"
"net/http"
"path"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
streamPkg "github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/cron"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/alist-org/alist/v3/pkg/utils/random"
@ -24,9 +24,10 @@ import (
type Yun139 struct {
model.Storage
Addition
cron *cron.Cron
Account string
ref *Yun139
cron *cron.Cron
Account string
ref *Yun139
PersonalCloudHost string
}
func (d *Yun139) Config() driver.Config {
@ -39,13 +40,36 @@ func (d *Yun139) GetAddition() driver.Additional {
func (d *Yun139) Init(ctx context.Context) error {
if d.ref == nil {
if d.Authorization == "" {
if len(d.Authorization) == 0 {
return fmt.Errorf("authorization is empty")
}
err := d.refreshToken()
if err != nil {
return err
}
// Query Route Policy
var resp QueryRoutePolicyResp
_, err = d.requestRoute(base.Json{
"userInfo": base.Json{
"userType": 1,
"accountType": 1,
"accountName": d.Account},
"modAddrType": 1,
}, &resp)
if err != nil {
return err
}
for _, policyItem := range resp.Data.RoutePolicyList {
if policyItem.ModName == "personal" {
d.PersonalCloudHost = policyItem.HttpsUrl
break
}
}
if len(d.PersonalCloudHost) == 0 {
return fmt.Errorf("PersonalCloudHost is empty")
}
d.cron = cron.NewCron(time.Hour * 12)
d.cron.Do(func() {
err := d.refreshToken()
@ -71,28 +95,7 @@ func (d *Yun139) Init(ctx context.Context) error {
default:
return errs.NotImplement
}
if d.ref != nil {
return nil
}
decode, err := base64.StdEncoding.DecodeString(d.Authorization)
if err != nil {
return err
}
decodeStr := string(decode)
splits := strings.Split(decodeStr, ":")
if len(splits) < 2 {
return fmt.Errorf("authorization is invalid, splits < 2")
}
d.Account = splits[1]
_, err = d.post("/orchestration/personalCloud/user/v1.0/qryUserExternInfo", base.Json{
"qryUserExternInfoReq": base.Json{
"commonAccountInfo": base.Json{
"account": d.getAccount(),
"accountType": 1,
},
},
}, nil)
return err
return nil
}
func (d *Yun139) InitReference(storage driver.Driver) error {
@ -159,7 +162,7 @@ func (d *Yun139) MakeDir(ctx context.Context, parentDir model.Obj, dirName strin
"type": "folder",
"fileRenameMode": "force_rename",
}
pathname := "/hcy/file/create"
pathname := "/file/create"
_, err = d.personalPost(pathname, data, nil)
case MetaPersonal:
data := base.Json{
@ -212,7 +215,7 @@ func (d *Yun139) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj,
"fileIds": []string{srcObj.GetID()},
"toParentFileId": dstDir.GetID(),
}
pathname := "/hcy/file/batchMove"
pathname := "/file/batchMove"
_, err := d.personalPost(pathname, data, nil)
if err != nil {
return nil, err
@ -289,7 +292,7 @@ func (d *Yun139) Rename(ctx context.Context, srcObj model.Obj, newName string) e
"name": newName,
"description": "",
}
pathname := "/hcy/file/update"
pathname := "/file/update"
_, err = d.personalPost(pathname, data, nil)
case MetaPersonal:
var data base.Json
@ -389,7 +392,7 @@ func (d *Yun139) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
"fileIds": []string{srcObj.GetID()},
"toParentFileId": dstDir.GetID(),
}
pathname := "/hcy/file/batchCopy"
pathname := "/file/batchCopy"
_, err := d.personalPost(pathname, data, nil)
return err
case MetaPersonal:
@ -429,7 +432,7 @@ func (d *Yun139) Remove(ctx context.Context, obj model.Obj) error {
data := base.Json{
"fileIds": []string{obj.GetID()},
}
pathname := "/hcy/recyclebin/batchTrash"
pathname := "/recyclebin/batchTrash"
_, err := d.personalPost(pathname, data, nil)
return err
case MetaGroup:
@ -502,23 +505,15 @@ func (d *Yun139) Remove(ctx context.Context, obj model.Obj) error {
}
}
const (
_ = iota //ignore first value by assigning to blank identifier
KB = 1 << (10 * iota)
MB
GB
TB
)
func (d *Yun139) getPartSize(size int64) int64 {
if d.CustomUploadPartSize != 0 {
return d.CustomUploadPartSize
}
// the cloud drive caps the number of upload parts
if size/GB > 30 {
return 512 * MB
if size/utils.GB > 30 {
return 512 * utils.MB
}
return 100 * MB
return 100 * utils.MB
}
func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
@ -526,29 +521,28 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
case MetaPersonalNew:
var err error
fullHash := stream.GetHash().GetHash(utils.SHA256)
if len(fullHash) <= 0 {
tmpF, err := stream.CacheFullInTempFile()
if err != nil {
return err
}
fullHash, err = utils.HashFile(utils.SHA256, tmpF)
if len(fullHash) != utils.SHA256.Width {
_, fullHash, err = streamPkg.CacheFullInTempFileAndHash(stream, utils.SHA256)
if err != nil {
return err
}
}
partInfos := []PartInfo{}
var partSize = d.getPartSize(stream.GetSize())
part := (stream.GetSize() + partSize - 1) / partSize
if part == 0 {
size := stream.GetSize()
var partSize = d.getPartSize(size)
part := size / partSize
if size%partSize > 0 {
part++
} else if part == 0 {
part = 1
}
partInfos := make([]PartInfo, 0, part)
for i := int64(0); i < part; i++ {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
start := i * partSize
byteSize := stream.GetSize() - start
byteSize := size - start
if byteSize > partSize {
byteSize = partSize
}
@ -576,13 +570,13 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
"contentType": "application/octet-stream",
"parallelUpload": false,
"partInfos": firstPartInfos,
"size": stream.GetSize(),
"size": size,
"parentFileId": dstDir.GetID(),
"name": stream.GetName(),
"type": "file",
"fileRenameMode": "auto_rename",
}
pathname := "/hcy/file/create"
pathname := "/file/create"
var resp PersonalUploadResp
_, err = d.personalPost(pathname, data, &resp)
if err != nil {
@ -619,7 +613,7 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
"accountType": 1,
},
}
pathname := "/hcy/file/getUploadUrl"
pathname := "/file/getUploadUrl"
var moreresp PersonalUploadUrlResp
_, err = d.personalPost(pathname, moredata, &moreresp)
if err != nil {
@ -629,7 +623,7 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
}
// Progress
p := driver.NewProgress(stream.GetSize(), up)
p := driver.NewProgress(size, up)
rateLimited := driver.NewLimitedUploadStream(ctx, stream)
// upload all parts
@ -670,7 +664,7 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
"fileId": resp.Data.FileId,
"uploadId": resp.Data.UploadId,
}
_, err = d.personalPost("/hcy/file/complete", data, nil)
_, err = d.personalPost("/file/complete", data, nil)
if err != nil {
return err
}
@ -740,14 +734,20 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
break
}
}
var reportSize int64
if d.ReportRealSize {
reportSize = stream.GetSize()
} else {
reportSize = 0
}
data := base.Json{
"manualRename": 2,
"operation": 0,
"fileCount": 1,
"totalSize": 0, // 去除上传大小限制
"totalSize": reportSize,
"uploadContentList": []base.Json{{
"contentName": stream.GetName(),
"contentSize": 0, // 去除上传大小限制
"contentSize": reportSize,
// "digest": "5a3231986ce7a6b46e408612d385bafa"
}},
"parentCatalogID": dstDir.GetID(),
@ -765,10 +765,10 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
"operation": 0,
"path": path.Join(dstDir.GetPath(), dstDir.GetID()),
"seqNo": random.String(32), //序列号不能为空
"totalSize": 0,
"totalSize": reportSize,
"uploadContentList": []base.Json{{
"contentName": stream.GetName(),
"contentSize": 0,
"contentSize": reportSize,
// "digest": "5a3231986ce7a6b46e408612d385bafa"
}},
})
@ -779,13 +779,18 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
if err != nil {
return err
}
if resp.Data.Result.ResultCode != "0" {
return fmt.Errorf("get file upload url failed with result code: %s, message: %s", resp.Data.Result.ResultCode, resp.Data.Result.ResultDesc)
}
size := stream.GetSize()
// Progress
p := driver.NewProgress(stream.GetSize(), up)
var partSize = d.getPartSize(stream.GetSize())
part := (stream.GetSize() + partSize - 1) / partSize
if part == 0 {
p := driver.NewProgress(size, up)
var partSize = d.getPartSize(size)
part := size / partSize
if size%partSize > 0 {
part++
} else if part == 0 {
part = 1
}
rateLimited := driver.NewLimitedUploadStream(ctx, stream)
@ -795,10 +800,7 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
}
start := i * partSize
byteSize := stream.GetSize() - start
if byteSize > partSize {
byteSize = partSize
}
byteSize := min(size-start, partSize)
limitReader := io.LimitReader(rateLimited, byteSize)
// Update Progress
@ -810,7 +812,7 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
req = req.WithContext(ctx)
req.Header.Set("Content-Type", "text/plain;name="+unicode(stream.GetName()))
req.Header.Set("contentSize", strconv.FormatInt(stream.GetSize(), 10))
req.Header.Set("contentSize", strconv.FormatInt(size, 10))
req.Header.Set("range", fmt.Sprintf("bytes=%d-%d", start, start+byteSize-1))
req.Header.Set("uploadtaskID", resp.Data.UploadResult.UploadTaskID)
req.Header.Set("rangeType", "0")
@ -820,13 +822,23 @@ func (d *Yun139) Put(ctx context.Context, dstDir model.Obj, stream model.FileStr
if err != nil {
return err
}
_ = res.Body.Close()
log.Debugf("%+v", res)
if res.StatusCode != http.StatusOK {
res.Body.Close()
return fmt.Errorf("unexpected status code: %d", res.StatusCode)
}
bodyBytes, err := io.ReadAll(res.Body)
if err != nil {
return fmt.Errorf("error reading response body: %v", err)
}
var result InterLayerUploadResult
err = xml.Unmarshal(bodyBytes, &result)
if err != nil {
return fmt.Errorf("error parsing XML: %v", err)
}
if result.ResultCode != 0 {
return fmt.Errorf("upload failed with result code: %d, message: %s", result.ResultCode, result.Msg)
}
}
return nil
default:
return errs.NotImplement
@ -844,7 +856,7 @@ func (d *Yun139) Other(ctx context.Context, args model.OtherArgs) (interface{},
}
switch args.Method {
case "video_preview":
uri = "/hcy/videoPreview/getPreviewInfo"
uri = "/videoPreview/getPreviewInfo"
default:
return nil, errs.NotSupport
}

View File

@ -12,6 +12,8 @@ type Addition struct {
Type string `json:"type" type:"select" options:"personal_new,family,group,personal" default:"personal_new"`
CloudID string `json:"cloud_id"`
CustomUploadPartSize int64 `json:"custom_upload_part_size" type:"number" default:"0" help:"0 for auto"`
ReportRealSize bool `json:"report_real_size" type:"bool" default:"true" help:"Enable to report the real file size during upload"`
UseLargeThumbnail bool `json:"use_large_thumbnail" type:"bool" default:"false" help:"Enable to use large thumbnail for images"`
}
var config = driver.Config{

View File

@ -143,6 +143,13 @@ type UploadResp struct {
} `json:"data"`
}
type InterLayerUploadResult struct {
XMLName xml.Name `xml:"result"`
Text string `xml:",chardata"`
ResultCode int `xml:"resultCode"`
Msg string `xml:"msg"`
}
type CloudContent struct {
ContentID string `json:"contentID"`
//Modifier string `json:"modifier"`
@ -278,11 +285,30 @@ type PersonalUploadUrlResp struct {
}
}
type RefreshTokenResp struct {
XMLName xml.Name `xml:"root"`
Return string `xml:"return"`
Token string `xml:"token"`
Expiretime int32 `xml:"expiretime"`
AccessToken string `xml:"accessToken"`
Desc string `xml:"desc"`
type QueryRoutePolicyResp struct {
Success bool `json:"success"`
Code string `json:"code"`
Message string `json:"message"`
Data struct {
RoutePolicyList []struct {
SiteID string `json:"siteID"`
SiteCode string `json:"siteCode"`
ModName string `json:"modName"`
HttpUrl string `json:"httpUrl"`
HttpsUrl string `json:"httpsUrl"`
EnvID string `json:"envID"`
ExtInfo string `json:"extInfo"`
HashName string `json:"hashName"`
ModAddrType int `json:"modAddrType"`
} `json:"routePolicyList"`
} `json:"data"`
}
type RefreshTokenResp struct {
XMLName xml.Name `xml:"root"`
Return string `xml:"return"`
Token string `xml:"token"`
Expiretime int32 `xml:"expiretime"`
AccessToken string `xml:"accessToken"`
Desc string `xml:"desc"`
}

View File

@ -67,6 +67,7 @@ func (d *Yun139) refreshToken() error {
if len(splits) < 3 {
return fmt.Errorf("authorization is invalid, splits < 3")
}
d.Account = splits[1]
strs := strings.Split(splits[2], "|")
if len(strs) < 4 {
return fmt.Errorf("authorization is invalid, strs < 4")
@ -156,6 +157,64 @@ func (d *Yun139) request(pathname string, method string, callback base.ReqCallba
}
return res.Body(), nil
}
func (d *Yun139) requestRoute(data interface{}, resp interface{}) ([]byte, error) {
url := "https://user-njs.yun.139.com/user/route/qryRoutePolicy"
req := base.RestyClient.R()
randStr := random.String(16)
ts := time.Now().Format("2006-01-02 15:04:05")
callback := func(req *resty.Request) {
req.SetBody(data)
}
if callback != nil {
callback(req)
}
body, err := utils.Json.Marshal(req.Body)
if err != nil {
return nil, err
}
sign := calSign(string(body), ts, randStr)
svcType := "1"
if d.isFamily() {
svcType = "2"
}
req.SetHeaders(map[string]string{
"Accept": "application/json, text/plain, */*",
"CMS-DEVICE": "default",
"Authorization": "Basic " + d.getAuthorization(),
"mcloud-channel": "1000101",
"mcloud-client": "10701",
//"mcloud-route": "001",
"mcloud-sign": fmt.Sprintf("%s,%s,%s", ts, randStr, sign),
//"mcloud-skey":"",
"mcloud-version": "7.14.0",
"Origin": "https://yun.139.com",
"Referer": "https://yun.139.com/w/",
"x-DeviceInfo": "||9|7.14.0|chrome|120.0.0.0|||windows 10||zh-CN|||",
"x-huawei-channelSrc": "10000034",
"x-inner-ntwk": "2",
"x-m4c-caller": "PC",
"x-m4c-src": "10002",
"x-SvcType": svcType,
"Inner-Hcy-Router-Https": "1",
})
var e BaseResp
req.SetResult(&e)
res, err := req.Execute(http.MethodPost, url)
log.Debugln(res.String())
if !e.Success {
return nil, errors.New(e.Message)
}
if resp != nil {
err = utils.Json.Unmarshal(res.Body(), resp)
if err != nil {
return nil, err
}
}
return res.Body(), nil
}
func (d *Yun139) post(pathname string, data interface{}, resp interface{}) ([]byte, error) {
return d.request(pathname, http.MethodPost, func(req *resty.Request) {
req.SetBody(data)
@ -390,7 +449,7 @@ func unicode(str string) string {
}
func (d *Yun139) personalRequest(pathname string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
url := "https://personal-kd-njs.yun.139.com" + pathname
url := d.getPersonalCloudHost() + pathname
req := base.RestyClient.R()
randStr := random.String(16)
ts := time.Now().Format("2006-01-02 15:04:05")
@ -416,8 +475,6 @@ func (d *Yun139) personalRequest(pathname string, method string, callback base.R
"Mcloud-Route": "001",
"Mcloud-Sign": fmt.Sprintf("%s,%s,%s", ts, randStr, sign),
"Mcloud-Version": "7.14.0",
"Origin": "https://yun.139.com",
"Referer": "https://yun.139.com/w/",
"x-DeviceInfo": "||9|7.14.0|chrome|120.0.0.0|||windows 10||zh-CN|||",
"x-huawei-channelSrc": "10000034",
"x-inner-ntwk": "2",
@ -479,7 +536,7 @@ func (d *Yun139) personalGetFiles(fileId string) ([]model.Obj, error) {
"parentFileId": fileId,
}
var resp PersonalListResp
_, err := d.personalPost("/hcy/file/list", data, &resp)
_, err := d.personalPost("/file/list", data, &resp)
if err != nil {
return nil, err
}
@ -499,7 +556,15 @@ func (d *Yun139) personalGetFiles(fileId string) ([]model.Obj, error) {
} else {
var Thumbnails = item.Thumbnails
var ThumbnailUrl string
if len(Thumbnails) > 0 {
if d.UseLargeThumbnail {
for _, thumb := range Thumbnails {
if strings.Contains(thumb.Style, "Large") {
ThumbnailUrl = thumb.Url
break
}
}
}
if ThumbnailUrl == "" && len(Thumbnails) > 0 {
ThumbnailUrl = Thumbnails[len(Thumbnails)-1].Url
}
f = &model.ObjThumb{
@ -527,7 +592,7 @@ func (d *Yun139) personalGetLink(fileId string) (string, error) {
data := base.Json{
"fileId": fileId,
}
res, err := d.personalPost("/hcy/file/getDownloadUrl",
res, err := d.personalPost("/file/getDownloadUrl",
data, nil)
if err != nil {
return "", err
@ -552,3 +617,9 @@ func (d *Yun139) getAccount() string {
}
return d.Account
}
func (d *Yun139) getPersonalCloudHost() string {
if d.ref != nil {
return d.ref.getPersonalCloudHost()
}
return d.PersonalCloudHost
}

View File

@ -3,16 +3,15 @@ package _189pc
import (
"bytes"
"context"
"crypto/md5"
"encoding/base64"
"encoding/hex"
"encoding/xml"
"fmt"
"io"
"math"
"net/http"
"net/http/cookiejar"
"net/url"
"os"
"regexp"
"sort"
"strconv"
@ -28,6 +27,7 @@ import (
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/internal/setting"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/errgroup"
"github.com/alist-org/alist/v3/pkg/utils"
@ -473,12 +473,8 @@ func (y *Cloud189PC) refreshSession() (err error) {
// normal upload
// files of size 0 cannot be uploaded
func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
var sliceSize = partSize(file.GetSize())
count := int(math.Ceil(float64(file.GetSize()) / float64(sliceSize)))
lastPartSize := file.GetSize() % sliceSize
if file.GetSize() > 0 && lastPartSize == 0 {
lastPartSize = sliceSize
}
size := file.GetSize()
sliceSize := partSize(size)
params := Params{
"parentFolderId": dstDir.GetID(),
@ -512,25 +508,29 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
retry.DelayType(retry.BackOffDelay))
sem := semaphore.NewWeighted(3)
fileMd5 := md5.New()
silceMd5 := md5.New()
count := int(size / sliceSize)
lastPartSize := size % sliceSize
if lastPartSize > 0 {
count++
} else {
lastPartSize = sliceSize
}
fileMd5 := utils.MD5.NewFunc()
silceMd5 := utils.MD5.NewFunc()
silceMd5Hexs := make([]string, 0, count)
teeReader := io.TeeReader(file, io.MultiWriter(fileMd5, silceMd5))
byteSize := sliceSize
for i := 1; i <= count; i++ {
if utils.IsCanceled(upCtx) {
break
}
if err = sem.Acquire(ctx, 1); err != nil {
break
}
byteData := make([]byte, sliceSize)
if i == count {
byteData = byteData[:lastPartSize]
byteSize = lastPartSize
}
byteData := make([]byte, byteSize)
// read the chunk
silceMd5.Reset()
if _, err := io.ReadFull(io.TeeReader(file, io.MultiWriter(fileMd5, silceMd5)), byteData); err != io.EOF && err != nil {
if _, err := io.ReadFull(teeReader, byteData); err != io.EOF && err != nil {
sem.Release(1)
return nil, err
}
@ -541,6 +541,9 @@ func (y *Cloud189PC) StreamUpload(ctx context.Context, dstDir model.Obj, file mo
partInfo := fmt.Sprintf("%d-%s", i, base64.StdEncoding.EncodeToString(md5Bytes))
threadG.Go(func(ctx context.Context) error {
if err = sem.Acquire(ctx, 1); err != nil {
return err
}
defer sem.Release(1)
uploadUrls, err := y.GetMultiUploadUrls(ctx, isFamily, initMultiUpload.Data.UploadFileID, partInfo)
if err != nil {
@ -607,24 +610,43 @@ func (y *Cloud189PC) RapidUpload(ctx context.Context, dstDir model.Obj, stream m
// 快传
func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
tempFile, err := file.CacheFullInTempFile()
if err != nil {
return nil, err
var (
cache = file.GetFile()
tmpF *os.File
err error
)
size := file.GetSize()
if _, ok := cache.(io.ReaderAt); !ok && size > 0 {
tmpF, err = os.CreateTemp(conf.Conf.TempDir, "file-*")
if err != nil {
return nil, err
}
defer func() {
_ = tmpF.Close()
_ = os.Remove(tmpF.Name())
}()
cache = tmpF
}
var sliceSize = partSize(file.GetSize())
count := int(math.Ceil(float64(file.GetSize()) / float64(sliceSize)))
lastSliceSize := file.GetSize() % sliceSize
if file.GetSize() > 0 && lastSliceSize == 0 {
sliceSize := partSize(size)
count := int(size / sliceSize)
lastSliceSize := size % sliceSize
if lastSliceSize > 0 {
count++
} else {
lastSliceSize = sliceSize
}
// step.1 compute the required information first
byteSize := sliceSize
fileMd5 := md5.New()
silceMd5 := md5.New()
silceMd5Hexs := make([]string, 0, count)
fileMd5 := utils.MD5.NewFunc()
sliceMd5 := utils.MD5.NewFunc()
sliceMd5Hexs := make([]string, 0, count)
partInfos := make([]string, 0, count)
writers := []io.Writer{fileMd5, sliceMd5}
if tmpF != nil {
writers = append(writers, tmpF)
}
written := int64(0)
for i := 1; i <= count; i++ {
if utils.IsCanceled(ctx) {
return nil, ctx.Err()
@ -634,19 +656,31 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
byteSize = lastSliceSize
}
silceMd5.Reset()
if _, err := utils.CopyWithBufferN(io.MultiWriter(fileMd5, silceMd5), tempFile, byteSize); err != nil && err != io.EOF {
n, err := utils.CopyWithBufferN(io.MultiWriter(writers...), file, byteSize)
written += n
if err != nil && err != io.EOF {
return nil, err
}
md5Byte := silceMd5.Sum(nil)
silceMd5Hexs = append(silceMd5Hexs, strings.ToUpper(hex.EncodeToString(md5Byte)))
md5Byte := sliceMd5.Sum(nil)
sliceMd5Hexs = append(sliceMd5Hexs, strings.ToUpper(hex.EncodeToString(md5Byte)))
partInfos = append(partInfos, fmt.Sprint(i, "-", base64.StdEncoding.EncodeToString(md5Byte)))
sliceMd5.Reset()
}
if tmpF != nil {
if size > 0 && written != size {
return nil, errs.NewErr(err, "CreateTempFile failed, incoming stream actual size= %d, expect = %d ", written, size)
}
_, err = tmpF.Seek(0, io.SeekStart)
if err != nil {
return nil, errs.NewErr(err, "CreateTempFile failed, can't seek to 0 ")
}
}
fileMd5Hex := strings.ToUpper(hex.EncodeToString(fileMd5.Sum(nil)))
sliceMd5Hex := fileMd5Hex
if file.GetSize() > sliceSize {
sliceMd5Hex = strings.ToUpper(utils.GetMD5EncodeStr(strings.Join(silceMd5Hexs, "\n")))
if size > sliceSize {
sliceMd5Hex = strings.ToUpper(utils.GetMD5EncodeStr(strings.Join(sliceMd5Hexs, "\n")))
}
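// Hedged sketch of the digest scheme above, as inferred from this diff (not
// an official 189 API spec): for multi-slice files the uppercase per-slice
// MD5 hex strings are joined with "\n" and hashed again, while a single-slice
// file reuses the whole-file MD5 (the driver keys this on size > sliceSize).
// Imports assumed: crypto/md5, encoding/hex, strings.
//
//	func sliceDigest(fileMd5Hex string, sliceMd5Hexs []string) string {
//		if len(sliceMd5Hexs) <= 1 {
//			return fileMd5Hex
//		}
//		sum := md5.Sum([]byte(strings.Join(sliceMd5Hexs, "\n")))
//		return strings.ToUpper(hex.EncodeToString(sum[:]))
//	}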
fullUrl := UPLOAD_URL
@ -712,7 +746,7 @@ func (y *Cloud189PC) FastUpload(ctx context.Context, dstDir model.Obj, file mode
}
// step.4 upload the slices
_, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false, io.NewSectionReader(tempFile, offset, byteSize), isFamily)
_, err = y.put(ctx, uploadUrl.RequestURL, uploadUrl.Headers, false, io.NewSectionReader(cache, offset, byteSize), isFamily)
if err != nil {
return err
}
@ -794,11 +828,7 @@ func (y *Cloud189PC) GetMultiUploadUrls(ctx context.Context, isFamily bool, uplo
// legacy upload; the family cloud does not support overwrite
func (y *Cloud189PC) OldUpload(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, isFamily bool, overwrite bool) (model.Obj, error) {
tempFile, err := file.CacheFullInTempFile()
if err != nil {
return nil, err
}
fileMd5, err := utils.HashFile(utils.MD5, tempFile)
tempFile, fileMd5, err := stream.CacheFullInTempFileAndHash(file, utils.MD5)
if err != nil {
return nil, err
}
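// CacheFullInTempFileAndHash collapses the old cache-then-hash two-pass flow
// into a single pass. A rough equivalent, as a sketch only — the real helper
// lives in internal/stream and may differ (imports assumed: crypto/md5,
// encoding/hex, io, os):
//
//	func cacheAndHash(r io.Reader, tempDir string) (*os.File, string, error) {
//		tmp, err := os.CreateTemp(tempDir, "file-*")
//		if err != nil {
//			return nil, "", err
//		}
//		h := md5.New()
//		// one read of the stream feeds both the temp file and the hasher
//		if _, err = io.Copy(io.MultiWriter(tmp, h), r); err == nil {
//			_, err = tmp.Seek(0, io.SeekStart)
//		}
//		if err != nil {
//			_ = tmp.Close()
//			_ = os.Remove(tmp.Name())
//			return nil, "", err
//		}
//		return tmp, hex.EncodeToString(h.Sum(nil)), nil
//	}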

View File

@ -3,6 +3,7 @@ package alias
import (
"context"
"errors"
stdpath "path"
"strings"
"github.com/alist-org/alist/v3/internal/driver"
@ -126,8 +127,46 @@ func (d *Alias) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (
return nil, errs.ObjectNotFound
}
func (d *Alias) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
if !d.Writable {
return errs.PermissionDenied
}
reqPath, err := d.getReqPath(ctx, parentDir, true)
if err == nil {
return fs.MakeDir(ctx, stdpath.Join(*reqPath, dirName))
}
if errs.IsNotImplement(err) {
return errors.New("same-name dirs cannot make sub-dir")
}
return err
}
func (d *Alias) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
if !d.Writable {
return errs.PermissionDenied
}
srcPath, err := d.getReqPath(ctx, srcObj, false)
if errs.IsNotImplement(err) {
return errors.New("same-name files cannot be moved")
}
if err != nil {
return err
}
dstPath, err := d.getReqPath(ctx, dstDir, true)
if errs.IsNotImplement(err) {
return errors.New("same-name dirs cannot be moved to")
}
if err != nil {
return err
}
return fs.Move(ctx, *srcPath, *dstPath)
}
func (d *Alias) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
reqPath, err := d.getReqPath(ctx, srcObj)
if !d.Writable {
return errs.PermissionDenied
}
reqPath, err := d.getReqPath(ctx, srcObj, false)
if err == nil {
return fs.Rename(ctx, *reqPath, newName)
}
@ -137,8 +176,33 @@ func (d *Alias) Rename(ctx context.Context, srcObj model.Obj, newName string) er
return err
}
func (d *Alias) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
if !d.Writable {
return errs.PermissionDenied
}
srcPath, err := d.getReqPath(ctx, srcObj, false)
if errs.IsNotImplement(err) {
return errors.New("same-name files cannot be copied")
}
if err != nil {
return err
}
dstPath, err := d.getReqPath(ctx, dstDir, true)
if errs.IsNotImplement(err) {
return errors.New("same-name dirs cannot be copied to")
}
if err != nil {
return err
}
_, err = fs.Copy(ctx, *srcPath, *dstPath)
return err
}
func (d *Alias) Remove(ctx context.Context, obj model.Obj) error {
reqPath, err := d.getReqPath(ctx, obj)
if !d.Writable {
return errs.PermissionDenied
}
reqPath, err := d.getReqPath(ctx, obj, false)
if err == nil {
return fs.Remove(ctx, *reqPath)
}
@ -148,4 +212,110 @@ func (d *Alias) Remove(ctx context.Context, obj model.Obj) error {
return err
}
func (d *Alias) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer, up driver.UpdateProgress) error {
if !d.Writable {
return errs.PermissionDenied
}
reqPath, err := d.getReqPath(ctx, dstDir, true)
if err == nil {
return fs.PutDirectly(ctx, *reqPath, s)
}
if errs.IsNotImplement(err) {
return errors.New("same-name dirs cannot be Put")
}
return err
}
func (d *Alias) PutURL(ctx context.Context, dstDir model.Obj, name, url string) error {
if !d.Writable {
return errs.PermissionDenied
}
reqPath, err := d.getReqPath(ctx, dstDir, true)
if err == nil {
return fs.PutURL(ctx, *reqPath, name, url)
}
if errs.IsNotImplement(err) {
return errors.New("same-name files cannot offline download")
}
return err
}
func (d *Alias) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
root, sub := d.getRootAndPath(obj.GetPath())
dsts, ok := d.pathMap[root]
if !ok {
return nil, errs.ObjectNotFound
}
for _, dst := range dsts {
meta, err := d.getArchiveMeta(ctx, dst, sub, args)
if err == nil {
return meta, nil
}
}
return nil, errs.NotImplement
}
func (d *Alias) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
root, sub := d.getRootAndPath(obj.GetPath())
dsts, ok := d.pathMap[root]
if !ok {
return nil, errs.ObjectNotFound
}
for _, dst := range dsts {
l, err := d.listArchive(ctx, dst, sub, args)
if err == nil {
return l, nil
}
}
return nil, errs.NotImplement
}
func (d *Alias) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// How to stay compatible when one aliased driver supports driver-level extraction and another does not:
// If the archive lives in a driver without driver-level extraction, GetArchiveMeta returns errs.NotImplement, the extract URL is prefixed with /ae, and Extract is never called.
// If the archive lives in a driver with driver-level extraction, GetArchiveMeta returns a valid value, the extract URL is prefixed with /ad, and Extract is called.
root, sub := d.getRootAndPath(obj.GetPath())
dsts, ok := d.pathMap[root]
if !ok {
return nil, errs.ObjectNotFound
}
for _, dst := range dsts {
link, err := d.extract(ctx, dst, sub, args)
if err == nil {
if !args.Redirect && len(link.URL) > 0 {
if d.DownloadConcurrency > 0 {
link.Concurrency = d.DownloadConcurrency
}
if d.DownloadPartSize > 0 {
link.PartSize = d.DownloadPartSize * utils.KB
}
}
return link, nil
}
}
return nil, errs.NotImplement
}
func (d *Alias) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) error {
if !d.Writable {
return errs.PermissionDenied
}
srcPath, err := d.getReqPath(ctx, srcObj, false)
if errs.IsNotImplement(err) {
return errors.New("same-name files cannot be decompressed")
}
if err != nil {
return err
}
dstPath, err := d.getReqPath(ctx, dstDir, true)
if errs.IsNotImplement(err) {
return errors.New("same-name dirs cannot be decompressed to")
}
if err != nil {
return err
}
_, err = fs.ArchiveDecompress(ctx, *srcPath, *dstPath, args)
return err
}
var _ driver.Driver = (*Alias)(nil)

View File

@ -13,13 +13,14 @@ type Addition struct {
ProtectSameName bool `json:"protect_same_name" default:"true" required:"false" help:"Protects same-name files from Delete or Rename"`
DownloadConcurrency int `json:"download_concurrency" default:"0" required:"false" type:"number" help:"Need to enable proxy"`
DownloadPartSize int `json:"download_part_size" default:"0" type:"number" required:"false" help:"Need to enable proxy. Unit: KB"`
Writable bool `json:"writable" type:"bool" default:"false"`
}
var config = driver.Config{
Name: "Alias",
LocalSort: true,
NoCache: true,
NoUpload: true,
NoUpload: false,
DefaultRoot: "/",
ProxyRangeOption: true,
}

View File

@ -3,9 +3,11 @@ package alias
import (
"context"
"fmt"
"net/url"
stdpath "path"
"strings"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/fs"
"github.com/alist-org/alist/v3/internal/model"
@ -125,9 +127,9 @@ func (d *Alias) link(ctx context.Context, dst, sub string, args model.LinkArgs)
return link, err
}
func (d *Alias) getReqPath(ctx context.Context, obj model.Obj) (*string, error) {
func (d *Alias) getReqPath(ctx context.Context, obj model.Obj, isParent bool) (*string, error) {
root, sub := d.getRootAndPath(obj.GetPath())
if sub == "" {
if sub == "" && !isParent {
return nil, errs.NotSupport
}
dsts, ok := d.pathMap[root]
@ -156,3 +158,68 @@ func (d *Alias) getReqPath(ctx context.Context, obj model.Obj) (*string, error)
}
return reqPath, nil
}
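// Illustration of the new isParent flag (hypothetical call sites): the alias
// root itself (sub == "") is now accepted when it only acts as the parent of
// an operation, but is still rejected as a direct operand.
//
//	reqPath, err := d.getReqPath(ctx, parentDir, true) // MakeDir/Put at the root: allowed
//	_, err = d.getReqPath(ctx, rootObj, false)         // Rename/Remove of the root: errs.NotSupport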
func (d *Alias) getArchiveMeta(ctx context.Context, dst, sub string, args model.ArchiveArgs) (model.ArchiveMeta, error) {
reqPath := stdpath.Join(dst, sub)
storage, reqActualPath, err := op.GetStorageAndActualPath(reqPath)
if err != nil {
return nil, err
}
if _, ok := storage.(driver.ArchiveReader); ok {
return op.GetArchiveMeta(ctx, storage, reqActualPath, model.ArchiveMetaArgs{
ArchiveArgs: args,
Refresh: true,
})
}
return nil, errs.NotImplement
}
func (d *Alias) listArchive(ctx context.Context, dst, sub string, args model.ArchiveInnerArgs) ([]model.Obj, error) {
reqPath := stdpath.Join(dst, sub)
storage, reqActualPath, err := op.GetStorageAndActualPath(reqPath)
if err != nil {
return nil, err
}
if _, ok := storage.(driver.ArchiveReader); ok {
return op.ListArchive(ctx, storage, reqActualPath, model.ArchiveListArgs{
ArchiveInnerArgs: args,
Refresh: true,
})
}
return nil, errs.NotImplement
}
func (d *Alias) extract(ctx context.Context, dst, sub string, args model.ArchiveInnerArgs) (*model.Link, error) {
reqPath := stdpath.Join(dst, sub)
storage, reqActualPath, err := op.GetStorageAndActualPath(reqPath)
if err != nil {
return nil, err
}
if _, ok := storage.(driver.ArchiveReader); ok {
if _, ok := storage.(*Alias); !ok && !args.Redirect {
link, _, err := op.DriverExtract(ctx, storage, reqActualPath, args)
return link, err
}
_, err = fs.Get(ctx, reqPath, &fs.GetArgs{NoLog: true})
if err != nil {
return nil, err
}
if common.ShouldProxy(storage, stdpath.Base(sub)) {
link := &model.Link{
URL: fmt.Sprintf("%s/ap%s?inner=%s&pass=%s&sign=%s",
common.GetApiUrl(args.HttpReq),
utils.EncodePath(reqPath, true),
utils.EncodePath(args.InnerPath, true),
url.QueryEscape(args.Password),
sign.SignArchive(reqPath)),
}
if args.HttpReq != nil && d.ProxyRange {
link.RangeReadCloser = common.NoProxyRange
}
return link, nil
}
link, _, err := op.DriverExtract(ctx, storage, reqActualPath, args)
return link, err
}
return nil, errs.NotImplement
}

View File

@ -5,12 +5,14 @@ import (
"fmt"
"io"
"net/http"
"net/url"
"path"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/alist-org/alist/v3/server/common"
@ -34,7 +36,7 @@ func (d *AListV3) GetAddition() driver.Additional {
func (d *AListV3) Init(ctx context.Context) error {
d.Addition.Address = strings.TrimSuffix(d.Addition.Address, "/")
var resp common.Resp[MeResp]
_, err := d.request("/me", http.MethodGet, func(req *resty.Request) {
_, _, err := d.request("/me", http.MethodGet, func(req *resty.Request) {
req.SetResult(&resp)
})
if err != nil {
@ -48,15 +50,15 @@ func (d *AListV3) Init(ctx context.Context) error {
}
}
// re-get the user info
_, err = d.request("/me", http.MethodGet, func(req *resty.Request) {
_, _, err = d.request("/me", http.MethodGet, func(req *resty.Request) {
req.SetResult(&resp)
})
if err != nil {
return err
}
if resp.Data.Role == model.GUEST {
url := d.Address + "/api/public/settings"
res, err := base.RestyClient.R().Get(url)
u := d.Address + "/api/public/settings"
res, err := base.RestyClient.R().Get(u)
if err != nil {
return err
}
@ -74,7 +76,7 @@ func (d *AListV3) Drop(ctx context.Context) error {
func (d *AListV3) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var resp common.Resp[FsListResp]
_, err := d.request("/fs/list", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/list", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ListReq{
PageReq: model.PageReq{
Page: 1,
@ -116,7 +118,7 @@ func (d *AListV3) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
userAgent = base.UserAgent
}
}
_, err := d.request("/fs/get", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/get", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(FsGetReq{
Path: file.GetPath(),
Password: d.MetaPassword,
@ -131,7 +133,7 @@ func (d *AListV3) Link(ctx context.Context, file model.Obj, args model.LinkArgs)
}
func (d *AListV3) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
_, err := d.request("/fs/mkdir", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/mkdir", http.MethodPost, func(req *resty.Request) {
req.SetBody(MkdirOrLinkReq{
Path: path.Join(parentDir.GetPath(), dirName),
})
@ -140,7 +142,7 @@ func (d *AListV3) MakeDir(ctx context.Context, parentDir model.Obj, dirName stri
}
func (d *AListV3) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
_, err := d.request("/fs/move", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/move", http.MethodPost, func(req *resty.Request) {
req.SetBody(MoveCopyReq{
SrcDir: path.Dir(srcObj.GetPath()),
DstDir: dstDir.GetPath(),
@ -151,7 +153,7 @@ func (d *AListV3) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
}
func (d *AListV3) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
_, err := d.request("/fs/rename", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/rename", http.MethodPost, func(req *resty.Request) {
req.SetBody(RenameReq{
Path: srcObj.GetPath(),
Name: newName,
@ -161,7 +163,7 @@ func (d *AListV3) Rename(ctx context.Context, srcObj model.Obj, newName string)
}
func (d *AListV3) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
_, err := d.request("/fs/copy", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/copy", http.MethodPost, func(req *resty.Request) {
req.SetBody(MoveCopyReq{
SrcDir: path.Dir(srcObj.GetPath()),
DstDir: dstDir.GetPath(),
@ -172,7 +174,7 @@ func (d *AListV3) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
}
func (d *AListV3) Remove(ctx context.Context, obj model.Obj) error {
_, err := d.request("/fs/remove", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/fs/remove", http.MethodPost, func(req *resty.Request) {
req.SetBody(RemoveReq{
Dir: path.Dir(obj.GetPath()),
Names: []string{obj.GetName()},
@ -232,6 +234,127 @@ func (d *AListV3) Put(ctx context.Context, dstDir model.Obj, s model.FileStreame
return nil
}
func (d *AListV3) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
if !d.ForwardArchiveReq {
return nil, errs.NotImplement
}
var resp common.Resp[ArchiveMetaResp]
_, code, err := d.request("/fs/archive/meta", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ArchiveMetaReq{
ArchivePass: args.Password,
Password: d.MetaPassword,
Path: obj.GetPath(),
Refresh: false,
})
})
if code == 202 {
return nil, errs.WrongArchivePassword
}
if err != nil {
return nil, err
}
var tree []model.ObjTree
if resp.Data.Content != nil {
tree = make([]model.ObjTree, 0, len(resp.Data.Content))
for _, content := range resp.Data.Content {
tree = append(tree, &content)
}
}
return &model.ArchiveMetaInfo{
Comment: resp.Data.Comment,
Encrypted: resp.Data.Encrypted,
Tree: tree,
}, nil
}
func (d *AListV3) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
if !d.ForwardArchiveReq {
return nil, errs.NotImplement
}
var resp common.Resp[ArchiveListResp]
_, code, err := d.request("/fs/archive/list", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ArchiveListReq{
ArchiveMetaReq: ArchiveMetaReq{
ArchivePass: args.Password,
Password: d.MetaPassword,
Path: obj.GetPath(),
Refresh: false,
},
PageReq: model.PageReq{
Page: 1,
PerPage: 0,
},
InnerPath: args.InnerPath,
})
})
if code == 202 {
return nil, errs.WrongArchivePassword
}
if err != nil {
return nil, err
}
var files []model.Obj
for _, f := range resp.Data.Content {
file := model.ObjThumb{
Object: model.Object{
Name: f.Name,
Modified: f.Modified,
Ctime: f.Created,
Size: f.Size,
IsFolder: f.IsDir,
HashInfo: utils.FromString(f.HashInfo),
},
Thumbnail: model.Thumbnail{Thumbnail: f.Thumb},
}
files = append(files, &file)
}
return files, nil
}
func (d *AListV3) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
if !d.ForwardArchiveReq {
return nil, errs.NotSupport
}
var resp common.Resp[ArchiveMetaResp]
_, _, err := d.request("/fs/archive/meta", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(ArchiveMetaReq{
ArchivePass: args.Password,
Password: d.MetaPassword,
Path: obj.GetPath(),
Refresh: false,
})
})
if err != nil {
return nil, err
}
return &model.Link{
URL: fmt.Sprintf("%s?inner=%s&pass=%s&sign=%s",
resp.Data.RawURL,
utils.EncodePath(args.InnerPath, true),
url.QueryEscape(args.Password),
resp.Data.Sign),
}, nil
}
func (d *AListV3) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) error {
if !d.ForwardArchiveReq {
return errs.NotImplement
}
dir, name := path.Split(srcObj.GetPath())
_, _, err := d.request("/fs/archive/decompress", http.MethodPost, func(req *resty.Request) {
req.SetBody(DecompressReq{
ArchivePass: args.Password,
CacheFull: args.CacheFull,
DstDir: dstDir.GetPath(),
InnerPath: args.InnerPath,
Name: []string{name},
PutIntoNewDir: args.PutIntoNewDir,
SrcDir: dir,
})
})
return err
}
//func (d *AList) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}

View File

@ -7,12 +7,13 @@ import (
type Addition struct {
driver.RootPath
Address string `json:"url" required:"true"`
MetaPassword string `json:"meta_password"`
Username string `json:"username"`
Password string `json:"password"`
Token string `json:"token"`
PassUAToUpsteam bool `json:"pass_ua_to_upsteam" default:"true"`
Address string `json:"url" required:"true"`
MetaPassword string `json:"meta_password"`
Username string `json:"username"`
Password string `json:"password"`
Token string `json:"token"`
PassUAToUpsteam bool `json:"pass_ua_to_upsteam" default:"true"`
ForwardArchiveReq bool `json:"forward_archive_requests" default:"true"`
}
var config = driver.Config{

View File

@ -4,6 +4,7 @@ import (
"time"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
)
type ListReq struct {
@ -81,3 +82,89 @@ type MeResp struct {
SsoId string `json:"sso_id"`
Otp bool `json:"otp"`
}
type ArchiveMetaReq struct {
ArchivePass string `json:"archive_pass"`
Password string `json:"password"`
Path string `json:"path"`
Refresh bool `json:"refresh"`
}
type TreeResp struct {
ObjResp
Children []TreeResp `json:"children"`
hashCache *utils.HashInfo
}
func (t *TreeResp) GetSize() int64 {
return t.Size
}
func (t *TreeResp) GetName() string {
return t.Name
}
func (t *TreeResp) ModTime() time.Time {
return t.Modified
}
func (t *TreeResp) CreateTime() time.Time {
return t.Created
}
func (t *TreeResp) IsDir() bool {
return t.ObjResp.IsDir
}
func (t *TreeResp) GetHash() utils.HashInfo {
return utils.FromString(t.HashInfo)
}
func (t *TreeResp) GetID() string {
return ""
}
func (t *TreeResp) GetPath() string {
return ""
}
func (t *TreeResp) GetChildren() []model.ObjTree {
ret := make([]model.ObjTree, 0, len(t.Children))
for _, child := range t.Children {
ret = append(ret, &child)
}
return ret
}
func (t *TreeResp) Thumb() string {
return t.ObjResp.Thumb
}
type ArchiveMetaResp struct {
Comment string `json:"comment"`
Encrypted bool `json:"encrypted"`
Content []TreeResp `json:"content"`
RawURL string `json:"raw_url"`
Sign string `json:"sign"`
}
type ArchiveListReq struct {
model.PageReq
ArchiveMetaReq
InnerPath string `json:"inner_path"`
}
type ArchiveListResp struct {
Content []ObjResp `json:"content"`
Total int64 `json:"total"`
}
type DecompressReq struct {
ArchivePass string `json:"archive_pass"`
CacheFull bool `json:"cache_full"`
DstDir string `json:"dst_dir"`
InnerPath string `json:"inner_path"`
Name []string `json:"name"`
PutIntoNewDir bool `json:"put_into_new_dir"`
SrcDir string `json:"src_dir"`
}
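// A hypothetical payload this struct produces for /fs/archive/decompress;
// field names follow the json tags above, values are invented:
//
//	req := DecompressReq{
//		ArchivePass:   "secret",
//		CacheFull:     true,
//		DstDir:        "/extracted",
//		InnerPath:     "/",
//		Name:          []string{"demo.zip"},
//		PutIntoNewDir: true,
//		SrcDir:        "/archives",
//	}
//	b, _ := json.Marshal(req)
//	// {"archive_pass":"secret","cache_full":true,"dst_dir":"/extracted",
//	//  "inner_path":"/","name":["demo.zip"],"put_into_new_dir":true,"src_dir":"/archives"}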

View File

@ -17,7 +17,7 @@ func (d *AListV3) login() error {
return nil
}
var resp common.Resp[LoginResp]
_, err := d.request("/auth/login", http.MethodPost, func(req *resty.Request) {
_, _, err := d.request("/auth/login", http.MethodPost, func(req *resty.Request) {
req.SetResult(&resp).SetBody(base.Json{
"username": d.Username,
"password": d.Password,
@ -31,7 +31,7 @@ func (d *AListV3) login() error {
return nil
}
func (d *AListV3) request(api, method string, callback base.ReqCallback, retry ...bool) ([]byte, error) {
func (d *AListV3) request(api, method string, callback base.ReqCallback, retry ...bool) ([]byte, int, error) {
url := d.Address + "/api" + api
req := base.RestyClient.R()
req.SetHeader("Authorization", d.Token)
@ -40,22 +40,26 @@ func (d *AListV3) request(api, method string, callback base.ReqCallback, retry .
}
res, err := req.Execute(method, url)
if err != nil {
return nil, err
code := 0
if res != nil {
code = res.StatusCode()
}
return nil, code, err
}
log.Debugf("[alist_v3] response body: %s", res.String())
if res.StatusCode() >= 400 {
return nil, fmt.Errorf("request failed, status: %s", res.Status())
return nil, res.StatusCode(), fmt.Errorf("request failed, status: %s", res.Status())
}
code := utils.Json.Get(res.Body(), "code").ToInt()
if code != 200 {
if (code == 401 || code == 403) && !utils.IsBool(retry...) {
err = d.login()
if err != nil {
return nil, err
return nil, code, err
}
return d.request(api, method, callback, true)
}
return nil, fmt.Errorf("request failed,code: %d, message: %s", code, utils.Json.Get(res.Body(), "message").ToString())
return nil, code, fmt.Errorf("request failed,code: %d, message: %s", code, utils.Json.Get(res.Body(), "message").ToString())
}
return res.Body(), nil
return res.Body(), 200, nil
}

View File

@ -5,6 +5,7 @@ import (
"errors"
"fmt"
"net/http"
"path/filepath"
"time"
"github.com/Xhofe/rateg"
@ -14,6 +15,7 @@ import (
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
log "github.com/sirupsen/logrus"
)
type AliyundriveOpen struct {
@ -72,6 +74,18 @@ func (d *AliyundriveOpen) Drop(ctx context.Context) error {
return nil
}
// GetRoot implements the driver.GetRooter interface to properly set up the root object
func (d *AliyundriveOpen) GetRoot(ctx context.Context) (model.Obj, error) {
return &model.Object{
ID: d.RootFolderID,
Path: "/",
Name: "root",
Size: 0,
Modified: d.Modified,
IsFolder: true,
}, nil
}
func (d *AliyundriveOpen) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
if d.limitList == nil {
return nil, fmt.Errorf("driver not init")
@ -80,9 +94,17 @@ func (d *AliyundriveOpen) List(ctx context.Context, dir model.Obj, args model.Li
if err != nil {
return nil, err
}
return utils.SliceConvert(files, func(src File) (model.Obj, error) {
return fileToObj(src), nil
objs, err := utils.SliceConvert(files, func(src File) (model.Obj, error) {
obj := fileToObj(src)
// Set the correct path for the object
if dir.GetPath() != "" {
obj.Path = filepath.Join(dir.GetPath(), obj.GetName())
}
return obj, nil
})
return objs, err
}
func (d *AliyundriveOpen) link(ctx context.Context, file model.Obj) (*model.Link, error) {
@ -132,7 +154,16 @@ func (d *AliyundriveOpen) MakeDir(ctx context.Context, parentDir model.Obj, dirN
if err != nil {
return nil, err
}
return fileToObj(newDir), nil
obj := fileToObj(newDir)
// Set the correct Path for the returned directory object
if parentDir.GetPath() != "" {
obj.Path = filepath.Join(parentDir.GetPath(), dirName)
} else {
obj.Path = "/" + dirName
}
return obj, nil
}
func (d *AliyundriveOpen) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
@ -142,20 +173,24 @@ func (d *AliyundriveOpen) Move(ctx context.Context, srcObj, dstDir model.Obj) (m
"drive_id": d.DriveId,
"file_id": srcObj.GetID(),
"to_parent_file_id": dstDir.GetID(),
"check_name_mode": "refuse", // optional:ignore,auto_rename,refuse
"check_name_mode": "ignore", // optional:ignore,auto_rename,refuse
//"new_name": "newName", // The new name to use when a file of the same name exists
}).SetResult(&resp)
})
if err != nil {
return nil, err
}
if resp.Exist {
return nil, errors.New("existence of files with the same name")
}
if srcObj, ok := srcObj.(*model.ObjThumb); ok {
srcObj.ID = resp.FileID
srcObj.Modified = time.Now()
srcObj.Path = filepath.Join(dstDir.GetPath(), srcObj.GetName())
// Check for duplicate files in the destination directory
if err := d.removeDuplicateFiles(ctx, dstDir.GetPath(), srcObj.GetName(), srcObj.GetID()); err != nil {
// Only log a warning instead of returning an error since the move operation has already completed successfully
log.Warnf("Failed to remove duplicate files after move: %v", err)
}
return srcObj, nil
}
return nil, nil
@ -173,19 +208,47 @@ func (d *AliyundriveOpen) Rename(ctx context.Context, srcObj model.Obj, newName
if err != nil {
return nil, err
}
return fileToObj(newFile), nil
// Check for duplicate files in the parent directory
parentPath := filepath.Dir(srcObj.GetPath())
if err := d.removeDuplicateFiles(ctx, parentPath, newName, newFile.FileId); err != nil {
// Only log a warning instead of returning an error since the rename operation has already completed successfully
log.Warnf("Failed to remove duplicate files after rename: %v", err)
}
obj := fileToObj(newFile)
// Set the correct Path for the renamed object
if parentPath != "" && parentPath != "." {
obj.Path = filepath.Join(parentPath, newName)
} else {
obj.Path = "/" + newName
}
return obj, nil
}
func (d *AliyundriveOpen) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
var resp MoveOrCopyResp
_, err := d.request("/adrive/v1.0/openFile/copy", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"drive_id": d.DriveId,
"file_id": srcObj.GetID(),
"to_parent_file_id": dstDir.GetID(),
"auto_rename": true,
})
"auto_rename": false,
}).SetResult(&resp)
})
return err
if err != nil {
return err
}
// Check for duplicate files in the destination directory
if err := d.removeDuplicateFiles(ctx, dstDir.GetPath(), srcObj.GetName(), resp.FileID); err != nil {
// Only log a warning instead of returning an error since the copy operation has already completed successfully
log.Warnf("Failed to remove duplicate files after copy: %v", err)
}
return nil
}
func (d *AliyundriveOpen) Remove(ctx context.Context, obj model.Obj) error {
@ -203,7 +266,18 @@ func (d *AliyundriveOpen) Remove(ctx context.Context, obj model.Obj) error {
}
func (d *AliyundriveOpen) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
return d.upload(ctx, dstDir, stream, up)
obj, err := d.upload(ctx, dstDir, stream, up)
// Set the correct Path for the returned file object
if obj != nil && obj.GetPath() == "" {
if dstDir.GetPath() != "" {
if objWithPath, ok := obj.(model.SetPath); ok {
objWithPath.SetPath(filepath.Join(dstDir.GetPath(), obj.GetName()))
}
}
}
return obj, err
}
func (d *AliyundriveOpen) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
@ -235,3 +309,4 @@ var _ driver.MkdirResult = (*AliyundriveOpen)(nil)
var _ driver.MoveResult = (*AliyundriveOpen)(nil)
var _ driver.RenameResult = (*AliyundriveOpen)(nil)
var _ driver.PutResult = (*AliyundriveOpen)(nil)
var _ driver.GetRooter = (*AliyundriveOpen)(nil)

View File

@ -1,7 +1,6 @@
package aliyundrive_open
import (
"bytes"
"context"
"encoding/base64"
"fmt"
@ -15,6 +14,7 @@ import (
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
streamPkg "github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/http_range"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/avast/retry-go"
@ -131,16 +131,19 @@ func (d *AliyundriveOpen) calProofCode(stream model.FileStreamer) (string, error
return "", err
}
length := proofRange.End - proofRange.Start
buf := bytes.NewBuffer(make([]byte, 0, length))
reader, err := stream.RangeRead(http_range.Range{Start: proofRange.Start, Length: length})
if err != nil {
return "", err
}
_, err = utils.CopyWithBufferN(buf, reader, length)
buf := make([]byte, length)
n, err := io.ReadFull(reader, buf)
if err == io.ErrUnexpectedEOF {
return "", fmt.Errorf("can't read data, expected=%d, got=%d", len(buf), n)
}
if err != nil {
return "", err
}
return base64.StdEncoding.EncodeToString(buf.Bytes()), nil
return base64.StdEncoding.EncodeToString(buf), nil
}
func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
@ -183,25 +186,18 @@ func (d *AliyundriveOpen) upload(ctx context.Context, dstDir model.Obj, stream m
_, err, e := d.requestReturnErrResp("/adrive/v1.0/openFile/create", http.MethodPost, func(req *resty.Request) {
req.SetBody(createData).SetResult(&createResp)
})
var tmpF model.File
if err != nil {
if e.Code != "PreHashMatched" || !rapidUpload {
return nil, err
}
log.Debugf("[aliyundrive_open] pre_hash matched, start rapid upload")
hi := stream.GetHash()
hash := hi.GetHash(utils.SHA1)
if len(hash) <= 0 {
tmpF, err = stream.CacheFullInTempFile()
hash := stream.GetHash().GetHash(utils.SHA1)
if len(hash) != utils.SHA1.Width {
_, hash, err = streamPkg.CacheFullInTempFileAndHash(stream, utils.SHA1)
if err != nil {
return nil, err
}
hash, err = utils.HashFile(utils.SHA1, tmpF)
if err != nil {
return nil, err
}
}
delete(createData, "pre_hash")

View File

@ -10,6 +10,7 @@ import (
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
@ -186,3 +187,36 @@ func (d *AliyundriveOpen) getAccessToken() string {
}
return d.AccessToken
}
// Remove duplicate files with the same name in the given directory path,
// preserving the file with the given skipID if provided
func (d *AliyundriveOpen) removeDuplicateFiles(ctx context.Context, parentPath string, fileName string, skipID string) error {
// Handle empty path (root directory) case
if parentPath == "" {
parentPath = "/"
}
// List all files in the parent directory
files, err := op.List(ctx, d, parentPath, model.ListArgs{})
if err != nil {
return err
}
// Find all files with the same name
var duplicates []model.Obj
for _, file := range files {
if file.GetName() == fileName && file.GetID() != skipID {
duplicates = append(duplicates, file)
}
}
// Remove all duplicate files except the one with the given skipID
for _, file := range duplicates {
err := d.Remove(ctx, file)
if err != nil {
return err
}
}
return nil
}

View File

@ -2,6 +2,7 @@ package drivers
import (
_ "github.com/alist-org/alist/v3/drivers/115"
_ "github.com/alist-org/alist/v3/drivers/115_open"
_ "github.com/alist-org/alist/v3/drivers/115_share"
_ "github.com/alist-org/alist/v3/drivers/123"
_ "github.com/alist-org/alist/v3/drivers/123_link"
@ -15,12 +16,16 @@ import (
_ "github.com/alist-org/alist/v3/drivers/aliyundrive"
_ "github.com/alist-org/alist/v3/drivers/aliyundrive_open"
_ "github.com/alist-org/alist/v3/drivers/aliyundrive_share"
_ "github.com/alist-org/alist/v3/drivers/azure_blob"
_ "github.com/alist-org/alist/v3/drivers/baidu_netdisk"
_ "github.com/alist-org/alist/v3/drivers/baidu_photo"
_ "github.com/alist-org/alist/v3/drivers/baidu_share"
_ "github.com/alist-org/alist/v3/drivers/chaoxing"
_ "github.com/alist-org/alist/v3/drivers/cloudreve"
_ "github.com/alist-org/alist/v3/drivers/cloudreve_v4"
_ "github.com/alist-org/alist/v3/drivers/crypt"
_ "github.com/alist-org/alist/v3/drivers/doubao"
_ "github.com/alist-org/alist/v3/drivers/doubao_share"
_ "github.com/alist-org/alist/v3/drivers/dropbox"
_ "github.com/alist-org/alist/v3/drivers/febbox"
_ "github.com/alist-org/alist/v3/drivers/ftp"

View File

@ -0,0 +1,313 @@
package azure_blob
import (
"context"
"fmt"
"io"
"path"
"regexp"
"strings"
"time"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/sas"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
)
// Azure Blob Storage based on the blob APIs
// Link: https://learn.microsoft.com/rest/api/storageservices/blob-service-rest-api
type AzureBlob struct {
model.Storage
Addition
client *azblob.Client
containerClient *container.Client
config driver.Config
}
// Config returns the driver configuration.
func (d *AzureBlob) Config() driver.Config {
return d.config
}
// GetAddition returns additional settings specific to Azure Blob Storage.
func (d *AzureBlob) GetAddition() driver.Additional {
return &d.Addition
}
// Init initializes the Azure Blob Storage client using shared key authentication.
func (d *AzureBlob) Init(ctx context.Context) error {
// Validate the endpoint URL
accountName := extractAccountName(d.Addition.Endpoint)
if !regexp.MustCompile(`^[a-z0-9]+$`).MatchString(accountName) {
return fmt.Errorf("invalid storage account name: must be chars of lowercase letters or numbers only")
}
credential, err := azblob.NewSharedKeyCredential(accountName, d.Addition.AccessKey)
if err != nil {
return fmt.Errorf("failed to create credential: %w", err)
}
// Check if Endpoint is just account name
endpoint := d.Addition.Endpoint
if accountName == endpoint {
endpoint = fmt.Sprintf("https://%s.blob.core.windows.net/", accountName)
}
// Initialize Azure Blob client with retry policy
client, err := azblob.NewClientWithSharedKeyCredential(endpoint, credential,
&azblob.ClientOptions{ClientOptions: azcore.ClientOptions{
Retry: policy.RetryOptions{
MaxRetries: MaxRetries,
RetryDelay: RetryDelay,
},
}})
if err != nil {
return fmt.Errorf("failed to create client: %w", err)
}
d.client = client
// Ensure container exists or create it
containerName := strings.Trim(d.Addition.ContainerName, "/ \\")
if containerName == "" {
return fmt.Errorf("container name cannot be empty")
}
return d.createContainerIfNotExists(ctx, containerName)
}
// Drop releases resources associated with the Azure Blob client.
func (d *AzureBlob) Drop(ctx context.Context) error {
d.client = nil
return nil
}
// List retrieves blobs and directories under the specified path.
func (d *AzureBlob) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
prefix := ensureTrailingSlash(dir.GetPath())
pager := d.containerClient.NewListBlobsHierarchyPager("/", &container.ListBlobsHierarchyOptions{
Prefix: &prefix,
})
var objs []model.Obj
for pager.More() {
page, err := pager.NextPage(ctx)
if err != nil {
return nil, fmt.Errorf("failed to list blobs: %w", err)
}
// Process directories
for _, blobPrefix := range page.Segment.BlobPrefixes {
objs = append(objs, &model.Object{
Name: path.Base(strings.TrimSuffix(*blobPrefix.Name, "/")),
Path: *blobPrefix.Name,
Modified: *blobPrefix.Properties.LastModified,
Ctime: *blobPrefix.Properties.CreationTime,
IsFolder: true,
})
}
// Process files
for _, blob := range page.Segment.BlobItems {
if strings.HasSuffix(*blob.Name, "/") {
continue
}
objs = append(objs, &model.Object{
Name: path.Base(*blob.Name),
Path: *blob.Name,
Size: *blob.Properties.ContentLength,
Modified: *blob.Properties.LastModified,
Ctime: *blob.Properties.CreationTime,
IsFolder: false,
})
}
}
return objs, nil
}
// Link generates a temporary SAS URL for accessing a blob.
func (d *AzureBlob) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
blobClient := d.containerClient.NewBlobClient(file.GetPath())
expireDuration := time.Hour * time.Duration(d.SignURLExpire)
sasURL, err := blobClient.GetSASURL(sas.BlobPermissions{Read: true}, time.Now().Add(expireDuration), nil)
if err != nil {
return nil, fmt.Errorf("failed to generate SAS URL: %w", err)
}
return &model.Link{URL: sasURL}, nil
}
// MakeDir creates a virtual directory by uploading an empty blob as a marker.
func (d *AzureBlob) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
dirPath := path.Join(parentDir.GetPath(), dirName)
if err := d.mkDir(ctx, dirPath); err != nil {
return nil, fmt.Errorf("failed to create directory marker: %w", err)
}
return &model.Object{
Path: dirPath,
Name: dirName,
IsFolder: true,
}, nil
}
// Move relocates an object (file or directory) to a new directory.
func (d *AzureBlob) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
srcPath := srcObj.GetPath()
dstPath := path.Join(dstDir.GetPath(), srcObj.GetName())
if err := d.moveOrRename(ctx, srcPath, dstPath, srcObj.IsDir(), srcObj.GetSize()); err != nil {
return nil, fmt.Errorf("move operation failed: %w", err)
}
return &model.Object{
Path: dstPath,
Name: srcObj.GetName(),
Modified: time.Now(),
IsFolder: srcObj.IsDir(),
Size: srcObj.GetSize(),
}, nil
}
// Rename changes the name of an existing object.
func (d *AzureBlob) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
srcPath := srcObj.GetPath()
dstPath := path.Join(path.Dir(srcPath), newName)
if err := d.moveOrRename(ctx, srcPath, dstPath, srcObj.IsDir(), srcObj.GetSize()); err != nil {
return nil, fmt.Errorf("rename operation failed: %w", err)
}
return &model.Object{
Path: dstPath,
Name: newName,
Modified: time.Now(),
IsFolder: srcObj.IsDir(),
Size: srcObj.GetSize(),
}, nil
}
// Copy duplicates an object (file or directory) to a specified destination directory.
func (d *AzureBlob) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
dstPath := path.Join(dstDir.GetPath(), srcObj.GetName())
// Handle directory copying using flat listing
if srcObj.IsDir() {
srcPrefix := srcObj.GetPath()
srcPrefix = ensureTrailingSlash(srcPrefix)
// Get all blobs under the source directory
blobs, err := d.flattenListBlobs(ctx, srcPrefix)
if err != nil {
return nil, fmt.Errorf("failed to list source directory contents: %w", err)
}
// Process each blob - copy to destination
for _, blob := range blobs {
// Skip the directory marker itself
if *blob.Name == srcPrefix {
continue
}
// Calculate relative path from source
relPath := strings.TrimPrefix(*blob.Name, srcPrefix)
itemDstPath := path.Join(dstPath, relPath)
if strings.HasSuffix(itemDstPath, "/") || (blob.Metadata["hdi_isfolder"] != nil && *blob.Metadata["hdi_isfolder"] == "true") {
// Create directory marker at destination
err := d.mkDir(ctx, itemDstPath)
if err != nil {
return nil, fmt.Errorf("failed to create directory marker [%s]: %w", itemDstPath, err)
}
} else {
// Copy the blob
if err := d.copyFile(ctx, *blob.Name, itemDstPath); err != nil {
return nil, fmt.Errorf("failed to copy %s: %w", *blob.Name, err)
}
}
}
// Create directory marker at destination if needed
if len(blobs) == 0 {
err := d.mkDir(ctx, dstPath)
if err != nil {
return nil, fmt.Errorf("failed to create directory [%s]: %w", dstPath, err)
}
}
return &model.Object{
Path: dstPath,
Name: srcObj.GetName(),
Modified: time.Now(),
IsFolder: true,
}, nil
}
// Copy a single file
if err := d.copyFile(ctx, srcObj.GetPath(), dstPath); err != nil {
return nil, fmt.Errorf("failed to copy blob: %w", err)
}
return &model.Object{
Path: dstPath,
Name: srcObj.GetName(),
Size: srcObj.GetSize(),
Modified: time.Now(),
IsFolder: false,
}, nil
}
// Remove deletes a specified blob or recursively deletes a directory and its contents.
func (d *AzureBlob) Remove(ctx context.Context, obj model.Obj) error {
path := obj.GetPath()
// Handle recursive directory deletion
if obj.IsDir() {
return d.deleteFolder(ctx, path)
}
// Delete single file
return d.deleteFile(ctx, path, false)
}
// Put uploads a file stream to Azure Blob Storage with progress tracking.
func (d *AzureBlob) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
blobPath := path.Join(dstDir.GetPath(), stream.GetName())
blobClient := d.containerClient.NewBlockBlobClient(blobPath)
// Determine optimal upload options based on file size
options := optimizedUploadOptions(stream.GetSize())
// Track upload progress
progressTracker := &progressTracker{
total: stream.GetSize(),
updateProgress: up,
}
// Wrap stream to handle context cancellation and progress tracking
limitedStream := driver.NewLimitedUploadStream(ctx, io.TeeReader(stream, progressTracker))
// Upload the stream to Azure Blob Storage
_, err := blobClient.UploadStream(ctx, limitedStream, options)
if err != nil {
return nil, fmt.Errorf("failed to upload file: %w", err)
}
return &model.Object{
Path: blobPath,
Name: stream.GetName(),
Size: stream.GetSize(),
Modified: time.Now(),
IsFolder: false,
}, nil
}
// The following methods related to archive handling are not implemented yet.
// func (d *AzureBlob) GetArchiveMeta(...) {...}
// func (d *AzureBlob) ListArchive(...) {...}
// func (d *AzureBlob) Extract(...) {...}
// func (d *AzureBlob) ArchiveDecompress(...) {...}
// Ensure AzureBlob implements the driver.Driver interface.
var _ driver.Driver = (*AzureBlob)(nil)

View File

@ -0,0 +1,32 @@
package azure_blob
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
Endpoint string `json:"endpoint" required:"true" default:"https://<accountname>.blob.core.windows.net/" help:"e.g. https://accountname.blob.core.windows.net/. The full endpoint URL for Azure Storage, including the unique storage account name (3 ~ 24 numbers and lowercase letters only)."`
AccessKey string `json:"access_key" required:"true" help:"The access key for Azure Storage, used for authentication. https://learn.microsoft.com/azure/storage/common/storage-account-keys-manage"`
ContainerName string `json:"container_name" required:"true" help:"The name of the container in Azure Storage (created in the Azure portal). https://learn.microsoft.com/azure/storage/blobs/blob-containers-portal"`
SignURLExpire int `json:"sign_url_expire" type:"number" default:"4" help:"The expiration time for SAS URLs, in hours."`
}
// implement GetRootId interface
func (r Addition) GetRootId() string {
return r.ContainerName
}
var config = driver.Config{
Name: "Azure Blob Storage",
LocalSort: true,
CheckStatus: true,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &AzureBlob{
config: config,
}
})
}

View File

@ -0,0 +1,20 @@
package azure_blob
import "github.com/alist-org/alist/v3/internal/driver"
// progressTracker is used to track upload progress
type progressTracker struct {
total int64
current int64
updateProgress driver.UpdateProgress
}
// Write implements io.Writer to track progress
func (pt *progressTracker) Write(p []byte) (n int, err error) {
n = len(p)
pt.current += int64(n)
if pt.updateProgress != nil && pt.total > 0 {
pt.updateProgress(float64(pt.current) * 100 / float64(pt.total))
}
return n, nil
}
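// demoProgress is a minimal usage sketch, assuming it runs inside this
// package; the demo payload and progress callback are hypothetical
// (imports assumed: bytes, fmt, io).
func demoProgress() {
	data := bytes.Repeat([]byte("x"), 1<<20) // 1 MiB of fake payload
	pt := &progressTracker{
		total:          int64(len(data)),
		updateProgress: func(p float64) { fmt.Printf("upload %.1f%%\n", p) },
	}
	// io.TeeReader feeds every chunk to the tracker as it is consumed,
	// the same way Put wires it into UploadStream in driver.go.
	_, _ = io.Copy(io.Discard, io.TeeReader(bytes.NewReader(data), pt))
}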

drivers/azure_blob/util.go (new file, 401 lines)
View File

@ -0,0 +1,401 @@
package azure_blob
import (
"bytes"
"context"
"errors"
"fmt"
"io"
"path"
"sort"
"strings"
"time"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/sas"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/service"
log "github.com/sirupsen/logrus"
)
const (
// MaxRetries defines the maximum number of retry attempts for Azure operations
MaxRetries = 3
// RetryDelay defines the base delay between retries
RetryDelay = 3 * time.Second
// MaxBatchSize defines the maximum number of operations in a single batch request
MaxBatchSize = 128
)
// extractAccountName extracts the storage account name from an Azure Storage endpoint
func extractAccountName(endpoint string) string {
// strip the protocol prefix
endpoint = strings.TrimPrefix(endpoint, "https://")
endpoint = strings.TrimPrefix(endpoint, "http://")
// take the part before the first dot (the account name)
parts := strings.Split(endpoint, ".")
if len(parts) > 0 {
// to lower case
return strings.ToLower(parts[0])
}
return ""
}
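// Illustrative inputs and the account names they yield (hypothetical values):
//
//	extractAccountName("https://myaccount.blob.core.windows.net/") // "myaccount"
//	extractAccountName("http://demo.blob.core.windows.net")        // "demo"
//	extractAccountName("myaccount")                                // "myaccount" (bare account name)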
// isNotFoundError checks if the error is a "not found" type error
func isNotFoundError(err error) bool {
var storageErr *azcore.ResponseError
if errors.As(err, &storageErr) {
return storageErr.StatusCode == 404
}
// Fallback to string matching for backwards compatibility
return err != nil && strings.Contains(err.Error(), "BlobNotFound")
}
// flattenListBlobs lists all blobs under the given prefix, following pagination
func (d *AzureBlob) flattenListBlobs(ctx context.Context, prefix string) ([]container.BlobItem, error) {
// Standardize prefix format
prefix = ensureTrailingSlash(prefix)
var blobItems []container.BlobItem
pager := d.containerClient.NewListBlobsFlatPager(&container.ListBlobsFlatOptions{
Prefix: &prefix,
Include: container.ListBlobsInclude{
Metadata: true,
},
})
for pager.More() {
page, err := pager.NextPage(ctx)
if err != nil {
return nil, fmt.Errorf("failed to list blobs: %w", err)
}
for _, blob := range page.Segment.BlobItems {
blobItems = append(blobItems, *blob)
}
}
return blobItems, nil
}
// batchDeleteBlobs deletes the given blob paths in batches of MaxBatchSize
func (d *AzureBlob) batchDeleteBlobs(ctx context.Context, blobPaths []string) error {
if len(blobPaths) == 0 {
return nil
}
// Process in batches of MaxBatchSize
for i := 0; i < len(blobPaths); i += MaxBatchSize {
end := min(i+MaxBatchSize, len(blobPaths))
currentBatch := blobPaths[i:end]
// Create batch builder
batchBuilder, err := d.containerClient.NewBatchBuilder()
if err != nil {
return fmt.Errorf("failed to create batch builder: %w", err)
}
// Add delete operations
for _, blobPath := range currentBatch {
if err := batchBuilder.Delete(blobPath, nil); err != nil {
return fmt.Errorf("failed to add delete operation for %s: %w", blobPath, err)
}
}
// Submit batch
responses, err := d.containerClient.SubmitBatch(ctx, batchBuilder, nil)
if err != nil {
return fmt.Errorf("batch delete request failed: %w", err)
}
// Check responses
for _, resp := range responses.Responses {
if resp.Error != nil && !isNotFoundError(resp.Error) {
// resolve the blob name for a more informative error message
blobName := "unknown"
if resp.BlobName != nil {
blobName = *resp.BlobName
}
return fmt.Errorf("failed to delete blob %s: %v", blobName, resp.Error)
}
}
}
return nil
}
// deleteFolder recursively deletes a directory and all its contents
func (d *AzureBlob) deleteFolder(ctx context.Context, prefix string) error {
// Ensure directory path ends with slash
prefix = ensureTrailingSlash(prefix)
// Get all blobs under the directory using flattenListBlobs
globs, err := d.flattenListBlobs(ctx, prefix)
if err != nil {
return fmt.Errorf("failed to list blobs for deletion: %w", err)
}
// If there are blobs in the directory, delete them
if len(globs) > 0 {
// separate files from directory markers
var filePaths []string
var dirPaths []string
for _, blob := range globs {
blobName := *blob.Name
if isDirectory(blob) {
// remove trailing slash for directory names
dirPaths = append(dirPaths, strings.TrimSuffix(blobName, "/"))
} else {
filePaths = append(filePaths, blobName)
}
}
// delete files first, then directories
if len(filePaths) > 0 {
if err := d.batchDeleteBlobs(ctx, filePaths); err != nil {
return err
}
}
if len(dirPaths) > 0 {
// group directories by path depth
depthMap := make(map[int][]string)
for _, dir := range dirPaths {
depth := strings.Count(dir, "/") // 计算目录深度
depthMap[depth] = append(depthMap[depth], dir)
}
// sort depths in descending order
var depths []int
for depth := range depthMap {
depths = append(depths, depth)
}
sort.Sort(sort.Reverse(sort.IntSlice(depths)))
// batch-delete one depth level at a time, deepest first
for _, depth := range depths {
batch := depthMap[depth]
if err := d.batchDeleteBlobs(ctx, batch); err != nil {
return err
}
}
}
}
// finally delete the directory marker itself
return d.deleteEmptyDirectory(ctx, prefix)
}
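// demoDepthOrder is a worked example of the deepest-first ordering above,
// using hypothetical directory markers (imports assumed: fmt, sort, strings).
func demoDepthOrder() {
	dirPaths := []string{"a", "a/b/c", "a/b"}
	depthMap := make(map[int][]string)
	for _, dir := range dirPaths {
		depth := strings.Count(dir, "/")
		depthMap[depth] = append(depthMap[depth], dir)
	}
	var depths []int
	for depth := range depthMap {
		depths = append(depths, depth)
	}
	sort.Sort(sort.Reverse(sort.IntSlice(depths)))
	for _, depth := range depths {
		fmt.Println(depthMap[depth]) // [a/b/c], then [a/b], then [a]
	}
}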
// deleteFile deletes a single file or blob with better error handling
func (d *AzureBlob) deleteFile(ctx context.Context, path string, isDir bool) error {
blobClient := d.containerClient.NewBlobClient(path)
_, err := blobClient.Delete(ctx, nil)
if err != nil && !(isDir && isNotFoundError(err)) {
return err
}
return nil
}
// copyFile copies a single blob from source path to destination path
func (d *AzureBlob) copyFile(ctx context.Context, srcPath, dstPath string) error {
srcBlob := d.containerClient.NewBlobClient(srcPath)
dstBlob := d.containerClient.NewBlobClient(dstPath)
// Use configured expiration time for SAS URL
expireDuration := time.Hour * time.Duration(d.SignURLExpire)
srcURL, err := srcBlob.GetSASURL(sas.BlobPermissions{Read: true}, time.Now().Add(expireDuration), nil)
if err != nil {
return fmt.Errorf("failed to generate source SAS URL: %w", err)
}
_, err = dstBlob.StartCopyFromURL(ctx, srcURL, nil)
return err
}
// createContainerIfNotExists creates the container if it does not already exist
func (d *AzureBlob) createContainerIfNotExists(ctx context.Context, containerName string) error {
serviceClient := d.client.ServiceClient()
containerClient := serviceClient.NewContainerClient(containerName)
var options = service.CreateContainerOptions{}
_, err := containerClient.Create(ctx, &options)
if err != nil {
var responseErr *azcore.ResponseError
if errors.As(err, &responseErr) && responseErr.ErrorCode != "ContainerAlreadyExists" {
return fmt.Errorf("failed to create or access container [%s]: %w", containerName, err)
}
}
d.containerClient = containerClient
return nil
}
// mkDir creates a virtual directory marker by uploading an empty blob with metadata.
func (d *AzureBlob) mkDir(ctx context.Context, fullDirName string) error {
dirPath := ensureTrailingSlash(fullDirName)
blobClient := d.containerClient.NewBlockBlobClient(dirPath)
// Upload an empty blob with metadata indicating it's a directory
_, err := blobClient.Upload(ctx, struct {
*bytes.Reader
io.Closer
}{
Reader: bytes.NewReader([]byte{}),
Closer: io.NopCloser(nil),
}, &blockblob.UploadOptions{
Metadata: map[string]*string{
"hdi_isfolder": to.Ptr("true"),
},
})
return err
}
// ensureTrailingSlash ensures the provided path ends with a trailing slash.
func ensureTrailingSlash(path string) string {
if !strings.HasSuffix(path, "/") {
return path + "/"
}
return path
}
// moveOrRename moves or renames blobs or directories from source to destination.
func (d *AzureBlob) moveOrRename(ctx context.Context, srcPath, dstPath string, isDir bool, srcSize int64) error {
if isDir {
// Normalize paths for directory operations
srcPath = ensureTrailingSlash(srcPath)
dstPath = ensureTrailingSlash(dstPath)
// List all blobs under the source directory
blobs, err := d.flattenListBlobs(ctx, srcPath)
if err != nil {
return fmt.Errorf("failed to list blobs: %w", err)
}
// Iterate and copy each blob to the destination
for _, item := range blobs {
srcBlobName := *item.Name
relPath := strings.TrimPrefix(srcBlobName, srcPath)
itemDstPath := path.Join(dstPath, relPath)
if isDirectory(item) {
// Create directory marker at destination
if err := d.mkDir(ctx, itemDstPath); err != nil {
return fmt.Errorf("failed to create directory marker [%s]: %w", itemDstPath, err)
}
} else {
// Copy file blob to destination
if err := d.copyFile(ctx, srcBlobName, itemDstPath); err != nil {
return fmt.Errorf("failed to copy blob [%s]: %w", srcBlobName, err)
}
}
}
// Handle empty directories by creating a marker at destination
if len(blobs) == 0 {
if err := d.mkDir(ctx, dstPath); err != nil {
return fmt.Errorf("failed to create directory [%s]: %w", dstPath, err)
}
}
// Delete source directory and its contents
if err := d.deleteFolder(ctx, srcPath); err != nil {
log.Warnf("failed to delete source directory [%s]: %v\n, and try again", srcPath, err)
// Retry deletion once more and ignore the result
if err := d.deleteFolder(ctx, srcPath); err != nil {
log.Errorf("Retry deletion of source directory [%s] failed: %v", srcPath, err)
}
}
return nil
}
// Single file move or rename operation
if err := d.copyFile(ctx, srcPath, dstPath); err != nil {
return fmt.Errorf("failed to copy file: %w", err)
}
// Delete source file after successful copy
if err := d.deleteFile(ctx, srcPath, false); err != nil {
log.Errorf("Error deleting source file [%s]: %v", srcPath, err)
}
return nil
}
// optimizedUploadOptions returns the optimal upload options based on file size
func optimizedUploadOptions(fileSize int64) *azblob.UploadStreamOptions {
options := &azblob.UploadStreamOptions{
BlockSize: 4 * 1024 * 1024, // 4MB block size
Concurrency: 4, // Default concurrency
}
// For large files, increase block size and concurrency
if fileSize > 256*1024*1024 { // For files larger than 256MB
options.BlockSize = 8 * 1024 * 1024 // 8MB blocks
options.Concurrency = 8 // More concurrent uploads
}
// For very large files (>1GB)
if fileSize > 1024*1024*1024 {
options.BlockSize = 16 * 1024 * 1024 // 16MB blocks
options.Concurrency = 16 // Higher concurrency
}
return options
}
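// A back-of-the-envelope reading of the tiers above: peak in-flight buffer
// per upload is roughly BlockSize * Concurrency (an estimate, not an SDK
// guarantee):
//
//	<= 256 MiB:  4 MiB * 4  ≈ 16 MiB buffered
//	>  256 MiB:  8 MiB * 8  ≈ 64 MiB
//	>    1 GiB: 16 MiB * 16 ≈ 256 MiB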
// isDirectory determines if a blob represents a directory
// Checks multiple indicators: path suffix, metadata, and content type
func isDirectory(blob container.BlobItem) bool {
// Check path suffix
if strings.HasSuffix(*blob.Name, "/") {
return true
}
// Check metadata for directory marker
if blob.Metadata != nil {
if val, ok := blob.Metadata["hdi_isfolder"]; ok && val != nil && *val == "true" {
return true
}
// Azure Storage Explorer and other tools may use different metadata keys
if val, ok := blob.Metadata["is_directory"]; ok && val != nil && strings.ToLower(*val) == "true" {
return true
}
}
// Check content type (some tools mark directories with specific content types)
if blob.Properties != nil && blob.Properties.ContentType != nil {
contentType := strings.ToLower(*blob.Properties.ContentType)
if blob.Properties.ContentLength != nil && *blob.Properties.ContentLength == 0 && (contentType == "application/directory" || contentType == "directory") {
return true
}
}
return false
}
// deleteEmptyDirectory deletes a directory only if it's empty
func (d *AzureBlob) deleteEmptyDirectory(ctx context.Context, dirPath string) error {
// Directory is empty, delete the directory marker
blobClient := d.containerClient.NewBlobClient(strings.TrimSuffix(dirPath, "/"))
_, err := blobClient.Delete(ctx, nil)
// Also try deleting with trailing slash (for different directory marker formats)
if err != nil && isNotFoundError(err) {
blobClient = d.containerClient.NewBlobClient(dirPath)
_, err = blobClient.Delete(ctx, nil)
}
// Ignore not found errors
if err != nil && isNotFoundError(err) {
log.Infof("Directory [%s] not found during deletion: %v", dirPath, err)
return nil
}
return err
}

View File

@ -6,8 +6,8 @@ import (
"encoding/hex"
"errors"
"io"
"math"
"net/url"
"os"
stdpath "path"
"strconv"
"time"
@ -15,11 +15,13 @@ import (
"golang.org/x/sync/semaphore"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/errgroup"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/avast/retry-go"
log "github.com/sirupsen/logrus"
)
@ -77,6 +79,8 @@ func (d *BaiduNetdisk) List(ctx context.Context, dir model.Obj, args model.ListA
func (d *BaiduNetdisk) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
if d.DownloadAPI == "crack" {
return d.linkCrack(file, args)
} else if d.DownloadAPI == "crack_video" {
return d.linkCrackVideo(file, args)
}
return d.linkOfficial(file, args)
}
@ -182,16 +186,30 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
return newObj, nil
}
tempFile, err := stream.CacheFullInTempFile()
if err != nil {
return nil, err
var (
cache = stream.GetFile()
tmpF *os.File
err error
)
if _, ok := cache.(io.ReaderAt); !ok {
tmpF, err = os.CreateTemp(conf.Conf.TempDir, "file-*")
if err != nil {
return nil, err
}
defer func() {
_ = tmpF.Close()
_ = os.Remove(tmpF.Name())
}()
cache = tmpF
}
streamSize := stream.GetSize()
sliceSize := d.getSliceSize(streamSize)
count := int(math.Max(math.Ceil(float64(streamSize)/float64(sliceSize)), 1))
count := int(streamSize / sliceSize)
lastBlockSize := streamSize % sliceSize
if streamSize > 0 && lastBlockSize == 0 {
if lastBlockSize > 0 {
count++
} else {
lastBlockSize = sliceSize
}
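// A worked example of the slice math above (illustrative values only):
// streamSize = 10 MiB with sliceSize = 4 MiB gives count = 2 and a 2 MiB
// remainder, so count becomes 3 (4 + 4 + 2 MiB). For an exact multiple such
// as 8 MiB, the remainder is 0 and the last block is a full 4 MiB slice,
// leaving count at 2.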
@ -204,6 +222,11 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
sliceMd5H := md5.New()
sliceMd5H2 := md5.New()
slicemd5H2Write := utils.LimitWriter(sliceMd5H2, SliceSize)
writers := []io.Writer{fileMd5H, sliceMd5H, slicemd5H2Write}
if tmpF != nil {
writers = append(writers, tmpF)
}
written := int64(0)
for i := 1; i <= count; i++ {
if utils.IsCanceled(ctx) {
@ -212,13 +235,23 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
if i == count {
byteSize = lastBlockSize
}
_, err := utils.CopyWithBufferN(io.MultiWriter(fileMd5H, sliceMd5H, slicemd5H2Write), tempFile, byteSize)
n, err := utils.CopyWithBufferN(io.MultiWriter(writers...), stream, byteSize)
written += n
if err != nil && err != io.EOF {
return nil, err
}
blockList = append(blockList, hex.EncodeToString(sliceMd5H.Sum(nil)))
sliceMd5H.Reset()
}
if tmpF != nil {
if written != streamSize {
return nil, errs.NewErr(err, "CreateTempFile failed, incoming stream actual size= %d, expect = %d ", written, streamSize)
}
_, err = tmpF.Seek(0, io.SeekStart)
if err != nil {
return nil, errs.NewErr(err, "CreateTempFile failed, can't seek to 0 ")
}
}
contentMd5 := hex.EncodeToString(fileMd5H.Sum(nil))
sliceMd5 := hex.EncodeToString(sliceMd5H2.Sum(nil))
blockListStr, _ := utils.Json.MarshalToString(blockList)
@ -260,21 +293,24 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
}
}
// step 2: upload the slices
threadG, upCtx := errgroup.NewGroupWithContext(ctx, d.uploadThread)
threadG, upCtx := errgroup.NewGroupWithContext(ctx, d.uploadThread,
retry.Attempts(1),
retry.Delay(time.Second),
retry.DelayType(retry.BackOffDelay))
sem := semaphore.NewWeighted(3)
for i, partseq := range precreateResp.BlockList {
if utils.IsCanceled(upCtx) {
break
}
if err = sem.Acquire(ctx, 1); err != nil {
break
}
i, partseq, offset, byteSize := i, partseq, int64(partseq)*sliceSize, sliceSize
if partseq+1 == count {
byteSize = lastBlockSize
}
threadG.Go(func(ctx context.Context) error {
if err = sem.Acquire(ctx, 1); err != nil {
return err
}
defer sem.Release(1)
params := map[string]string{
"method": "upload",
@ -285,7 +321,7 @@ func (d *BaiduNetdisk) Put(ctx context.Context, dstDir model.Obj, stream model.F
"partseq": strconv.Itoa(partseq),
}
err := d.uploadSlice(ctx, params, stream.GetName(),
driver.NewLimitedUploadStream(ctx, io.NewSectionReader(tempFile, offset, byteSize)))
driver.NewLimitedUploadStream(ctx, io.NewSectionReader(cache, offset, byteSize)))
if err != nil {
return err
}

View File

@ -10,7 +10,7 @@ type Addition struct {
driver.RootPath
OrderBy string `json:"order_by" type:"select" options:"name,time,size" default:"name"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc"`
DownloadAPI string `json:"download_api" type:"select" options:"official,crack" default:"official"`
DownloadAPI string `json:"download_api" type:"select" options:"official,crack,crack_video" default:"official"`
ClientID string `json:"client_id" required:"true" default:"iYCeC9g08h5vuP9UqvPHKKSVrKFXGa1v"`
ClientSecret string `json:"client_secret" required:"true" default:"jXiFMOPVPCWlO2M5CwWQzffpNPaGTRBG"`
CustomCrackUA string `json:"custom_crack_ua" required:"true" default:"netdisk"`
@ -19,6 +19,7 @@ type Addition struct {
UploadAPI string `json:"upload_api" default:"https://d.pcs.baidu.com"`
CustomUploadPartSize int64 `json:"custom_upload_part_size" type:"number" default:"0" help:"0 for auto"`
LowBandwithUploadMode bool `json:"low_bandwith_upload_mode" default:"false"`
OnlyListVideoFile bool `json:"only_list_video_file" default:"false"`
}
var config = driver.Config{

View File

@ -17,7 +17,7 @@ type TokenErrResp struct {
type File struct {
//TkbindId int `json:"tkbind_id"`
//OwnerType int `json:"owner_type"`
//Category int `json:"category"`
Category int `json:"category"`
//RealCategory string `json:"real_category"`
FsId int64 `json:"fs_id"`
//OperId int `json:"oper_id"`

View File

@ -79,6 +79,12 @@ func (d *BaiduNetdisk) request(furl string, method string, callback base.ReqCall
return retry.Unrecoverable(err2)
}
}
if 31023 == errno && d.DownloadAPI == "crack_video" {
result = res.Body()
return nil
}
return fmt.Errorf("req: [%s] ,errno: %d, refer to https://pan.baidu.com/union/doc/", furl, errno)
}
result = res.Body()
@ -131,7 +137,16 @@ func (d *BaiduNetdisk) getFiles(dir string) ([]File, error) {
if len(resp.List) == 0 {
break
}
res = append(res, resp.List...)
if d.OnlyListVideoFile {
for _, file := range resp.List {
if file.Isdir == 1 || file.Category == 1 {
res = append(res, file)
}
}
} else {
res = append(res, resp.List...)
}
}
return res, nil
}
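// Note: in Baidu's file categories, Category == 1 denotes video, so the
// filter above keeps folders (Isdir == 1) for navigation and lists only
// video entries.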
@ -187,6 +202,34 @@ func (d *BaiduNetdisk) linkCrack(file model.Obj, _ model.LinkArgs) (*model.Link,
}, nil
}
func (d *BaiduNetdisk) linkCrackVideo(file model.Obj, _ model.LinkArgs) (*model.Link, error) {
param := map[string]string{
"type": "VideoURL",
"path": fmt.Sprintf("%s", file.GetPath()),
"fs_id": file.GetID(),
"devuid": "0%1",
"clienttype": "1",
"channel": "android_15_25010PN30C_bd-netdisk_1523a",
"nom3u8": "1",
"dlink": "1",
"media": "1",
"origin": "dlna",
}
resp, err := d.request("https://pan.baidu.com/api/mediainfo", http.MethodGet, func(req *resty.Request) {
req.SetQueryParams(param)
}, nil)
if err != nil {
return nil, err
}
return &model.Link{
URL: utils.Json.Get(resp, "info", "dlink").ToString(),
Header: http.Header{
"User-Agent": []string{d.CustomCrackUA},
},
}, nil
}
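// This pairs with the errno 31023 special case in request() above: when
// DownloadAPI is "crack_video", that errno is tolerated and the raw body
// still reaches this point, so the dlink can be read from info.dlink.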
func (d *BaiduNetdisk) manage(opera string, filelist any) ([]byte, error) {
params := map[string]string{
"method": "filemanager",

View File

@ -7,7 +7,7 @@ import (
"errors"
"fmt"
"io"
"math"
"os"
"regexp"
"strconv"
"strings"
@ -16,6 +16,7 @@ import (
"golang.org/x/sync/semaphore"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
@ -241,11 +242,21 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
// TODO:
// no rapid-upload method has been found yet;
// computing the full-file md5 requires the stream to support io.Seek
tempFile, err := stream.CacheFullInTempFile()
if err != nil {
return nil, err
var (
cache = stream.GetFile()
tmpF *os.File
err error
)
if _, ok := cache.(io.ReaderAt); !ok {
tmpF, err = os.CreateTemp(conf.Conf.TempDir, "file-*")
if err != nil {
return nil, err
}
defer func() {
_ = tmpF.Close()
_ = os.Remove(tmpF.Name())
}()
cache = tmpF
}
const DEFAULT int64 = 1 << 22
@ -253,9 +264,11 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
// compute the required metadata
streamSize := stream.GetSize()
count := int(math.Ceil(float64(streamSize) / float64(DEFAULT)))
count := int(streamSize / DEFAULT)
lastBlockSize := streamSize % DEFAULT
if lastBlockSize == 0 {
if lastBlockSize > 0 {
count++
} else {
lastBlockSize = DEFAULT
}
@ -266,6 +279,11 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
sliceMd5H := md5.New()
sliceMd5H2 := md5.New()
slicemd5H2Write := utils.LimitWriter(sliceMd5H2, SliceSize)
writers := []io.Writer{fileMd5H, sliceMd5H, slicemd5H2Write}
if tmpF != nil {
writers = append(writers, tmpF)
}
written := int64(0)
for i := 1; i <= count; i++ {
if utils.IsCanceled(ctx) {
return nil, ctx.Err()
@ -273,13 +291,23 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
if i == count {
byteSize = lastBlockSize
}
_, err := utils.CopyWithBufferN(io.MultiWriter(fileMd5H, sliceMd5H, slicemd5H2Write), tempFile, byteSize)
n, err := utils.CopyWithBufferN(io.MultiWriter(writers...), stream, byteSize)
written += n
if err != nil && err != io.EOF {
return nil, err
}
sliceMD5List = append(sliceMD5List, hex.EncodeToString(sliceMd5H.Sum(nil)))
sliceMd5H.Reset()
}
if tmpF != nil {
if written != streamSize {
return nil, errs.NewErr(err, "CreateTempFile failed, incoming stream actual size= %d, expect = %d ", written, streamSize)
}
_, err = tmpF.Seek(0, io.SeekStart)
if err != nil {
return nil, errs.NewErr(err, "CreateTempFile failed, can't seek to 0 ")
}
}
contentMd5 := hex.EncodeToString(fileMd5H.Sum(nil))
sliceMd5 := hex.EncodeToString(sliceMd5H2.Sum(nil))
blockListStr, _ := utils.Json.MarshalToString(sliceMD5List)
@ -291,7 +319,7 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
"rtype": "1",
"ctype": "11",
"path": fmt.Sprintf("/%s", stream.GetName()),
"size": fmt.Sprint(stream.GetSize()),
"size": fmt.Sprint(streamSize),
"slice-md5": sliceMd5,
"content-md5": contentMd5,
"block_list": blockListStr,
@ -321,9 +349,6 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
if utils.IsCanceled(upCtx) {
break
}
if err = sem.Acquire(ctx, 1); err != nil {
break
}
i, partseq, offset, byteSize := i, partseq, int64(partseq)*DEFAULT, DEFAULT
if partseq+1 == count {
@ -331,6 +356,9 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
}
threadG.Go(func(ctx context.Context) error {
if err = sem.Acquire(ctx, 1); err != nil {
return err
}
defer sem.Release(1)
uploadParams := map[string]string{
"method": "upload",
@ -343,7 +371,7 @@ func (d *BaiduPhoto) Put(ctx context.Context, dstDir model.Obj, stream model.Fil
r.SetContext(ctx)
r.SetQueryParams(uploadParams)
r.SetFileReader("file", stream.GetName(),
driver.NewLimitedUploadStream(ctx, io.NewSectionReader(tempFile, offset, byteSize)))
driver.NewLimitedUploadStream(ctx, io.NewSectionReader(cache, offset, byteSize)))
}, nil)
if err != nil {
return err

View File

@ -1,13 +1,10 @@
package cloudreve
import (
"bytes"
"context"
"errors"
"io"
"net/http"
"path"
"strconv"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
@ -21,6 +18,7 @@ import (
type Cloudreve struct {
model.Storage
Addition
ref *Cloudreve
}
func (d *Cloudreve) Config() driver.Config {
@ -40,8 +38,18 @@ func (d *Cloudreve) Init(ctx context.Context) error {
return d.login()
}
func (d *Cloudreve) InitReference(storage driver.Driver) error {
refStorage, ok := storage.(*Cloudreve)
if ok {
d.ref = refStorage
return nil
}
return errs.NotSupport
}
func (d *Cloudreve) Drop(ctx context.Context) error {
d.Cookie = ""
d.ref = nil
return nil
}
@ -165,42 +173,18 @@ func (d *Cloudreve) Put(ctx context.Context, dstDir model.Obj, stream model.File
switch r.Policy.Type {
case "onedrive":
err = d.upOneDrive(ctx, stream, u, up)
case "s3":
err = d.upS3(ctx, stream, u, up)
case "remote": // 从机存储
err = d.upRemote(ctx, stream, u, up)
case "local": // 本机存储
var chunkSize = u.ChunkSize
var buf []byte
var chunk int
for {
var n int
buf = make([]byte, chunkSize)
n, err = io.ReadAtLeast(stream, buf, chunkSize)
if err != nil && !errors.Is(err, io.ErrUnexpectedEOF) {
if err == io.EOF {
return nil
}
return err
}
if n == 0 {
break
}
buf = buf[:n]
err = d.request(http.MethodPost, "/file/upload/"+u.SessionID+"/"+strconv.Itoa(chunk), func(req *resty.Request) {
req.SetHeader("Content-Type", "application/octet-stream")
req.SetHeader("Content-Length", strconv.Itoa(n))
req.SetBody(driver.NewLimitedUploadStream(ctx, bytes.NewReader(buf)))
}, nil)
if err != nil {
break
}
chunk++
}
err = d.upLocal(ctx, stream, u, up)
default:
err = errs.NotImplement
}
if err != nil {
// delete the failed upload session
err = d.request(http.MethodDelete, "/file/upload/"+u.SessionID, nil, nil)
_ = d.request(http.MethodDelete, "/file/upload/"+u.SessionID, nil, nil)
return err
}
return nil

View File

@ -21,11 +21,12 @@ type Policy struct {
}
type UploadInfo struct {
SessionID string `json:"sessionID"`
ChunkSize int `json:"chunkSize"`
Expires int `json:"expires"`
UploadURLs []string `json:"uploadURLs"`
Credential string `json:"credential,omitempty"`
SessionID string `json:"sessionID"`
ChunkSize int `json:"chunkSize"`
Expires int `json:"expires"`
UploadURLs []string `json:"uploadURLs"`
Credential string `json:"credential,omitempty"` // local
CompleteURL string `json:"completeURL,omitempty"` // s3
}
type DirectoryResp struct {

View File

@ -4,12 +4,14 @@ import (
"bytes"
"context"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
@ -19,7 +21,6 @@ import (
"github.com/alist-org/alist/v3/pkg/cookie"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
json "github.com/json-iterator/go"
jsoniter "github.com/json-iterator/go"
)
@ -27,17 +28,23 @@ import (
const loginPath = "/user/session"
func (d *Cloudreve) request(method string, path string, callback base.ReqCallback, out interface{}) error {
u := d.Address + "/api/v3" + path
ua := d.CustomUA
if ua == "" {
ua = base.UserAgent
func (d *Cloudreve) getUA() string {
if d.CustomUA != "" {
return d.CustomUA
}
return base.UserAgent
}
func (d *Cloudreve) request(method string, path string, callback base.ReqCallback, out interface{}) error {
if d.ref != nil {
return d.ref.request(method, path, callback, out)
}
u := d.Address + "/api/v3" + path
req := base.RestyClient.R()
req.SetHeaders(map[string]string{
"Cookie": "cloudreve-session=" + d.Cookie,
"Accept": "application/json, text/plain, */*",
"User-Agent": ua,
"User-Agent": d.getUA(),
})
var r Resp
@ -76,11 +83,11 @@ func (d *Cloudreve) request(method string, path string, callback base.ReqCallbac
}
if out != nil && r.Data != nil {
var marshal []byte
marshal, err = json.Marshal(r.Data)
marshal, err = jsoniter.Marshal(r.Data)
if err != nil {
return err
}
err = json.Unmarshal(marshal, out)
err = jsoniter.Unmarshal(marshal, out)
if err != nil {
return err
}
@ -161,15 +168,11 @@ func (d *Cloudreve) GetThumb(file Object) (model.Thumbnail, error) {
if !d.Addition.EnableThumbAndFolderSize {
return model.Thumbnail{}, nil
}
ua := d.CustomUA
if ua == "" {
ua = base.UserAgent
}
req := base.NoRedirectClient.R()
req.SetHeaders(map[string]string{
"Cookie": "cloudreve-session=" + d.Cookie,
"Accept": "image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8",
"User-Agent": ua,
"User-Agent": d.getUA(),
})
resp, err := req.Execute(http.MethodGet, d.Address+"/api/v3/file/thumb/"+file.Id)
if err != nil {
@ -180,9 +183,7 @@ func (d *Cloudreve) GetThumb(file Object) (model.Thumbnail, error) {
}, nil
}
func (d *Cloudreve) upRemote(ctx context.Context, stream model.FileStreamer, u UploadInfo, up driver.UpdateProgress) error {
uploadUrl := u.UploadURLs[0]
credential := u.Credential
func (d *Cloudreve) upLocal(ctx context.Context, stream model.FileStreamer, u UploadInfo, up driver.UpdateProgress) error {
var finish int64 = 0
var chunk int = 0
DEFAULT := int64(u.ChunkSize)
@ -190,12 +191,64 @@ func (d *Cloudreve) upRemote(ctx context.Context, stream model.FileStreamer, u U
if utils.IsCanceled(ctx) {
return ctx.Err()
}
utils.Log.Debugf("[Cloudreve-Remote] upload: %d", finish)
var byteSize = DEFAULT
left := stream.GetSize() - finish
if left < DEFAULT {
byteSize = left
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[Cloudreve-Local] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(stream, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
err = d.request(http.MethodPost, "/file/upload/"+u.SessionID+"/"+strconv.Itoa(chunk), func(req *resty.Request) {
req.SetHeader("Content-Type", "application/octet-stream")
req.SetContentLength(true)
req.SetHeader("Content-Length", strconv.FormatInt(byteSize, 10))
req.SetHeader("User-Agent", d.getUA())
req.SetBody(driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
req.AddRetryCondition(func(r *resty.Response, err error) bool {
if err != nil {
return true
}
if r.IsError() {
return true
}
var retryResp Resp
jErr := base.RestyClient.JSONUnmarshal(r.Body(), &retryResp)
if jErr != nil {
return true
}
if retryResp.Code != 0 {
return true
}
return false
})
}, nil)
if err != nil {
return err
}
finish += byteSize
up(float64(finish) * 100 / float64(stream.GetSize()))
chunk++
}
return nil
}
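// The retry condition above treats three outcomes as retryable: a transport
// error, an HTTP error status, and a 2xx response whose Cloudreve envelope
// reports a non-zero code. A chunk index therefore only advances once the
// server has positively acknowledged it.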
func (d *Cloudreve) upRemote(ctx context.Context, stream model.FileStreamer, u UploadInfo, up driver.UpdateProgress) error {
uploadUrl := u.UploadURLs[0]
credential := u.Credential
var finish int64 = 0
var chunk int = 0
DEFAULT := int64(u.ChunkSize)
retryCount := 0
maxRetries := 3
for finish < stream.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
left := stream.GetSize() - finish
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[Cloudreve-Remote] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(stream, byteData)
utils.Log.Debug(err, n)
@ -203,7 +256,7 @@ func (d *Cloudreve) upRemote(ctx context.Context, stream model.FileStreamer, u U
return err
}
req, err := http.NewRequest("POST", uploadUrl+"?chunk="+strconv.Itoa(chunk),
driver.NewLimitedUploadStream(ctx, bytes.NewBuffer(byteData)))
driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
if err != nil {
return err
}
@ -211,14 +264,44 @@ func (d *Cloudreve) upRemote(ctx context.Context, stream model.FileStreamer, u U
req.ContentLength = byteSize
// req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
req.Header.Set("Authorization", fmt.Sprint(credential))
finish += byteSize
res, err := base.HttpClient.Do(req)
if err != nil {
return err
req.Header.Set("User-Agent", d.getUA())
err = func() error {
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != 200 {
return errors.New(res.Status)
}
body, err := io.ReadAll(res.Body)
if err != nil {
return err
}
var up Resp
err = json.Unmarshal(body, &up)
if err != nil {
return err
}
if up.Code != 0 {
return errors.New(up.Msg)
}
return nil
}()
if err == nil {
retryCount = 0
finish += byteSize
up(float64(finish) * 100 / float64(stream.GetSize()))
chunk++
} else {
retryCount++
if retryCount > maxRetries {
return fmt.Errorf("upload failed after %d retries due to server errors, error: %s", maxRetries, err)
}
backoff := time.Duration(1<<retryCount) * time.Second
utils.Log.Warnf("[Cloudreve-Remote] server errors while uploading, retrying after %v...", backoff)
time.Sleep(backoff)
}
_ = res.Body.Close()
up(float64(finish) * 100 / float64(stream.GetSize()))
chunk++
}
return nil
}
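// A minimal sketch of the backoff schedule used above, assuming the same
// 1<<retryCount formula (illustrative, not driver code):
//
//	for retryCount := 1; retryCount <= maxRetries; retryCount++ {
//		backoff := time.Duration(1<<retryCount) * time.Second
//		fmt.Println(backoff) // 2s, 4s, 8s
//	}
//
// A fourth consecutive failure of the same chunk aborts the upload; any
// success resets retryCount to 0.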
@ -227,23 +310,22 @@ func (d *Cloudreve) upOneDrive(ctx context.Context, stream model.FileStreamer, u
uploadUrl := u.UploadURLs[0]
var finish int64 = 0
DEFAULT := int64(u.ChunkSize)
retryCount := 0
maxRetries := 3
for finish < stream.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
utils.Log.Debugf("[Cloudreve-OneDrive] upload: %d", finish)
var byteSize = DEFAULT
left := stream.GetSize() - finish
if left < DEFAULT {
byteSize = left
}
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[Cloudreve-OneDrive] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(stream, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest("PUT", uploadUrl, driver.NewLimitedUploadStream(ctx, bytes.NewBuffer(byteData)))
req, err := http.NewRequest("PUT", uploadUrl, driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
if err != nil {
return err
}
@ -251,24 +333,128 @@ func (d *Cloudreve) upOneDrive(ctx context.Context, stream model.FileStreamer, u
req.ContentLength = byteSize
// req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
req.Header.Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", finish, finish+byteSize-1, stream.GetSize()))
req.Header.Set("User-Agent", d.getUA())
finish += byteSize
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
// https://learn.microsoft.com/zh-cn/onedrive/developer/rest-api/api/driveitem_createuploadsession
if res.StatusCode != 201 && res.StatusCode != 202 && res.StatusCode != 200 {
switch {
case res.StatusCode >= 500 && res.StatusCode <= 504:
retryCount++
if retryCount > maxRetries {
res.Body.Close()
return fmt.Errorf("upload failed after %d retries due to server errors, error %d", maxRetries, res.StatusCode)
}
backoff := time.Duration(1<<retryCount) * time.Second
utils.Log.Warnf("[Cloudreve-OneDrive] server errors %d while uploading, retrying after %v...", res.StatusCode, backoff)
time.Sleep(backoff)
case res.StatusCode != 201 && res.StatusCode != 202 && res.StatusCode != 200:
data, _ := io.ReadAll(res.Body)
_ = res.Body.Close()
res.Body.Close()
return errors.New(string(data))
default:
res.Body.Close()
retryCount = 0
finish += byteSize
up(float64(finish) * 100 / float64(stream.GetSize()))
}
_ = res.Body.Close()
up(float64(finish) * 100 / float64(stream.GetSize()))
}
// upload succeeded; send the callback request
err := d.request(http.MethodPost, "/callback/onedrive/finish/"+u.SessionID, func(req *resty.Request) {
return d.request(http.MethodPost, "/callback/onedrive/finish/"+u.SessionID, func(req *resty.Request) {
req.SetBody("{}")
}, nil)
}
func (d *Cloudreve) upS3(ctx context.Context, stream model.FileStreamer, u UploadInfo, up driver.UpdateProgress) error {
var finish int64 = 0
var chunk int = 0
var etags []string
DEFAULT := int64(u.ChunkSize)
retryCount := 0
maxRetries := 3
for finish < stream.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
left := stream.GetSize() - finish
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[Cloudreve-S3] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(stream, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest("PUT", u.UploadURLs[chunk],
driver.NewLimitedUploadStream(ctx, bytes.NewBuffer(byteData)))
if err != nil {
return err
}
req = req.WithContext(ctx)
req.ContentLength = byteSize
finish += byteSize
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
etag := res.Header.Get("ETag")
res.Body.Close()
switch {
case res.StatusCode != 200:
retryCount++
if retryCount > maxRetries {
return fmt.Errorf("upload failed after %d retries due to server errors, error %d", maxRetries, res.StatusCode)
}
backoff := time.Duration(1<<retryCount) * time.Second
utils.Log.Warnf("[Cloudreve-S3] server errors %d while uploading, retrying after %v...", res.StatusCode, backoff)
time.Sleep(backoff)
case etag == "":
return errors.New("faild to get ETag from header")
default:
retryCount = 0
etags = append(etags, etag)
finish += byteSize
up(float64(finish) * 100 / float64(stream.GetSize()))
chunk++
}
}
// s3LikeFinishUpload
// https://github.com/cloudreve/frontend/blob/b485bf297974cbe4834d2e8e744ae7b7e5b2ad39/src/component/Uploader/core/api/index.ts#L204-L252
bodyBuilder := &strings.Builder{}
bodyBuilder.WriteString("<CompleteMultipartUpload>")
for i, etag := range etags {
bodyBuilder.WriteString(fmt.Sprintf(
`<Part><PartNumber>%d</PartNumber><ETag>%s</ETag></Part>`,
i+1, // PartNumber starts at 1
etag,
))
}
bodyBuilder.WriteString("</CompleteMultipartUpload>")
req, err := http.NewRequest(
"POST",
u.CompleteURL,
strings.NewReader(bodyBuilder.String()),
)
if err != nil {
return err
}
req.Header.Set("Content-Type", "application/xml")
req.Header.Set("User-Agent", d.getUA())
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
body, _ := io.ReadAll(res.Body)
return fmt.Errorf("up status: %d, error: %s", res.StatusCode, string(body))
}
// upload succeeded; send the callback request
err = d.request(http.MethodGet, "/callback/s3/"+u.SessionID, nil, nil)
if err != nil {
return err
}
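// For a two-part upload, the completion body assembled above is the standard
// S3 CompleteMultipartUpload payload (the builder emits it without line
// breaks; ETags are taken verbatim from the PUT response headers):
//
//	<CompleteMultipartUpload>
//	  <Part><PartNumber>1</PartNumber><ETag>"etag-1"</ETag></Part>
//	  <Part><PartNumber>2</PartNumber><ETag>"etag-2"</ETag></Part>
//	</CompleteMultipartUpload>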

View File

@ -0,0 +1,305 @@
package cloudreve_v4
import (
"context"
"errors"
"net/http"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
)
type CloudreveV4 struct {
model.Storage
Addition
ref *CloudreveV4
}
func (d *CloudreveV4) Config() driver.Config {
if d.ref != nil {
return d.ref.Config()
}
if d.EnableVersionUpload {
config.NoOverwriteUpload = false
}
return config
}
func (d *CloudreveV4) GetAddition() driver.Additional {
return &d.Addition
}
func (d *CloudreveV4) Init(ctx context.Context) error {
// removing trailing slash
d.Address = strings.TrimSuffix(d.Address, "/")
op.MustSaveDriverStorage(d)
if d.ref != nil {
return nil
}
if d.AccessToken == "" && d.RefreshToken != "" {
return d.refreshToken()
}
if d.Username != "" {
return d.login()
}
return nil
}
func (d *CloudreveV4) InitReference(storage driver.Driver) error {
refStorage, ok := storage.(*CloudreveV4)
if ok {
d.ref = refStorage
return nil
}
return errs.NotSupport
}
func (d *CloudreveV4) Drop(ctx context.Context) error {
d.ref = nil
return nil
}
func (d *CloudreveV4) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
const pageSize int = 100
var f []File
var r FileResp
params := map[string]string{
"page_size": strconv.Itoa(pageSize),
"uri": dir.GetPath(),
"order_by": d.OrderBy,
"order_direction": d.OrderDirection,
"page": "0",
}
for {
err := d.request(http.MethodGet, "/file", func(req *resty.Request) {
req.SetQueryParams(params)
}, &r)
if err != nil {
return nil, err
}
f = append(f, r.Files...)
if r.Pagination.NextToken == "" || len(r.Files) < pageSize {
break
}
params["next_page_token"] = r.Pagination.NextToken
}
return utils.SliceConvert(f, func(src File) (model.Obj, error) {
if d.EnableFolderSize && src.Type == 1 {
var ds FolderSummaryResp
err := d.request(http.MethodGet, "/file/info", func(req *resty.Request) {
req.SetQueryParam("uri", src.Path)
req.SetQueryParam("folder_summary", "true")
}, &ds)
if err == nil && ds.FolderSummary.Size > 0 {
src.Size = ds.FolderSummary.Size
}
}
var thumb model.Thumbnail
if d.EnableThumb && src.Type == 0 {
var t FileThumbResp
err := d.request(http.MethodGet, "/file/thumb", func(req *resty.Request) {
req.SetQueryParam("uri", src.Path)
}, &t)
if err == nil && t.URL != "" {
thumb = model.Thumbnail{
Thumbnail: t.URL,
}
}
}
return &model.ObjThumb{
Object: model.Object{
ID: src.ID,
Path: src.Path,
Name: src.Name,
Size: src.Size,
Modified: src.UpdatedAt,
Ctime: src.CreatedAt,
IsFolder: src.Type == 1,
},
Thumbnail: thumb,
}, nil
})
}
func (d *CloudreveV4) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
var url FileUrlResp
err := d.request(http.MethodPost, "/file/url", func(req *resty.Request) {
req.SetBody(base.Json{
"uris": []string{file.GetPath()},
"download": true,
})
}, &url)
if err != nil {
return nil, err
}
if len(url.Urls) == 0 {
return nil, errors.New("server returns no url")
}
exp := time.Until(url.Expires)
return &model.Link{
URL: url.Urls[0].URL,
Expiration: &exp,
}, nil
}
func (d *CloudreveV4) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
return d.request(http.MethodPost, "/file/create", func(req *resty.Request) {
req.SetBody(base.Json{
"type": "folder",
"uri": parentDir.GetPath() + "/" + dirName,
"error_on_conflict": true,
})
}, nil)
}
func (d *CloudreveV4) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
return d.request(http.MethodPost, "/file/move", func(req *resty.Request) {
req.SetBody(base.Json{
"uris": []string{srcObj.GetPath()},
"dst": dstDir.GetPath(),
"copy": false,
})
}, nil)
}
func (d *CloudreveV4) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
return d.request(http.MethodPost, "/file/create", func(req *resty.Request) {
req.SetBody(base.Json{
"new_name": newName,
"uri": srcObj.GetPath(),
})
}, nil)
}
func (d *CloudreveV4) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
return d.request(http.MethodPost, "/file/move", func(req *resty.Request) {
req.SetBody(base.Json{
"uris": []string{srcObj.GetPath()},
"dst": dstDir.GetPath(),
"copy": true,
})
}, nil)
}
func (d *CloudreveV4) Remove(ctx context.Context, obj model.Obj) error {
return d.request(http.MethodDelete, "/file", func(req *resty.Request) {
req.SetBody(base.Json{
"uris": []string{obj.GetPath()},
"unlink": false,
"skip_soft_delete": true,
})
}, nil)
}
func (d *CloudreveV4) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
if file.GetSize() == 0 {
// empty files are created via the create-file API to avoid a stuck upload session
return d.request(http.MethodPost, "/file/create", func(req *resty.Request) {
req.SetBody(base.Json{
"type": "file",
"uri": dstDir.GetPath() + "/" + file.GetName(),
"error_on_conflict": true,
})
}, nil)
}
var p StoragePolicy
var r FileResp
var u FileUploadResp
var err error
params := map[string]string{
"page_size": "10",
"uri": dstDir.GetPath(),
"order_by": "created_at",
"order_direction": "asc",
"page": "0",
}
err = d.request(http.MethodGet, "/file", func(req *resty.Request) {
req.SetQueryParams(params)
}, &r)
if err != nil {
return err
}
p = r.StoragePolicy
body := base.Json{
"uri": dstDir.GetPath() + "/" + file.GetName(),
"size": file.GetSize(),
"policy_id": p.ID,
"last_modified": file.ModTime().UnixMilli(),
"mime_type": "",
}
if d.EnableVersionUpload {
body["entity_type"] = "version"
}
err = d.request(http.MethodPut, "/file/upload", func(req *resty.Request) {
req.SetBody(body)
}, &u)
if err != nil {
return err
}
if u.StoragePolicy.Relay {
err = d.upLocal(ctx, file, u, up)
} else {
switch u.StoragePolicy.Type {
case "local":
err = d.upLocal(ctx, file, u, up)
case "remote":
err = d.upRemote(ctx, file, u, up)
case "onedrive":
err = d.upOneDrive(ctx, file, u, up)
case "s3":
err = d.upS3(ctx, file, u, up)
default:
return errs.NotImplement
}
}
if err != nil {
// delete the failed upload session
_ = d.request(http.MethodDelete, "/file/upload", func(req *resty.Request) {
req.SetBody(base.Json{
"id": u.SessionID,
"uri": u.URI,
})
}, nil)
return err
}
return nil
}
func (d *CloudreveV4) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
// TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *CloudreveV4) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
// TODO list args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *CloudreveV4) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// TODO return link of file args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *CloudreveV4) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
// TODO extract args.InnerPath path in the archive srcObj to the dstDir location, optional
// a folder with the same name as the archive file needs to be created to store the extracted results if args.PutIntoNewDir
// return errs.NotImplement to use an internal archive tool
return nil, errs.NotImplement
}
//func (d *CloudreveV4) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*CloudreveV4)(nil)

View File

@ -0,0 +1,44 @@
package cloudreve_v4
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
// Usually one of two
driver.RootPath
// driver.RootID
// define other
Address string `json:"address" required:"true"`
Username string `json:"username"`
Password string `json:"password"`
AccessToken string `json:"access_token"`
RefreshToken string `json:"refresh_token"`
CustomUA string `json:"custom_ua"`
EnableFolderSize bool `json:"enable_folder_size"`
EnableThumb bool `json:"enable_thumb"`
EnableVersionUpload bool `json:"enable_version_upload"`
OrderBy string `json:"order_by" type:"select" options:"name,size,updated_at,created_at" default:"name" required:"true"`
OrderDirection string `json:"order_direction" type:"select" options:"asc,desc" default:"asc" required:"true"`
}
var config = driver.Config{
Name: "Cloudreve V4",
LocalSort: false,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
NeedMs: false,
DefaultRoot: "cloudreve://my",
CheckStatus: true,
Alert: "",
NoOverwriteUpload: true,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &CloudreveV4{}
})
}

View File

@ -0,0 +1,164 @@
package cloudreve_v4
import (
"time"
"github.com/alist-org/alist/v3/internal/model"
)
type Object struct {
model.Object
StoragePolicy StoragePolicy
}
type Resp struct {
Code int `json:"code"`
Msg string `json:"msg"`
Data any `json:"data"`
}
type BasicConfigResp struct {
InstanceID string `json:"instance_id"`
// Title string `json:"title"`
// Themes string `json:"themes"`
// DefaultTheme string `json:"default_theme"`
User struct {
ID string `json:"id"`
// Nickname string `json:"nickname"`
// CreatedAt time.Time `json:"created_at"`
// Anonymous bool `json:"anonymous"`
Group struct {
ID string `json:"id"`
Name string `json:"name"`
Permission string `json:"permission"`
} `json:"group"`
} `json:"user"`
// Logo string `json:"logo"`
// LogoLight string `json:"logo_light"`
// CaptchaReCaptchaKey string `json:"captcha_ReCaptchaKey"`
CaptchaType string `json:"captcha_type"` // support 'normal' only
// AppPromotion bool `json:"app_promotion"`
}
type SiteLoginConfigResp struct {
LoginCaptcha bool `json:"login_captcha"`
Authn bool `json:"authn"`
}
type PrepareLoginResp struct {
WebauthnEnabled bool `json:"webauthn_enabled"`
PasswordEnabled bool `json:"password_enabled"`
}
type CaptchaResp struct {
Image string `json:"image"`
Ticket string `json:"ticket"`
}
type Token struct {
AccessToken string `json:"access_token"`
RefreshToken string `json:"refresh_token"`
AccessExpires time.Time `json:"access_expires"`
RefreshExpires time.Time `json:"refresh_expires"`
}
type TokenResponse struct {
User struct {
ID string `json:"id"`
// Email string `json:"email"`
// Nickname string `json:"nickname"`
Status string `json:"status"`
// CreatedAt time.Time `json:"created_at"`
Group struct {
ID string `json:"id"`
Name string `json:"name"`
Permission string `json:"permission"`
// DirectLinkBatchSize int `json:"direct_link_batch_size"`
// TrashRetention int `json:"trash_retention"`
} `json:"group"`
// Language string `json:"language"`
} `json:"user"`
Token Token `json:"token"`
}
type File struct {
Type int `json:"type"` // 0: file, 1: folder
ID string `json:"id"`
Name string `json:"name"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
Size int64 `json:"size"`
Metadata interface{} `json:"metadata"`
Path string `json:"path"`
Capability string `json:"capability"`
Owned bool `json:"owned"`
PrimaryEntity string `json:"primary_entity"`
}
type StoragePolicy struct {
ID string `json:"id"`
Name string `json:"name"`
Type string `json:"type"`
MaxSize int64 `json:"max_size"`
Relay bool `json:"relay,omitempty"`
}
type Pagination struct {
Page int `json:"page"`
PageSize int `json:"page_size"`
IsCursor bool `json:"is_cursor"`
NextToken string `json:"next_token,omitempty"`
}
type Props struct {
Capability string `json:"capability"`
MaxPageSize int `json:"max_page_size"`
OrderByOptions []string `json:"order_by_options"`
OrderDirectionOptions []string `json:"order_direction_options"`
}
type FileResp struct {
Files []File `json:"files"`
Parent File `json:"parent"`
Pagination Pagination `json:"pagination"`
Props Props `json:"props"`
ContextHint string `json:"context_hint"`
MixedType bool `json:"mixed_type"`
StoragePolicy StoragePolicy `json:"storage_policy"`
}
type FileUrlResp struct {
Urls []struct {
URL string `json:"url"`
} `json:"urls"`
Expires time.Time `json:"expires"`
}
type FileUploadResp struct {
// UploadID string `json:"upload_id"`
SessionID string `json:"session_id"`
ChunkSize int64 `json:"chunk_size"`
Expires int64 `json:"expires"`
StoragePolicy StoragePolicy `json:"storage_policy"`
URI string `json:"uri"`
CompleteURL string `json:"completeURL,omitempty"` // for S3-like
CallbackSecret string `json:"callback_secret,omitempty"` // for S3-like, OneDrive
UploadUrls []string `json:"upload_urls,omitempty"` // for not-local
Credential string `json:"credential,omitempty"` // for local
}
type FileThumbResp struct {
URL string `json:"url"`
Expires time.Time `json:"expires"`
}
type FolderSummaryResp struct {
File
FolderSummary struct {
Size int64 `json:"size"`
Files int64 `json:"files"`
Folders int64 `json:"folders"`
Completed bool `json:"completed"`
CalculatedAt time.Time `json:"calculated_at"`
} `json:"folder_summary"`
}

View File

@ -0,0 +1,476 @@
package cloudreve_v4
import (
"bytes"
"context"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/conf"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/internal/setting"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
jsoniter "github.com/json-iterator/go"
)
// do others that not defined in Driver interface
func (d *CloudreveV4) getUA() string {
if d.CustomUA != "" {
return d.CustomUA
}
return base.UserAgent
}
func (d *CloudreveV4) request(method string, path string, callback base.ReqCallback, out any) error {
if d.ref != nil {
return d.ref.request(method, path, callback, out)
}
u := d.Address + "/api/v4" + path
req := base.RestyClient.R()
req.SetHeaders(map[string]string{
"Accept": "application/json, text/plain, */*",
"User-Agent": d.getUA(),
})
if d.AccessToken != "" {
req.SetHeader("Authorization", "Bearer "+d.AccessToken)
}
var r Resp
req.SetResult(&r)
if callback != nil {
callback(req)
}
resp, err := req.Execute(method, u)
if err != nil {
return err
}
if !resp.IsSuccess() {
return errors.New(resp.String())
}
if r.Code != 0 {
if r.Code == 401 && d.RefreshToken != "" && path != "/session/token/refresh" {
// try to refresh token
err = d.refreshToken()
if err != nil {
return err
}
return d.request(method, path, callback, out)
}
return errors.New(r.Msg)
}
if out != nil && r.Data != nil {
var marshal []byte
marshal, err = json.Marshal(r.Data)
if err != nil {
return err
}
err = json.Unmarshal(marshal, out)
if err != nil {
return err
}
}
return nil
}
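// The 401 branch above refreshes the token once and then replays the
// original request; excluding "/session/token/refresh" itself from that
// retry is what breaks the potential token-refresh loop.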
func (d *CloudreveV4) login() error {
var siteConfig SiteLoginConfigResp
err := d.request(http.MethodGet, "/site/config/login", nil, &siteConfig)
if err != nil {
return err
}
if !siteConfig.Authn {
return errors.New("authn not support")
}
var prepareLogin PrepareLoginResp
err = d.request(http.MethodGet, "/session/prepare?email="+d.Addition.Username, nil, &prepareLogin)
if err != nil {
return err
}
if !prepareLogin.PasswordEnabled {
return errors.New("password not enabled")
}
if prepareLogin.WebauthnEnabled {
return errors.New("webauthn not support")
}
for range 5 {
err = d.doLogin(siteConfig.LoginCaptcha)
if err == nil {
break
}
if err.Error() != "CAPTCHA not match." {
break
}
}
return err
}
func (d *CloudreveV4) doLogin(needCaptcha bool) error {
var err error
loginBody := base.Json{
"email": d.Username,
"password": d.Password,
}
if needCaptcha {
var config BasicConfigResp
err = d.request(http.MethodGet, "/site/config/basic", nil, &config)
if err != nil {
return err
}
if config.CaptchaType != "normal" {
return fmt.Errorf("captcha type %s not support", config.CaptchaType)
}
var captcha CaptchaResp
err = d.request(http.MethodGet, "/site/captcha", nil, &captcha)
if err != nil {
return err
}
if !strings.HasPrefix(captcha.Image, "data:image/png;base64,") {
return errors.New("can not get captcha")
}
loginBody["ticket"] = captcha.Ticket
i := strings.Index(captcha.Image, ",")
dec := base64.NewDecoder(base64.StdEncoding, strings.NewReader(captcha.Image[i+1:]))
vRes, err := base.RestyClient.R().SetMultipartField(
"image", "validateCode.png", "image/png", dec).
Post(setting.GetStr(conf.OcrApi))
if err != nil {
return err
}
if jsoniter.Get(vRes.Body(), "status").ToInt() != 200 {
return errors.New("ocr error:" + jsoniter.Get(vRes.Body(), "msg").ToString())
}
captchaCode := jsoniter.Get(vRes.Body(), "result").ToString()
if captchaCode == "" {
return errors.New("ocr error: empty result")
}
loginBody["captcha"] = captchaCode
}
var token TokenResponse
err = d.request(http.MethodPost, "/session/token", func(req *resty.Request) {
req.SetBody(loginBody)
}, &token)
if err != nil {
return err
}
d.AccessToken, d.RefreshToken = token.Token.AccessToken, token.Token.RefreshToken
op.MustSaveDriverStorage(d)
return nil
}
func (d *CloudreveV4) refreshToken() error {
var token Token
if d.RefreshToken == "" {
if d.Username != "" {
err := d.login()
if err != nil {
return fmt.Errorf("cannot login to get refresh token, error: %s", err)
}
}
return nil
}
err := d.request(http.MethodPost, "/session/token/refresh", func(req *resty.Request) {
req.SetBody(base.Json{
"refresh_token": d.RefreshToken,
})
}, &token)
if err != nil {
return err
}
d.AccessToken, d.RefreshToken = token.AccessToken, token.RefreshToken
op.MustSaveDriverStorage(d)
return nil
}
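// Note: with the guard above, a missing refresh token falls back to a
// username/password login when credentials are configured; only a stored
// d.RefreshToken triggers the /session/token/refresh call that rotates
// both tokens.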
func (d *CloudreveV4) upLocal(ctx context.Context, file model.FileStreamer, u FileUploadResp, up driver.UpdateProgress) error {
var finish int64 = 0
var chunk int = 0
DEFAULT := int64(u.ChunkSize)
if DEFAULT == 0 {
// support relay
DEFAULT = file.GetSize()
}
for finish < file.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
left := file.GetSize() - finish
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[CloudreveV4-Local] upload range: %d-%d/%d", finish, finish+byteSize-1, file.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(file, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
err = d.request(http.MethodPost, "/file/upload/"+u.SessionID+"/"+strconv.Itoa(chunk), func(req *resty.Request) {
req.SetHeader("Content-Type", "application/octet-stream")
req.SetContentLength(true)
req.SetHeader("Content-Length", strconv.FormatInt(byteSize, 10))
req.SetBody(driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
req.AddRetryCondition(func(r *resty.Response, err error) bool {
if err != nil {
return true
}
if r.IsError() {
return true
}
var retryResp Resp
jErr := base.RestyClient.JSONUnmarshal(r.Body(), &retryResp)
if jErr != nil {
return true
}
if retryResp.Code != 0 {
return true
}
return false
})
}, nil)
if err != nil {
return err
}
finish += byteSize
up(float64(finish) * 100 / float64(file.GetSize()))
chunk++
}
return nil
}
func (d *CloudreveV4) upRemote(ctx context.Context, file model.FileStreamer, u FileUploadResp, up driver.UpdateProgress) error {
uploadUrl := u.UploadUrls[0]
credential := u.Credential
var finish int64 = 0
var chunk int = 0
DEFAULT := int64(u.ChunkSize)
retryCount := 0
maxRetries := 3
for finish < file.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
left := file.GetSize() - finish
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[CloudreveV4-Remote] upload range: %d-%d/%d", finish, finish+byteSize-1, file.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(file, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest("POST", uploadUrl+"?chunk="+strconv.Itoa(chunk),
driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
if err != nil {
return err
}
req = req.WithContext(ctx)
req.ContentLength = byteSize
// req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
req.Header.Set("Authorization", fmt.Sprint(credential))
req.Header.Set("User-Agent", d.getUA())
err = func() error {
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != 200 {
return errors.New(res.Status)
}
body, err := io.ReadAll(res.Body)
if err != nil {
return err
}
var up Resp
err = json.Unmarshal(body, &up)
if err != nil {
return err
}
if up.Code != 0 {
return errors.New(up.Msg)
}
return nil
}()
if err == nil {
retryCount = 0
finish += byteSize
up(float64(finish) * 100 / float64(file.GetSize()))
chunk++
} else {
retryCount++
if retryCount > maxRetries {
return fmt.Errorf("upload failed after %d retries due to server errors, error: %s", maxRetries, err)
}
backoff := time.Duration(1<<retryCount) * time.Second
utils.Log.Warnf("[Cloudreve-Remote] server errors while uploading, retrying after %v...", backoff)
time.Sleep(backoff)
}
}
return nil
}
func (d *CloudreveV4) upOneDrive(ctx context.Context, file model.FileStreamer, u FileUploadResp, up driver.UpdateProgress) error {
uploadUrl := u.UploadUrls[0]
var finish int64 = 0
DEFAULT := int64(u.ChunkSize)
retryCount := 0
maxRetries := 3
for finish < file.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
left := file.GetSize() - finish
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[CloudreveV4-OneDrive] upload range: %d-%d/%d", finish, finish+byteSize-1, file.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(file, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest(http.MethodPut, uploadUrl, driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
if err != nil {
return err
}
req = req.WithContext(ctx)
req.ContentLength = byteSize
// req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
req.Header.Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", finish, finish+byteSize-1, file.GetSize()))
req.Header.Set("User-Agent", d.getUA())
finish += byteSize
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
// https://learn.microsoft.com/zh-cn/onedrive/developer/rest-api/api/driveitem_createuploadsession
switch {
case res.StatusCode >= 500 && res.StatusCode <= 504:
retryCount++
if retryCount > maxRetries {
res.Body.Close()
return fmt.Errorf("upload failed after %d retries due to server errors, error %d", maxRetries, res.StatusCode)
}
backoff := time.Duration(1<<retryCount) * time.Second
utils.Log.Warnf("[CloudreveV4-OneDrive] server errors %d while uploading, retrying after %v...", res.StatusCode, backoff)
time.Sleep(backoff)
case res.StatusCode != 201 && res.StatusCode != 202 && res.StatusCode != 200:
data, _ := io.ReadAll(res.Body)
res.Body.Close()
return errors.New(string(data))
default:
res.Body.Close()
retryCount = 0
finish += byteSize
up(float64(finish) * 100 / float64(file.GetSize()))
}
}
// upload succeeded; send the callback request
return d.request(http.MethodPost, "/callback/onedrive/"+u.SessionID+"/"+u.CallbackSecret, func(req *resty.Request) {
req.SetBody("{}")
}, nil)
}
func (d *CloudreveV4) upS3(ctx context.Context, file model.FileStreamer, u FileUploadResp, up driver.UpdateProgress) error {
var finish int64 = 0
var chunk int = 0
var etags []string
DEFAULT := int64(u.ChunkSize)
retryCount := 0
maxRetries := 3
for finish < file.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
left := file.GetSize() - finish
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[CloudreveV4-S3] upload range: %d-%d/%d", finish, finish+byteSize-1, file.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(file, byteData)
utils.Log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest(http.MethodPut, u.UploadUrls[chunk],
driver.NewLimitedUploadStream(ctx, bytes.NewBuffer(byteData)))
if err != nil {
return err
}
req = req.WithContext(ctx)
req.ContentLength = byteSize
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
etag := res.Header.Get("ETag")
res.Body.Close()
switch {
case res.StatusCode != 200:
retryCount++
if retryCount > maxRetries {
return fmt.Errorf("upload failed after %d retries due to server errors", maxRetries)
}
backoff := time.Duration(1<<retryCount) * time.Second
utils.Log.Warnf("server error %d, retrying after %v...", res.StatusCode, backoff)
time.Sleep(backoff)
case etag == "":
return errors.New("faild to get ETag from header")
default:
retryCount = 0
etags = append(etags, etag)
finish += byteSize
up(float64(finish) * 100 / float64(file.GetSize()))
chunk++
}
}
// s3LikeFinishUpload
bodyBuilder := &strings.Builder{}
bodyBuilder.WriteString("<CompleteMultipartUpload>")
for i, etag := range etags {
bodyBuilder.WriteString(fmt.Sprintf(
`<Part><PartNumber>%d</PartNumber><ETag>%s</ETag></Part>`,
i+1, // PartNumber starts at 1
etag,
))
}
bodyBuilder.WriteString("</CompleteMultipartUpload>")
req, err := http.NewRequest(
"POST",
u.CompleteURL,
strings.NewReader(bodyBuilder.String()),
)
if err != nil {
return err
}
req.Header.Set("Content-Type", "application/xml")
req.Header.Set("User-Agent", d.getUA())
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
body, _ := io.ReadAll(res.Body)
return fmt.Errorf("up status: %d, error: %s", res.StatusCode, string(body))
}
// upload succeeded; send the callback request
return d.request(http.MethodPost, "/callback/s3/"+u.SessionID+"/"+u.CallbackSecret, func(req *resty.Request) {
req.SetBody("{}")
}, nil)
}

drivers/doubao/driver.go Normal file
View File

@ -0,0 +1,271 @@
package doubao
import (
"context"
"errors"
"net/http"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
"github.com/google/uuid"
)
type Doubao struct {
model.Storage
Addition
*UploadToken
UserId string
uploadThread int
}
func (d *Doubao) Config() driver.Config {
return config
}
func (d *Doubao) GetAddition() driver.Additional {
return &d.Addition
}
func (d *Doubao) Init(ctx context.Context) error {
// TODO login / refresh token
//op.MustSaveDriverStorage(d)
uploadThread, err := strconv.Atoi(d.UploadThread)
if err != nil || uploadThread < 1 {
d.uploadThread, d.UploadThread = 3, "3" // Set default value
} else {
d.uploadThread = uploadThread
}
if d.UserId == "" {
userInfo, err := d.getUserInfo()
if err != nil {
return err
}
d.UserId = strconv.FormatInt(userInfo.UserID, 10)
}
if d.UploadToken == nil {
uploadToken, err := d.initUploadToken()
if err != nil {
return err
}
d.UploadToken = uploadToken
}
return nil
}
func (d *Doubao) Drop(ctx context.Context) error {
return nil
}
func (d *Doubao) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
var files []model.Obj
fileList, err := d.getFiles(dir.GetID(), "")
if err != nil {
return nil, err
}
for _, child := range fileList {
files = append(files, &Object{
Object: model.Object{
ID: child.ID,
Path: child.ParentID,
Name: child.Name,
Size: child.Size,
Modified: time.Unix(child.UpdateTime, 0),
Ctime: time.Unix(child.CreateTime, 0),
IsFolder: child.NodeType == 1,
},
Key: child.Key,
NodeType: child.NodeType,
})
}
return files, nil
}
func (d *Doubao) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
var downloadUrl string
if u, ok := file.(*Object); ok {
switch d.DownloadApi {
case "get_download_info":
var r GetDownloadInfoResp
_, err := d.request("/samantha/aispace/get_download_info", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"requests": []base.Json{{"node_id": file.GetID()}},
})
}, &r)
if err != nil {
return nil, err
}
downloadUrl = r.Data.DownloadInfos[0].MainURL
case "get_file_url":
switch u.NodeType {
case VideoType, AudioType:
var r GetVideoFileUrlResp
_, err := d.request("/samantha/media/get_play_info", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"key": u.Key,
"node_id": file.GetID(),
})
}, &r)
if err != nil {
return nil, err
}
downloadUrl = r.Data.OriginalMediaInfo.MainURL
default:
var r GetFileUrlResp
_, err := d.request("/alice/message/get_file_url", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"uris": []string{u.Key},
"type": FileNodeType[u.NodeType],
})
}, &r)
if err != nil {
return nil, err
}
downloadUrl = r.Data.FileUrls[0].MainURL
}
default:
return nil, errs.NotImplement
}
// generate a standard Content-Disposition header
contentDisposition := generateContentDisposition(u.Name)
return &model.Link{
URL: downloadUrl,
Header: http.Header{
"User-Agent": []string{UserAgent},
"Content-Disposition": []string{contentDisposition},
},
}, nil
}
return nil, errors.New("can't convert obj to URL")
}
func (d *Doubao) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
var r UploadNodeResp
_, err := d.request("/samantha/aispace/upload_node", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"node_list": []base.Json{
{
"local_id": uuid.New().String(),
"name": dirName,
"parent_id": parentDir.GetID(),
"node_type": 1,
},
},
})
}, &r)
return err
}
func (d *Doubao) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
var r UploadNodeResp
_, err := d.request("/samantha/aispace/move_node", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"node_list": []base.Json{
{"id": srcObj.GetID()},
},
"current_parent_id": srcObj.GetPath(),
"target_parent_id": dstDir.GetID(),
})
}, &r)
return err
}
func (d *Doubao) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
var r BaseResp
_, err := d.request("/samantha/aispace/rename_node", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"node_id": srcObj.GetID(),
"node_name": newName,
})
}, &r)
return err
}
func (d *Doubao) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
// TODO copy obj, optional
return nil, errs.NotImplement
}
func (d *Doubao) Remove(ctx context.Context, obj model.Obj) error {
var r BaseResp
_, err := d.request("/samantha/aispace/delete_node", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{"node_list": []base.Json{{"id": obj.GetID()}}})
}, &r)
return err
}
func (d *Doubao) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
// determine the data type from the MIME type
mimetype := file.GetMimetype()
dataType := FileDataType
switch {
case strings.HasPrefix(mimetype, "video/"):
dataType = VideoDataType
case strings.HasPrefix(mimetype, "audio/"):
dataType = VideoDataType // audio is handled the same way as video
case strings.HasPrefix(mimetype, "image/"):
dataType = ImgDataType
}
// fetch the upload configuration
uploadConfig := UploadConfig{}
if err := d.getUploadConfig(&uploadConfig, dataType, file); err != nil {
return nil, err
}
// choose the upload method by file size
if file.GetSize() <= 1*utils.MB { // files of 1 MB or less use the simple upload mode
return d.Upload(&uploadConfig, dstDir, file, up, dataType)
}
// larger files use multipart upload
return d.UploadByMultipart(ctx, &uploadConfig, file.GetSize(), dstDir, file, up, dataType)
}
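// Size heuristic: files of 1 MB or less take the single-request Upload path;
// larger files go through UploadByMultipart, which is presumably where the
// uploadThread worker count configured in Init applies.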
func (d *Doubao) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
// TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Doubao) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
// TODO list args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Doubao) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// TODO return link of file args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *Doubao) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
// TODO extract args.InnerPath path in the archive srcObj to the dstDir location, optional
// a folder with the same name as the archive file needs to be created to store the extracted results if args.PutIntoNewDir
// return errs.NotImplement to use an internal archive tool
return nil, errs.NotImplement
}
//func (d *Doubao) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*Doubao)(nil)

drivers/doubao/meta.go Normal file
View File

@ -0,0 +1,36 @@
package doubao
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
// Usually one of two
// driver.RootPath
driver.RootID
// define other
Cookie string `json:"cookie" type:"text"`
UploadThread string `json:"upload_thread" default:"3"`
DownloadApi string `json:"download_api" type:"select" options:"get_file_url,get_download_info" default:"get_file_url"`
}
var config = driver.Config{
Name: "Doubao",
LocalSort: true,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: false,
NeedMs: false,
DefaultRoot: "0",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: false,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &Doubao{}
})
}

drivers/doubao/types.go Normal file
View File

@ -0,0 +1,415 @@
package doubao
import (
"encoding/json"
"fmt"
"time"
"github.com/alist-org/alist/v3/internal/model"
)
type BaseResp struct {
Code int `json:"code"`
Msg string `json:"msg"`
}
type NodeInfoResp struct {
BaseResp
Data struct {
NodeInfo File `json:"node_info"`
Children []File `json:"children"`
NextCursor string `json:"next_cursor"`
HasMore bool `json:"has_more"`
} `json:"data"`
}
type File struct {
ID string `json:"id"`
Name string `json:"name"`
Key string `json:"key"`
NodeType int `json:"node_type"` // 0: file, 1: folder
Size int64 `json:"size"`
Source int `json:"source"`
NameReviewStatus int `json:"name_review_status"`
ContentReviewStatus int `json:"content_review_status"`
RiskReviewStatus int `json:"risk_review_status"`
ConversationID string `json:"conversation_id"`
ParentID string `json:"parent_id"`
CreateTime int64 `json:"create_time"`
UpdateTime int64 `json:"update_time"`
}
type GetDownloadInfoResp struct {
BaseResp
Data struct {
DownloadInfos []struct {
NodeID string `json:"node_id"`
MainURL string `json:"main_url"`
BackupURL string `json:"backup_url"`
} `json:"download_infos"`
} `json:"data"`
}
type GetFileUrlResp struct {
BaseResp
Data struct {
FileUrls []struct {
URI string `json:"uri"`
MainURL string `json:"main_url"`
BackURL string `json:"back_url"`
} `json:"file_urls"`
} `json:"data"`
}
type GetVideoFileUrlResp struct {
BaseResp
Data struct {
MediaType string `json:"media_type"`
MediaInfo []struct {
Meta struct {
Height string `json:"height"`
Width string `json:"width"`
Format string `json:"format"`
Duration float64 `json:"duration"`
CodecType string `json:"codec_type"`
Definition string `json:"definition"`
} `json:"meta"`
MainURL string `json:"main_url"`
BackupURL string `json:"backup_url"`
} `json:"media_info"`
OriginalMediaInfo struct {
Meta struct {
Height string `json:"height"`
Width string `json:"width"`
Format string `json:"format"`
Duration float64 `json:"duration"`
CodecType string `json:"codec_type"`
Definition string `json:"definition"`
} `json:"meta"`
MainURL string `json:"main_url"`
BackupURL string `json:"backup_url"`
} `json:"original_media_info"`
PosterURL string `json:"poster_url"`
PlayableStatus int `json:"playable_status"`
} `json:"data"`
}
type UploadNodeResp struct {
BaseResp
Data struct {
NodeList []struct {
LocalID string `json:"local_id"`
ID string `json:"id"`
ParentID string `json:"parent_id"`
Name string `json:"name"`
Key string `json:"key"`
NodeType int `json:"node_type"` // 0: file, 1: folder
} `json:"node_list"`
} `json:"data"`
}
type Object struct {
model.Object
Key string
NodeType int
}
type UserInfoResp struct {
Data UserInfo `json:"data"`
Message string `json:"message"`
}
type AppUserInfo struct {
BuiAuditInfo string `json:"bui_audit_info"`
}
type AuditInfo struct {
}
type Details struct {
}
type BuiAuditInfo struct {
AuditInfo AuditInfo `json:"audit_info"`
IsAuditing bool `json:"is_auditing"`
AuditStatus int `json:"audit_status"`
LastUpdateTime int `json:"last_update_time"`
UnpassReason string `json:"unpass_reason"`
Details Details `json:"details"`
}
type Connects struct {
Platform string `json:"platform"`
ProfileImageURL string `json:"profile_image_url"`
ExpiredTime int `json:"expired_time"`
ExpiresIn int `json:"expires_in"`
PlatformScreenName string `json:"platform_screen_name"`
UserID int64 `json:"user_id"`
PlatformUID string `json:"platform_uid"`
SecPlatformUID string `json:"sec_platform_uid"`
PlatformAppID int `json:"platform_app_id"`
ModifyTime int `json:"modify_time"`
AccessToken string `json:"access_token"`
OpenID string `json:"open_id"`
}
type OperStaffRelationInfo struct {
HasPassword int `json:"has_password"`
Mobile string `json:"mobile"`
SecOperStaffUserID string `json:"sec_oper_staff_user_id"`
RelationMobileCountryCode int `json:"relation_mobile_country_code"`
}
type UserInfo struct {
AppID int `json:"app_id"`
AppUserInfo AppUserInfo `json:"app_user_info"`
AvatarURL string `json:"avatar_url"`
BgImgURL string `json:"bg_img_url"`
BuiAuditInfo BuiAuditInfo `json:"bui_audit_info"`
CanBeFoundByPhone int `json:"can_be_found_by_phone"`
Connects []Connects `json:"connects"`
CountryCode int `json:"country_code"`
Description string `json:"description"`
DeviceID int `json:"device_id"`
Email string `json:"email"`
EmailCollected bool `json:"email_collected"`
Gender int `json:"gender"`
HasPassword int `json:"has_password"`
HmRegion int `json:"hm_region"`
IsBlocked int `json:"is_blocked"`
IsBlocking int `json:"is_blocking"`
IsRecommendAllowed int `json:"is_recommend_allowed"`
IsVisitorAccount bool `json:"is_visitor_account"`
Mobile string `json:"mobile"`
Name string `json:"name"`
NeedCheckBindStatus bool `json:"need_check_bind_status"`
OdinUserType int `json:"odin_user_type"`
OperStaffRelationInfo OperStaffRelationInfo `json:"oper_staff_relation_info"`
PhoneCollected bool `json:"phone_collected"`
RecommendHintMessage string `json:"recommend_hint_message"`
ScreenName string `json:"screen_name"`
SecUserID string `json:"sec_user_id"`
SessionKey string `json:"session_key"`
UseHmRegion bool `json:"use_hm_region"`
UserCreateTime int `json:"user_create_time"`
UserID int64 `json:"user_id"`
UserIDStr string `json:"user_id_str"`
UserVerified bool `json:"user_verified"`
VerifiedContent string `json:"verified_content"`
}
// UploadToken holds the upload token configuration
type UploadToken struct {
Alice map[string]UploadAuthToken
Samantha MediaUploadAuthToken
}
// UploadAuthToken is the upload configuration for the various data types: image/file
type UploadAuthToken struct {
ServiceID string `json:"service_id"`
UploadPathPrefix string `json:"upload_path_prefix"`
Auth struct {
AccessKeyID string `json:"access_key_id"`
SecretAccessKey string `json:"secret_access_key"`
SessionToken string `json:"session_token"`
ExpiredTime time.Time `json:"expired_time"`
CurrentTime time.Time `json:"current_time"`
} `json:"auth"`
UploadHost string `json:"upload_host"`
}
// MediaUploadAuthToken is the media upload configuration
type MediaUploadAuthToken struct {
StsToken struct {
AccessKeyID string `json:"access_key_id"`
SecretAccessKey string `json:"secret_access_key"`
SessionToken string `json:"session_token"`
ExpiredTime time.Time `json:"expired_time"`
CurrentTime time.Time `json:"current_time"`
} `json:"sts_token"`
UploadInfo struct {
VideoHost string `json:"video_host"`
SpaceName string `json:"space_name"`
} `json:"upload_info"`
}
type UploadAuthTokenResp struct {
BaseResp
Data UploadAuthToken `json:"data"`
}
type MediaUploadAuthTokenResp struct {
BaseResp
Data MediaUploadAuthToken `json:"data"`
}
type ResponseMetadata struct {
RequestID string `json:"RequestId"`
Action string `json:"Action"`
Version string `json:"Version"`
Service string `json:"Service"`
Region string `json:"Region"`
Error struct {
CodeN int `json:"CodeN,omitempty"`
Code string `json:"Code,omitempty"`
Message string `json:"Message,omitempty"`
} `json:"Error,omitempty"`
}
type UploadConfig struct {
UploadAddress UploadAddress `json:"UploadAddress"`
FallbackUploadAddress FallbackUploadAddress `json:"FallbackUploadAddress"`
InnerUploadAddress InnerUploadAddress `json:"InnerUploadAddress"`
RequestID string `json:"RequestId"`
SDKParam interface{} `json:"SDKParam"`
}
type UploadConfigResp struct {
ResponseMetadata `json:"ResponseMetadata"`
Result UploadConfig `json:"Result"`
}
// StoreInfo describes storage information
type StoreInfo struct {
StoreURI string `json:"StoreUri"`
Auth string `json:"Auth"`
UploadID string `json:"UploadID"`
UploadHeader map[string]interface{} `json:"UploadHeader,omitempty"`
StorageHeader map[string]interface{} `json:"StorageHeader,omitempty"`
}
// UploadAddress describes the upload address
type UploadAddress struct {
StoreInfos []StoreInfo `json:"StoreInfos"`
UploadHosts []string `json:"UploadHosts"`
UploadHeader map[string]interface{} `json:"UploadHeader"`
SessionKey string `json:"SessionKey"`
Cloud string `json:"Cloud"`
}
// FallbackUploadAddress is the fallback upload address
type FallbackUploadAddress struct {
StoreInfos []StoreInfo `json:"StoreInfos"`
UploadHosts []string `json:"UploadHosts"`
UploadHeader map[string]interface{} `json:"UploadHeader"`
SessionKey string `json:"SessionKey"`
Cloud string `json:"Cloud"`
}
// UploadNode describes an upload node
type UploadNode struct {
Vid string `json:"Vid"`
Vids []string `json:"Vids"`
StoreInfos []StoreInfo `json:"StoreInfos"`
UploadHost string `json:"UploadHost"`
UploadHeader map[string]interface{} `json:"UploadHeader"`
Type string `json:"Type"`
Protocol string `json:"Protocol"`
SessionKey string `json:"SessionKey"`
NodeConfig struct {
UploadMode string `json:"UploadMode"`
} `json:"NodeConfig"`
Cluster string `json:"Cluster"`
}
// AdvanceOption holds advanced upload options
type AdvanceOption struct {
Parallel int `json:"Parallel"`
Stream int `json:"Stream"`
SliceSize int `json:"SliceSize"`
EncryptionKey string `json:"EncryptionKey"`
}
// InnerUploadAddress is the internal upload address
type InnerUploadAddress struct {
UploadNodes []UploadNode `json:"UploadNodes"`
AdvanceOption AdvanceOption `json:"AdvanceOption"`
}
// UploadPart describes an uploaded part
type UploadPart struct {
UploadId string `json:"uploadid,omitempty"`
PartNumber string `json:"part_number,omitempty"`
Crc32 string `json:"crc32,omitempty"`
Etag string `json:"etag,omitempty"`
Mode string `json:"mode,omitempty"`
}
// UploadResp is the upload response body
type UploadResp struct {
Code int `json:"code"`
ApiVersion string `json:"apiversion"`
Message string `json:"message"`
Data UploadPart `json:"data"`
}
type VideoCommitUpload struct {
Vid string `json:"Vid"`
VideoMeta struct {
URI string `json:"Uri"`
Height int `json:"Height"`
Width int `json:"Width"`
OriginHeight int `json:"OriginHeight"`
OriginWidth int `json:"OriginWidth"`
Duration float64 `json:"Duration"`
Bitrate int `json:"Bitrate"`
Md5 string `json:"Md5"`
Format string `json:"Format"`
Size int `json:"Size"`
FileType string `json:"FileType"`
Codec string `json:"Codec"`
} `json:"VideoMeta"`
WorkflowInput struct {
TemplateID string `json:"TemplateId"`
} `json:"WorkflowInput"`
GetPosterMode string `json:"GetPosterMode"`
}
type VideoCommitUploadResp struct {
ResponseMetadata ResponseMetadata `json:"ResponseMetadata"`
Result struct {
RequestID string `json:"RequestId"`
Results []VideoCommitUpload `json:"Results"`
} `json:"Result"`
}
type CommonResp struct {
Code int `json:"code"`
Msg string `json:"msg,omitempty"`
Message string `json:"message,omitempty"` // message in error cases
Data json.RawMessage `json:"data,omitempty"` // raw data, parsed later
Error *struct {
Code int `json:"code"`
Message string `json:"message"`
Locale string `json:"locale"`
} `json:"error,omitempty"`
}
// IsSuccess reports whether the response succeeded
func (r *CommonResp) IsSuccess() bool {
return r.Code == 0
}
// GetError returns the error carried by the response
func (r *CommonResp) GetError() error {
if r.IsSuccess() {
return nil
}
// prefer the message field
errMsg := r.Message
if errMsg == "" {
errMsg = r.Msg
}
// if the error object exists and carries a detailed message, use that
if r.Error != nil && r.Error.Message != "" {
errMsg = r.Error.Message
}
return fmt.Errorf("[doubao] API error (code: %d): %s", r.Code, errMsg)
}
// UnmarshalData decodes the data field into the given value
func (r *CommonResp) UnmarshalData(v interface{}) error {
if !r.IsSuccess() {
return r.GetError()
}
if len(r.Data) == 0 {
return nil
}
return json.Unmarshal(r.Data, v)
}
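// A minimal usage sketch of the two-phase decoding (variable names here are
// hypothetical; the driver's request helper follows the same pattern):
//
//	var common CommonResp
//	if err := json.Unmarshal(body, &common); err != nil {
//		return err
//	}
//	var data struct {
//		NodeInfo File `json:"node_info"`
//	}
//	if err := common.UnmarshalData(&data); err != nil {
//		return err // includes the [doubao] API error when code != 0
//	}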

drivers/doubao/util.go Normal file
@ -0,0 +1,970 @@
package doubao
import (
"context"
"crypto/hmac"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/pkg/errgroup"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/avast/retry-go"
"github.com/go-resty/resty/v2"
"github.com/google/uuid"
log "github.com/sirupsen/logrus"
"hash/crc32"
"io"
"math"
"math/rand"
"net/http"
"net/url"
"path/filepath"
"sort"
"strconv"
"strings"
"sync"
"time"
)
const (
DirectoryType = 1
FileType = 2
LinkType = 3
ImageType = 4
PagesType = 5
VideoType = 6
AudioType = 7
MeetingMinutesType = 8
)
var FileNodeType = map[int]string{
1: "directory",
2: "file",
3: "link",
4: "image",
5: "pages",
6: "video",
7: "audio",
8: "meeting_minutes",
}
const (
BaseURL = "https://www.doubao.com"
FileDataType = "file"
ImgDataType = "image"
VideoDataType = "video"
DefaultChunkSize = int64(5 * 1024 * 1024) // 5MB
MaxRetryAttempts = 3 // maximum retry attempts
UserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36"
Region = "cn-north-1"
UploadTimeout = 3 * time.Minute
)
// helpers not defined in the Driver interface
func (d *Doubao) request(path string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
reqUrl := BaseURL + path
req := base.RestyClient.R()
req.SetHeader("Cookie", d.Cookie)
if callback != nil {
callback(req)
}
var commonResp CommonResp
res, err := req.Execute(method, reqUrl)
log.Debugln(res.String())
if err != nil {
return nil, err
}
body := res.Body()
// first decode into the generic response
if err = json.Unmarshal(body, &commonResp); err != nil {
return nil, err
}
// check whether the response succeeded
if !commonResp.IsSuccess() {
return body, commonResp.GetError()
}
if resp != nil {
if err = json.Unmarshal(body, resp); err != nil {
return body, err
}
}
return body, nil
}
func (d *Doubao) getFiles(dirId, cursor string) (resp []File, err error) {
var r NodeInfoResp
var body = base.Json{
"node_id": dirId,
}
// if a cursor is present, set the cursor and page size
if cursor != "" {
body["cursor"] = cursor
body["size"] = 50
} else {
body["need_full_path"] = false
}
_, err = d.request("/samantha/aispace/node_info", http.MethodPost, func(req *resty.Request) {
req.SetBody(body)
}, &r)
if err != nil {
return nil, err
}
if r.Data.Children != nil {
resp = r.Data.Children
}
if r.Data.NextCursor != "-1" {
// recursively fetch the next page
nextFiles, err := d.getFiles(dirId, r.Data.NextCursor)
if err != nil {
return nil, err
}
resp = append(resp, nextFiles...)
}
return resp, err
}
func (d *Doubao) getUserInfo() (UserInfo, error) {
var r UserInfoResp
_, err := d.request("/passport/account/info/v2/", http.MethodGet, nil, &r)
if err != nil {
return UserInfo{}, err
}
return r.Data, err
}
// signRequest signs a request with AWS Signature V4
func (d *Doubao) signRequest(req *resty.Request, method, tokenType, uploadUrl string) error {
parsedUrl, err := url.Parse(uploadUrl)
if err != nil {
return fmt.Errorf("invalid URL format: %w", err)
}
var accessKeyId, secretAccessKey, sessionToken string
var serviceName string
if tokenType == VideoDataType {
accessKeyId = d.UploadToken.Samantha.StsToken.AccessKeyID
secretAccessKey = d.UploadToken.Samantha.StsToken.SecretAccessKey
sessionToken = d.UploadToken.Samantha.StsToken.SessionToken
serviceName = "vod"
} else {
accessKeyId = d.UploadToken.Alice[tokenType].Auth.AccessKeyID
secretAccessKey = d.UploadToken.Alice[tokenType].Auth.SecretAccessKey
sessionToken = d.UploadToken.Alice[tokenType].Auth.SessionToken
serviceName = "imagex"
}
// current time in ISO 8601 basic format
now := time.Now().UTC()
amzDate := now.Format("20060102T150405Z")
dateStamp := now.Format("20060102")
req.SetHeader("X-Amz-Date", amzDate)
if sessionToken != "" {
req.SetHeader("X-Amz-Security-Token", sessionToken)
}
// compute the SHA-256 hash of the request body
var bodyHash string
if req.Body != nil {
bodyBytes, ok := req.Body.([]byte)
if !ok {
return fmt.Errorf("request body must be []byte")
}
bodyHash = hashSHA256(string(bodyBytes))
req.SetHeader("X-Amz-Content-Sha256", bodyHash)
} else {
bodyHash = hashSHA256("")
}
// build the canonical request
canonicalURI := parsedUrl.Path
if canonicalURI == "" {
canonicalURI = "/"
}
// query parameters sorted alphabetically
canonicalQueryString := getCanonicalQueryString(req.QueryParam)
// canonical headers
canonicalHeaders, signedHeaders := getCanonicalHeadersFromMap(req.Header)
canonicalRequest := method + "\n" +
canonicalURI + "\n" +
canonicalQueryString + "\n" +
canonicalHeaders + "\n" +
signedHeaders + "\n" +
bodyHash
algorithm := "AWS4-HMAC-SHA256"
credentialScope := fmt.Sprintf("%s/%s/%s/aws4_request", dateStamp, Region, serviceName)
stringToSign := algorithm + "\n" +
amzDate + "\n" +
credentialScope + "\n" +
hashSHA256(canonicalRequest)
// derive the signing key
signingKey := getSigningKey(secretAccessKey, dateStamp, Region, serviceName)
// compute the signature
signature := hmacSHA256Hex(signingKey, stringToSign)
// build the Authorization header
authorizationHeader := fmt.Sprintf(
"%s Credential=%s/%s, SignedHeaders=%s, Signature=%s",
algorithm,
accessKeyId,
credentialScope,
signedHeaders,
signature,
)
req.SetHeader("Authorization", authorizationHeader)
return nil
}
func (d *Doubao) requestApi(url, method, tokenType string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
req := base.RestyClient.R()
req.SetHeaders(map[string]string{
"user-agent": UserAgent,
})
if method == http.MethodPost {
req.SetHeader("Content-Type", "text/plain;charset=UTF-8")
}
if callback != nil {
callback(req)
}
if resp != nil {
req.SetResult(resp)
}
// sign with the custom AWS SigV4 implementation
err := d.signRequest(req, method, tokenType, url)
if err != nil {
return nil, err
}
res, err := req.Execute(method, url)
if err != nil {
return nil, err
}
return res.Body(), nil
}
func (d *Doubao) initUploadToken() (*UploadToken, error) {
uploadToken := &UploadToken{
Alice: make(map[string]UploadAuthToken),
Samantha: MediaUploadAuthToken{},
}
fileAuthToken, err := d.getUploadAuthToken(FileDataType)
if err != nil {
return nil, err
}
imgAuthToken, err := d.getUploadAuthToken(ImgDataType)
if err != nil {
return nil, err
}
mediaAuthToken, err := d.getSamantaUploadAuthToken()
if err != nil {
return nil, err
}
uploadToken.Alice[FileDataType] = fileAuthToken
uploadToken.Alice[ImgDataType] = imgAuthToken
uploadToken.Samantha = mediaAuthToken
return uploadToken, nil
}
func (d *Doubao) getUploadAuthToken(dataType string) (ut UploadAuthToken, err error) {
var r UploadAuthTokenResp
_, err = d.request("/alice/upload/auth_token", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"scene": "bot_chat",
"data_type": dataType,
})
}, &r)
return r.Data, err
}
func (d *Doubao) getSamantaUploadAuthToken() (mt MediaUploadAuthToken, err error) {
var r MediaUploadAuthTokenResp
_, err = d.request("/samantha/media/get_upload_token", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{})
}, &r)
return r.Data, err
}
// getUploadConfig fetches the upload configuration
func (d *Doubao) getUploadConfig(upConfig *UploadConfig, dataType string, file model.FileStreamer) error {
tokenType := dataType
// builds the upload URL and query parameters
configureParams := func() (string, map[string]string) {
var uploadUrl string
var params map[string]string
// set upload parameters by data type
switch dataType {
case VideoDataType:
// audio/video - use the uploadToken.Samantha configuration
uploadUrl = d.UploadToken.Samantha.UploadInfo.VideoHost
params = map[string]string{
"Action": "ApplyUploadInner",
"Version": "2020-11-19",
"SpaceName": d.UploadToken.Samantha.UploadInfo.SpaceName,
"FileType": "video",
"IsInner": "1",
"NeedFallback": "true",
"FileSize": strconv.FormatInt(file.GetSize(), 10),
"s": randomString(),
}
case ImgDataType, FileDataType:
// image or other file types - use the matching uploadToken.Alice configuration
uploadUrl = "https://" + d.UploadToken.Alice[dataType].UploadHost
params = map[string]string{
"Action": "ApplyImageUpload",
"Version": "2018-08-01",
"ServiceId": d.UploadToken.Alice[dataType].ServiceID,
"NeedFallback": "true",
"FileSize": strconv.FormatInt(file.GetSize(), 10),
"FileExtension": filepath.Ext(file.GetName()),
"s": randomString(),
}
}
return uploadUrl, params
}
// initial parameters
uploadUrl, params := configureParams()
tokenRefreshed := false
var configResp UploadConfigResp
err := d._retryOperation("get upload_config", func() error {
configResp = UploadConfigResp{}
_, err := d.requestApi(uploadUrl, http.MethodGet, tokenType, func(req *resty.Request) {
req.SetQueryParams(params)
}, &configResp)
if err != nil {
return err
}
if configResp.ResponseMetadata.Error.Code == "" {
*upConfig = configResp.Result
return nil
}
// 100028: credential expired
if configResp.ResponseMetadata.Error.CodeN == 100028 && !tokenRefreshed {
log.Debugln("[doubao] Upload token expired, re-fetching...")
newToken, err := d.initUploadToken()
if err != nil {
return fmt.Errorf("failed to refresh token: %w", err)
}
d.UploadToken = newToken
tokenRefreshed = true
uploadUrl, params = configureParams()
return retry.Error{errors.New("token refreshed, retry needed")}
}
return fmt.Errorf("get upload_config failed: %s", configResp.ResponseMetadata.Error.Message)
})
return err
}
// uploadNode registers the uploaded file's node information
func (d *Doubao) uploadNode(uploadConfig *UploadConfig, dir model.Obj, file model.FileStreamer, dataType string) (UploadNodeResp, error) {
reqUuid := uuid.New().String()
var key string
var nodeType int
mimetype := file.GetMimetype()
switch dataType {
case VideoDataType:
key = uploadConfig.InnerUploadAddress.UploadNodes[0].Vid
if strings.HasPrefix(mimetype, "audio/") {
nodeType = AudioType // audio
} else {
nodeType = VideoType // video
}
case ImgDataType:
key = uploadConfig.InnerUploadAddress.UploadNodes[0].StoreInfos[0].StoreURI
nodeType = ImageType // image
default: // FileDataType
key = uploadConfig.InnerUploadAddress.UploadNodes[0].StoreInfos[0].StoreURI
nodeType = FileType // file
}
var r UploadNodeResp
_, err := d.request("/samantha/aispace/upload_node", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"node_list": []base.Json{
{
"local_id": reqUuid,
"parent_id": dir.GetID(),
"name": file.GetName(),
"key": key,
"node_content": base.Json{},
"node_type": nodeType,
"size": file.GetSize(),
},
},
"request_id": reqUuid,
})
}, &r)
return r, err
}
// Upload implements the simple (single-request) upload
func (d *Doubao) Upload(config *UploadConfig, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, dataType string) (model.Obj, error) {
data, err := io.ReadAll(file)
if err != nil {
return nil, err
}
// compute the CRC32 checksum
crc32Hash := crc32.NewIEEE()
crc32Hash.Write(data)
crc32Value := hex.EncodeToString(crc32Hash.Sum(nil))
// build the request URL
uploadNode := config.InnerUploadAddress.UploadNodes[0]
storeInfo := uploadNode.StoreInfos[0]
uploadUrl := fmt.Sprintf("https://%s/upload/v1/%s", uploadNode.UploadHost, storeInfo.StoreURI)
uploadResp := UploadResp{}
if _, err = d.uploadRequest(uploadUrl, http.MethodPost, storeInfo, func(req *resty.Request) {
req.SetHeaders(map[string]string{
"Content-Type": "application/octet-stream",
"Content-Crc32": crc32Value,
"Content-Length": fmt.Sprintf("%d", len(data)),
"Content-Disposition": fmt.Sprintf("attachment; filename=%s", url.QueryEscape(storeInfo.StoreURI)),
})
req.SetBody(data)
}, &uploadResp); err != nil {
return nil, err
}
if uploadResp.Code != 2000 {
return nil, fmt.Errorf("upload failed: %s", uploadResp.Message)
}
uploadNodeResp, err := d.uploadNode(config, dstDir, file, dataType)
if err != nil {
return nil, err
}
return &model.Object{
ID: uploadNodeResp.Data.NodeList[0].ID,
Name: uploadNodeResp.Data.NodeList[0].Name,
Size: file.GetSize(),
IsFolder: false,
}, nil
}
// UploadByMultipart implements multipart (chunked) upload
func (d *Doubao) UploadByMultipart(ctx context.Context, config *UploadConfig, fileSize int64, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress, dataType string) (model.Obj, error) {
// build the request URL
uploadNode := config.InnerUploadAddress.UploadNodes[0]
storeInfo := uploadNode.StoreInfos[0]
uploadUrl := fmt.Sprintf("https://%s/upload/v1/%s", uploadNode.UploadHost, storeInfo.StoreURI)
// initialize the multipart upload
var uploadID string
err := d._retryOperation("Initialize multipart upload", func() error {
var err error
uploadID, err = d.initMultipartUpload(config, uploadUrl, storeInfo)
return err
})
if err != nil {
return nil, fmt.Errorf("failed to initialize multipart upload: %w", err)
}
// prepare the chunking parameters
chunkSize := DefaultChunkSize
if config.InnerUploadAddress.AdvanceOption.SliceSize > 0 {
chunkSize = int64(config.InnerUploadAddress.AdvanceOption.SliceSize)
}
totalParts := (fileSize + chunkSize - 1) / chunkSize
// allocate the slice of part records
parts := make([]UploadPart, totalParts)
// cache the file to a temporary file
tempFile, err := file.CacheFullInTempFile()
if err != nil {
return nil, fmt.Errorf("failed to cache file: %w", err)
}
defer tempFile.Close()
up(10.0) // report initial progress
// set up parallel uploads
threadG, uploadCtx := errgroup.NewGroupWithContext(ctx, d.uploadThread,
retry.Attempts(1),
retry.Delay(time.Second),
retry.DelayType(retry.BackOffDelay))
var partsMutex sync.Mutex
// upload all parts in parallel
for partIndex := int64(0); partIndex < totalParts; partIndex++ {
if utils.IsCanceled(uploadCtx) {
break
}
partIndex := partIndex
partNumber := partIndex + 1 // part numbers start at 1
threadG.Go(func(ctx context.Context) error {
// compute this part's offset and size
offset := partIndex * chunkSize
size := chunkSize
if partIndex == totalParts-1 {
size = fileSize - offset
}
limitedReader := driver.NewLimitedUploadStream(ctx, io.NewSectionReader(tempFile, offset, size))
// read the part into memory
data, err := io.ReadAll(limitedReader)
if err != nil {
return fmt.Errorf("failed to read part %d: %w", partNumber, err)
}
// compute the CRC32 checksum
crc32Value := calculateCRC32(data)
// upload the part via _retryOperation
var uploadPart UploadPart
if err = d._retryOperation(fmt.Sprintf("Upload part %d", partNumber), func() error {
var err error
uploadPart, err = d.uploadPart(config, uploadUrl, uploadID, partNumber, data, crc32Value)
return err
}); err != nil {
return fmt.Errorf("part %d upload failed: %w", partNumber, err)
}
// record the successfully uploaded part
partsMutex.Lock()
parts[partIndex] = UploadPart{
PartNumber: strconv.FormatInt(partNumber, 10),
Etag: uploadPart.Etag,
Crc32: crc32Value,
}
partsMutex.Unlock()
// update progress
progress := 10.0 + 90.0*float64(threadG.Success()+1)/float64(totalParts)
up(math.Min(progress, 95.0))
return nil
})
}
if err = threadG.Wait(); err != nil {
return nil, err
}
// finish the upload and merge the parts
if err = d._retryOperation("Complete multipart upload", func() error {
return d.completeMultipartUpload(config, uploadUrl, uploadID, parts)
}); err != nil {
return nil, fmt.Errorf("failed to complete multipart upload: %w", err)
}
// commit the upload
if err = d._retryOperation("Commit upload", func() error {
return d.commitMultipartUpload(config)
}); err != nil {
return nil, fmt.Errorf("failed to commit upload: %w", err)
}
up(98.0) // progress to 98%
// register the node information
var uploadNodeResp UploadNodeResp
if err = d._retryOperation("Upload node", func() error {
var err error
uploadNodeResp, err = d.uploadNode(config, dstDir, file, dataType)
return err
}); err != nil {
return nil, fmt.Errorf("failed to upload node: %w", err)
}
up(100.0) // upload complete
return &model.Object{
ID: uploadNodeResp.Data.NodeList[0].ID,
Name: uploadNodeResp.Data.NodeList[0].Name,
Size: file.GetSize(),
IsFolder: false,
}, nil
}
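// Part-count arithmetic used above: totalParts = ceil(fileSize / chunkSize),
// computed as (fileSize + chunkSize - 1) / chunkSize. For example, a 12MB file
// with the default 5MB chunk size yields 3 parts: 5MB, 5MB and a final 2MB part.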
// uploadRequest is the shared upload request helper
func (d *Doubao) uploadRequest(uploadUrl string, method string, storeInfo StoreInfo, callback base.ReqCallback, resp interface{}) ([]byte, error) {
client := resty.New()
client.SetTransport(&http.Transport{
DisableKeepAlives: true, // disable connection reuse
ForceAttemptHTTP2: false, // force HTTP/1.1
})
client.SetTimeout(UploadTimeout)
req := client.R()
req.SetHeaders(map[string]string{
"Host": strings.Split(uploadUrl, "/")[2],
"Referer": BaseURL + "/",
"Origin": BaseURL,
"User-Agent": UserAgent,
"X-Storage-U": d.UserId,
"Authorization": storeInfo.Auth,
})
if method == http.MethodPost {
req.SetHeader("Content-Type", "text/plain;charset=UTF-8")
}
if callback != nil {
callback(req)
}
if resp != nil {
req.SetResult(resp)
}
res, err := req.Execute(method, uploadUrl)
if err != nil && err != io.EOF {
return nil, fmt.Errorf("upload request failed: %w", err)
}
return res.Body(), nil
}
// initMultipartUpload starts a multipart upload session
func (d *Doubao) initMultipartUpload(config *UploadConfig, uploadUrl string, storeInfo StoreInfo) (uploadId string, err error) {
uploadResp := UploadResp{}
_, err = d.uploadRequest(uploadUrl, http.MethodPost, storeInfo, func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"uploadmode": "part",
"phase": "init",
})
}, &uploadResp)
if err != nil {
return uploadId, err
}
if uploadResp.Code != 2000 {
return uploadId, fmt.Errorf("init upload failed: %s", uploadResp.Message)
}
return uploadResp.Data.UploadId, nil
}
// uploadPart uploads a single part
func (d *Doubao) uploadPart(config *UploadConfig, uploadUrl, uploadID string, partNumber int64, data []byte, crc32Value string) (resp UploadPart, err error) {
uploadResp := UploadResp{}
storeInfo := config.InnerUploadAddress.UploadNodes[0].StoreInfos[0]
_, err = d.uploadRequest(uploadUrl, http.MethodPost, storeInfo, func(req *resty.Request) {
req.SetHeaders(map[string]string{
"Content-Type": "application/octet-stream",
"Content-Crc32": crc32Value,
"Content-Length": fmt.Sprintf("%d", len(data)),
"Content-Disposition": fmt.Sprintf("attachment; filename=%s", url.QueryEscape(storeInfo.StoreURI)),
})
req.SetQueryParams(map[string]string{
"uploadid": uploadID,
"part_number": strconv.FormatInt(partNumber, 10),
"phase": "transfer",
})
req.SetBody(data)
req.SetContentLength(true)
}, &uploadResp)
if err != nil {
return resp, err
}
if uploadResp.Code != 2000 {
return resp, fmt.Errorf("upload part failed: %s", uploadResp.Message)
} else if uploadResp.Data.Crc32 != crc32Value {
return resp, fmt.Errorf("upload part failed: crc32 mismatch, expected %s, got %s", crc32Value, uploadResp.Data.Crc32)
}
return uploadResp.Data, nil
}
// completeMultipartUpload merges the uploaded parts
func (d *Doubao) completeMultipartUpload(config *UploadConfig, uploadUrl, uploadID string, parts []UploadPart) error {
uploadResp := UploadResp{}
storeInfo := config.InnerUploadAddress.UploadNodes[0].StoreInfos[0]
body := _convertUploadParts(parts)
err := utils.Retry(MaxRetryAttempts, time.Second, func() (err error) {
_, err = d.uploadRequest(uploadUrl, http.MethodPost, storeInfo, func(req *resty.Request) {
req.SetQueryParams(map[string]string{
"uploadid": uploadID,
"phase": "finish",
"uploadmode": "part",
})
req.SetBody(body)
}, &uploadResp)
if err != nil {
return err
}
// response codes: 2000 success, 4024 parts still merging
if uploadResp.Code != 2000 && uploadResp.Code != 4024 {
return fmt.Errorf("finish upload failed: %s", uploadResp.Message)
}
return err
})
if err != nil {
return fmt.Errorf("failed to complete multipart upload: %w", err)
}
return nil
}
func (d *Doubao) commitMultipartUpload(uploadConfig *UploadConfig) error {
uploadUrl := d.UploadToken.Samantha.UploadInfo.VideoHost
params := map[string]string{
"Action": "CommitUploadInner",
"Version": "2020-11-19",
"SpaceName": d.UploadToken.Samantha.UploadInfo.SpaceName,
}
tokenType := VideoDataType
videoCommitUploadResp := VideoCommitUploadResp{}
jsonBytes, err := json.Marshal(base.Json{
"SessionKey": uploadConfig.InnerUploadAddress.UploadNodes[0].SessionKey,
"Functions": []base.Json{},
})
if err != nil {
return fmt.Errorf("failed to marshal request data: %w", err)
}
_, err = d.requestApi(uploadUrl, http.MethodPost, tokenType, func(req *resty.Request) {
req.SetHeader("Content-Type", "application/json")
req.SetQueryParams(params)
req.SetBody(jsonBytes)
}, &videoCommitUploadResp)
if err != nil {
return err
}
return nil
}
// calculateCRC32 returns the hex-encoded CRC32 of data
func calculateCRC32(data []byte) string {
hash := crc32.NewIEEE()
hash.Write(data)
return hex.EncodeToString(hash.Sum(nil))
}
// _retryOperation retries an operation with backoff
func (d *Doubao) _retryOperation(operation string, fn func() error) error {
return retry.Do(
fn,
retry.Attempts(MaxRetryAttempts),
retry.Delay(500*time.Millisecond),
retry.DelayType(retry.BackOffDelay),
retry.MaxJitter(200*time.Millisecond),
retry.OnRetry(func(n uint, err error) {
log.Debugf("[doubao] %s retry #%d: %v", operation, n+1, err)
}),
)
}
// _convertUploadParts serializes the part list into a string
func _convertUploadParts(parts []UploadPart) string {
if len(parts) == 0 {
return ""
}
var result strings.Builder
for i, part := range parts {
if i > 0 {
result.WriteString(",")
}
result.WriteString(fmt.Sprintf("%s:%s", part.PartNumber, part.Crc32))
}
return result.String()
}
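// For example, with hypothetical CRC values:
//
//	parts := []UploadPart{
//		{PartNumber: "1", Crc32: "aabbccdd"},
//		{PartNumber: "2", Crc32: "11223344"},
//	}
//	_convertUploadParts(parts) // "1:aabbccdd,2:11223344"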
// getCanonicalQueryString builds the canonical query string
func getCanonicalQueryString(query url.Values) string {
if len(query) == 0 {
return ""
}
keys := make([]string, 0, len(query))
for k := range query {
keys = append(keys, k)
}
sort.Strings(keys)
parts := make([]string, 0, len(keys))
for _, k := range keys {
values := query[k]
for _, v := range values {
parts = append(parts, urlEncode(k)+"="+urlEncode(v))
}
}
return strings.Join(parts, "&")
}
func urlEncode(s string) string {
s = url.QueryEscape(s)
s = strings.ReplaceAll(s, "+", "%20")
return s
}
// getCanonicalHeadersFromMap returns the canonical headers and the signed-header list
func getCanonicalHeadersFromMap(headers map[string][]string) (string, string) {
// headers excluded from signing
unsignableHeaders := map[string]bool{
"authorization": true,
"content-type": true,
"content-length": true,
"user-agent": true,
"presigned-expires": true,
"expect": true,
"x-amzn-trace-id": true,
}
headerValues := make(map[string]string)
var signedHeadersList []string
for k, v := range headers {
if len(v) == 0 {
continue
}
lowerKey := strings.ToLower(k)
// check whether the header is signable
if strings.HasPrefix(lowerKey, "x-amz-") || !unsignableHeaders[lowerKey] {
value := strings.TrimSpace(v[0])
value = strings.Join(strings.Fields(value), " ")
headerValues[lowerKey] = value
signedHeadersList = append(signedHeadersList, lowerKey)
}
}
sort.Strings(signedHeadersList)
var canonicalHeadersStr strings.Builder
for _, key := range signedHeadersList {
canonicalHeadersStr.WriteString(key)
canonicalHeadersStr.WriteString(":")
canonicalHeadersStr.WriteString(headerValues[key])
canonicalHeadersStr.WriteString("\n")
}
signedHeaders := strings.Join(signedHeadersList, ";")
return canonicalHeadersStr.String(), signedHeaders
}
// hmacSHA256 computes an HMAC-SHA256 digest
func hmacSHA256(key []byte, data string) []byte {
h := hmac.New(sha256.New, key)
h.Write([]byte(data))
return h.Sum(nil)
}
// hmacSHA256Hex computes HMAC-SHA256 and returns a hex string
func hmacSHA256Hex(key []byte, data string) string {
return hex.EncodeToString(hmacSHA256(key, data))
}
// hashSHA256 returns the hex-encoded SHA-256 of data
func hashSHA256(data string) string {
h := sha256.New()
h.Write([]byte(data))
return hex.EncodeToString(h.Sum(nil))
}
// getSigningKey derives the AWS SigV4 signing key
func getSigningKey(secretKey, dateStamp, region, service string) []byte {
kDate := hmacSHA256([]byte("AWS4"+secretKey), dateStamp)
kRegion := hmacSHA256(kDate, region)
kService := hmacSHA256(kRegion, service)
kSigning := hmacSHA256(kService, "aws4_request")
return kSigning
}
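// A minimal sketch of how the helpers above compose, with hypothetical
// credentials and a stringToSign built as in signRequest:
//
//	key := getSigningKey("SECRET", "20250101", Region, "imagex")
//	signature := hmacSHA256Hex(key, stringToSign)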
// generateContentDisposition builds an RFC 5987 compliant Content-Disposition header
func generateContentDisposition(filename string) string {
// percent-encode the filename part
encodedName := urlEncode(filename)
// RFC 5987 encoding for the filename* part
encodedNameRFC5987 := encodeRFC5987(filename)
return fmt.Sprintf("attachment; filename=\"%s\"; filename*=utf-8''%s",
encodedName, encodedNameRFC5987)
}
// encodeRFC5987 encodes a string per RFC 5987 for non-ASCII characters in HTTP header parameters
func encodeRFC5987(s string) string {
var buf strings.Builder
for _, r := range []byte(s) {
// per RFC 5987, only letters, digits and a few marks may stay unencoded
if (r >= 'a' && r <= 'z') ||
(r >= 'A' && r <= 'Z') ||
(r >= '0' && r <= '9') ||
r == '-' || r == '.' || r == '_' || r == '~' {
buf.WriteByte(r)
} else {
// everything else is percent-encoded
fmt.Fprintf(&buf, "%%%02X", r)
}
}
return buf.String()
}
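// For example, a non-ASCII filename is emitted in both encoded forms:
//
//	generateContentDisposition("测试 file.txt")
//	// attachment; filename="%E6%B5%8B%E8%AF%95%20file.txt"; filename*=utf-8''%E6%B5%8B%E8%AF%95%20file.txt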
func randomString() string {
const charset = "0123456789abcdefghijklmnopqrstuvwxyz"
const length = 11 // 11-character random string
var sb strings.Builder
sb.Grow(length)
for i := 0; i < length; i++ {
sb.WriteByte(charset[rand.Intn(len(charset))])
}
return sb.String()
}

@ -0,0 +1,177 @@
package doubao_share
import (
"context"
"errors"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/go-resty/resty/v2"
"net/http"
)
type DoubaoShare struct {
model.Storage
Addition
RootFiles []RootFileList
}
func (d *DoubaoShare) Config() driver.Config {
return config
}
func (d *DoubaoShare) GetAddition() driver.Additional {
return &d.Addition
}
func (d *DoubaoShare) Init(ctx context.Context) error {
// initialize the virtual share list
if err := d.initShareList(); err != nil {
return err
}
return nil
}
func (d *DoubaoShare) Drop(ctx context.Context) error {
return nil
}
func (d *DoubaoShare) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
// check for the root directory
if dir.GetID() == "" && dir.GetPath() == "/" {
return d.listRootDirectory(ctx)
}
// non-root: handle the different cases
if fo, ok := dir.(*FileObject); ok {
if fo.ShareID == "" {
// virtual directory: list its children
return d.listVirtualDirectoryContent(dir)
} else {
// directory with a share ID: fetch the files under that share
shareId, relativePath, err := d._findShareAndPath(dir)
if err != nil {
return nil, err
}
return d.getFilesInPath(ctx, shareId, dir.GetID(), relativePath)
}
}
// fall back to the generic lookup
shareId, relativePath, err := d._findShareAndPath(dir)
if err != nil {
return nil, err
}
// fetch the files under the given path
return d.getFilesInPath(ctx, shareId, dir.GetID(), relativePath)
}
func (d *DoubaoShare) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
var downloadUrl string
if u, ok := file.(*FileObject); ok {
switch u.NodeType {
case VideoType, AudioType:
var r GetVideoFileUrlResp
_, err := d.request("/samantha/media/get_play_info", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"key": u.Key,
"share_id": u.ShareID,
"node_id": file.GetID(),
})
}, &r)
if err != nil {
return nil, err
}
downloadUrl = r.Data.OriginalMediaInfo.MainURL
default:
var r GetFileUrlResp
_, err := d.request("/alice/message/get_file_url", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
"uris": []string{u.Key},
"type": FileNodeType[u.NodeType],
})
}, &r)
if err != nil {
return nil, err
}
downloadUrl = r.Data.FileUrls[0].MainURL
}
// build a standard Content-Disposition header
contentDisposition := generateContentDisposition(u.Name)
return &model.Link{
URL: downloadUrl,
Header: http.Header{
"User-Agent": []string{UserAgent},
"Content-Disposition": []string{contentDisposition},
},
}, nil
}
return nil, errors.New("can't convert obj to URL")
}
func (d *DoubaoShare) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
// TODO create folder, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
// TODO move obj, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
// TODO rename obj, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
// TODO copy obj, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) Remove(ctx context.Context, obj model.Obj) error {
// TODO remove obj, optional
return errs.NotImplement
}
func (d *DoubaoShare) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
// TODO upload file, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) GetArchiveMeta(ctx context.Context, obj model.Obj, args model.ArchiveArgs) (model.ArchiveMeta, error) {
// TODO get archive file meta-info, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) ListArchive(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) ([]model.Obj, error) {
// TODO list args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) Extract(ctx context.Context, obj model.Obj, args model.ArchiveInnerArgs) (*model.Link, error) {
// TODO return link of file args.InnerPath in the archive obj, return errs.NotImplement to use an internal archive tool, optional
return nil, errs.NotImplement
}
func (d *DoubaoShare) ArchiveDecompress(ctx context.Context, srcObj, dstDir model.Obj, args model.ArchiveDecompressArgs) ([]model.Obj, error) {
// TODO extract args.InnerPath path in the archive srcObj to the dstDir location, optional
// a folder with the same name as the archive file needs to be created to store the extracted results if args.PutIntoNewDir
// return errs.NotImplement to use an internal archive tool
return nil, errs.NotImplement
}
//func (d *DoubaoShare) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {
// return nil, errs.NotSupport
//}
var _ driver.Driver = (*DoubaoShare)(nil)

@ -0,0 +1,32 @@
package doubao_share
import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/op"
)
type Addition struct {
driver.RootPath
Cookie string `json:"cookie" type:"text"`
ShareIds string `json:"share_ids" type:"text" required:"true"`
}
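// An illustrative share_ids value (one entry per line; the IDs are
// hypothetical): a bare link or ID is mounted at the root, while
// "link|path" mounts the share under a virtual directory.
//
//	https://www.doubao.com/drive/s/abc123
//	https://www.doubao.com/drive/s/def456|media/videos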
var config = driver.Config{
Name: "DoubaoShare",
LocalSort: true,
OnlyLocal: false,
OnlyProxy: false,
NoCache: false,
NoUpload: true,
NeedMs: false,
DefaultRoot: "/",
CheckStatus: false,
Alert: "",
NoOverwriteUpload: false,
}
func init() {
op.RegisterDriver(func() driver.Driver {
return &DoubaoShare{}
})
}

@ -0,0 +1,207 @@
package doubao_share
import (
"encoding/json"
"fmt"
"github.com/alist-org/alist/v3/internal/model"
)
type BaseResp struct {
Code int `json:"code"`
Msg string `json:"msg"`
}
type NodeInfoData struct {
Share ShareInfo `json:"share,omitempty"`
Creator CreatorInfo `json:"creator,omitempty"`
NodeList []File `json:"node_list,omitempty"`
NodeInfo File `json:"node_info,omitempty"`
Children []File `json:"children,omitempty"`
Path FilePath `json:"path,omitempty"`
NextCursor string `json:"next_cursor,omitempty"`
HasMore bool `json:"has_more,omitempty"`
}
type NodeInfoResp struct {
BaseResp
NodeInfoData `json:"data"`
}
type RootFileList struct {
ShareID string
VirtualPath string
NodeInfo NodeInfoData
Child *[]RootFileList
}
type File struct {
ID string `json:"id"`
Name string `json:"name"`
Key string `json:"key"`
NodeType int `json:"node_type"`
Size int64 `json:"size"`
Source int `json:"source"`
NameReviewStatus int `json:"name_review_status"`
ContentReviewStatus int `json:"content_review_status"`
RiskReviewStatus int `json:"risk_review_status"`
ConversationID string `json:"conversation_id"`
ParentID string `json:"parent_id"`
CreateTime int64 `json:"create_time"`
UpdateTime int64 `json:"update_time"`
}
type FileObject struct {
model.Object
ShareID string
Key string
NodeID string
NodeType int
}
type ShareInfo struct {
ShareID string `json:"share_id"`
FirstNode struct {
ID string `json:"id"`
Name string `json:"name"`
Key string `json:"key"`
NodeType int `json:"node_type"`
Size int `json:"size"`
Source int `json:"source"`
Content struct {
LinkFileType string `json:"link_file_type"`
ImageWidth int `json:"image_width"`
ImageHeight int `json:"image_height"`
AiSkillStatus int `json:"ai_skill_status"`
} `json:"content"`
NameReviewStatus int `json:"name_review_status"`
ContentReviewStatus int `json:"content_review_status"`
RiskReviewStatus int `json:"risk_review_status"`
ConversationID string `json:"conversation_id"`
ParentID string `json:"parent_id"`
CreateTime int `json:"create_time"`
UpdateTime int `json:"update_time"`
} `json:"first_node"`
NodeCount int `json:"node_count"`
CreateTime int `json:"create_time"`
Channel string `json:"channel"`
InfluencerType int `json:"influencer_type"`
}
type CreatorInfo struct {
EntityID string `json:"entity_id"`
UserName string `json:"user_name"`
NickName string `json:"nick_name"`
Avatar struct {
OriginURL string `json:"origin_url"`
TinyURL string `json:"tiny_url"`
URI string `json:"uri"`
} `json:"avatar"`
}
type FilePath []struct {
ID string `json:"id"`
Name string `json:"name"`
Key string `json:"key"`
NodeType int `json:"node_type"`
Size int `json:"size"`
Source int `json:"source"`
NameReviewStatus int `json:"name_review_status"`
ContentReviewStatus int `json:"content_review_status"`
RiskReviewStatus int `json:"risk_review_status"`
ConversationID string `json:"conversation_id"`
ParentID string `json:"parent_id"`
CreateTime int `json:"create_time"`
UpdateTime int `json:"update_time"`
}
type GetFileUrlResp struct {
BaseResp
Data struct {
FileUrls []struct {
URI string `json:"uri"`
MainURL string `json:"main_url"`
BackURL string `json:"back_url"`
} `json:"file_urls"`
} `json:"data"`
}
type GetVideoFileUrlResp struct {
BaseResp
Data struct {
MediaType string `json:"media_type"`
MediaInfo []struct {
Meta struct {
Height string `json:"height"`
Width string `json:"width"`
Format string `json:"format"`
Duration float64 `json:"duration"`
CodecType string `json:"codec_type"`
Definition string `json:"definition"`
} `json:"meta"`
MainURL string `json:"main_url"`
BackupURL string `json:"backup_url"`
} `json:"media_info"`
OriginalMediaInfo struct {
Meta struct {
Height string `json:"height"`
Width string `json:"width"`
Format string `json:"format"`
Duration float64 `json:"duration"`
CodecType string `json:"codec_type"`
Definition string `json:"definition"`
} `json:"meta"`
MainURL string `json:"main_url"`
BackupURL string `json:"backup_url"`
} `json:"original_media_info"`
PosterURL string `json:"poster_url"`
PlayableStatus int `json:"playable_status"`
} `json:"data"`
}
type CommonResp struct {
Code int `json:"code"`
Msg string `json:"msg,omitempty"`
Message string `json:"message,omitempty"` // message in error cases
Data json.RawMessage `json:"data,omitempty"` // raw data, parsed later
Error *struct {
Code int `json:"code"`
Message string `json:"message"`
Locale string `json:"locale"`
} `json:"error,omitempty"`
}
// IsSuccess reports whether the response succeeded
func (r *CommonResp) IsSuccess() bool {
return r.Code == 0
}
// GetError returns the error carried by the response
func (r *CommonResp) GetError() error {
if r.IsSuccess() {
return nil
}
// prefer the message field
errMsg := r.Message
if errMsg == "" {
errMsg = r.Msg
}
// if the error object exists and carries a detailed message, use that
if r.Error != nil && r.Error.Message != "" {
errMsg = r.Error.Message
}
return fmt.Errorf("[doubao] API error (code: %d): %s", r.Code, errMsg)
}
// UnmarshalData decodes the data field into the given value
func (r *CommonResp) UnmarshalData(v interface{}) error {
if !r.IsSuccess() {
return r.GetError()
}
if len(r.Data) == 0 {
return nil
}
return json.Unmarshal(r.Data, v)
}

@ -0,0 +1,744 @@
package doubao_share
import (
"context"
"encoding/json"
"fmt"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/model"
"github.com/go-resty/resty/v2"
log "github.com/sirupsen/logrus"
"net/http"
"net/url"
"path"
"regexp"
"strings"
"time"
)
const (
DirectoryType = 1
FileType = 2
LinkType = 3
ImageType = 4
PagesType = 5
VideoType = 6
AudioType = 7
MeetingMinutesType = 8
)
var FileNodeType = map[int]string{
1: "directory",
2: "file",
3: "link",
4: "image",
5: "pages",
6: "video",
7: "audio",
8: "meeting_minutes",
}
const (
BaseURL = "https://www.doubao.com"
FileDataType = "file"
ImgDataType = "image"
VideoDataType = "video"
UserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36"
)
func (d *DoubaoShare) request(path string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
reqUrl := BaseURL + path
req := base.RestyClient.R()
req.SetHeaders(map[string]string{
"Cookie": d.Cookie,
"User-Agent": UserAgent,
})
req.SetQueryParams(map[string]string{
"version_code": "20800",
"device_platform": "web",
})
if callback != nil {
callback(req)
}
var commonResp CommonResp
res, err := req.Execute(method, reqUrl)
log.Debugln(res.String())
if err != nil {
return nil, err
}
body := res.Body()
// first decode into the generic response
if err = json.Unmarshal(body, &commonResp); err != nil {
return nil, err
}
// check whether the response succeeded
if !commonResp.IsSuccess() {
return body, commonResp.GetError()
}
if resp != nil {
if err = json.Unmarshal(body, resp); err != nil {
return body, err
}
}
return body, nil
}
func (d *DoubaoShare) getFiles(dirId, nodeId, cursor string) (resp []File, err error) {
var r NodeInfoResp
var body = base.Json{
"share_id": dirId,
"node_id": nodeId,
}
// if a cursor is present, set the cursor and page size
if cursor != "" {
body["cursor"] = cursor
body["size"] = 50
} else {
body["need_full_path"] = false
}
_, err = d.request("/samantha/aispace/share/node_info", http.MethodPost, func(req *resty.Request) {
req.SetBody(body)
}, &r)
if err != nil {
return nil, err
}
if r.NodeInfoData.Children != nil {
resp = r.NodeInfoData.Children
}
if r.NodeInfoData.NextCursor != "-1" {
// recursively fetch the next page
nextFiles, err := d.getFiles(dirId, nodeId, r.NodeInfoData.NextCursor)
if err != nil {
return nil, err
}
resp = append(resp, nextFiles...)
}
return resp, err
}
func (d *DoubaoShare) getShareOverview(shareId, cursor string) (resp []File, err error) {
return d.getShareOverviewWithHistory(shareId, cursor, make(map[string]bool))
}
func (d *DoubaoShare) getShareOverviewWithHistory(shareId, cursor string, cursorHistory map[string]bool) (resp []File, err error) {
var r NodeInfoResp
var body = base.Json{
"share_id": shareId,
}
// if a cursor is present, set the cursor and page size
if cursor != "" {
body["cursor"] = cursor
body["size"] = 50
} else {
body["need_full_path"] = false
}
_, err = d.request("/samantha/aispace/share/overview", http.MethodPost, func(req *resty.Request) {
req.SetBody(body)
}, &r)
if err != nil {
return nil, err
}
if r.NodeInfoData.NodeList != nil {
resp = r.NodeInfoData.NodeList
}
if r.NodeInfoData.NextCursor != "-1" {
// guard against repeated cursors to avoid infinite loops
if cursorHistory[r.NodeInfoData.NextCursor] {
return resp, nil
}
// record the current cursor
cursorHistory[r.NodeInfoData.NextCursor] = true
// recursively fetch the next page
nextFiles, err := d.getShareOverviewWithHistory(shareId, r.NodeInfoData.NextCursor, cursorHistory)
if err != nil {
return nil, err
}
resp = append(resp, nextFiles...)
}
return resp, nil
}
func (d *DoubaoShare) initShareList() error {
if d.Addition.ShareIds == "" {
return fmt.Errorf("share_ids is empty")
}
// parse the share configuration
shareConfigs, rootShares, err := d._parseShareConfigs()
if err != nil {
return err
}
// detect path conflicts
if err := d._detectPathConflicts(shareConfigs); err != nil {
return err
}
// build the tree structure
rootMap := d._buildTreeStructure(shareConfigs, rootShares)
// extract the top-level nodes
topLevelNodes := d._extractTopLevelNodes(rootMap, rootShares)
if len(topLevelNodes) == 0 {
return fmt.Errorf("no valid share_ids found")
}
// store the result
d.RootFiles = topLevelNodes
return nil
}
// _parseShareConfigs parses share IDs and paths from the configuration
func (d *DoubaoShare) _parseShareConfigs() (map[string]string, []string, error) {
shareConfigs := make(map[string]string) // path -> share ID
rootShares := make([]string, 0) // share IDs shown at the root
lines := strings.Split(strings.TrimSpace(d.Addition.ShareIds), "\n")
if len(lines) == 0 {
return nil, nil, fmt.Errorf("no share_ids found")
}
for _, line := range lines {
line = strings.TrimSpace(line)
if line == "" {
continue
}
// parse the share ID and path
parts := strings.Split(line, "|")
var shareId, sharePath string
if len(parts) == 1 {
// share without a path: shown directly at the root
shareId = _extractShareId(parts[0])
if shareId != "" {
rootShares = append(rootShares, shareId)
}
continue
} else if len(parts) >= 2 {
shareId = _extractShareId(parts[0])
sharePath = strings.Trim(parts[1], "/")
}
if shareId == "" {
log.Warnf("[doubao_share] Invalid Share_id Format: %s", line)
continue
}
// empty paths are also shown at the root
if sharePath == "" {
rootShares = append(rootShares, shareId)
continue
}
// add to the path map
shareConfigs[sharePath] = shareId
}
return shareConfigs, rootShares, nil
}
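// A minimal sketch of the result, assuming the hypothetical input
// "abc123\ndef456|media/videos":
//
//	shareConfigs == map[string]string{"media/videos": "def456"}
//	rootShares   == []string{"abc123"}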
// _detectPathConflicts checks for conflicting share paths
func (d *DoubaoShare) _detectPathConflicts(shareConfigs map[string]string) error {
// direct path conflicts
pathToShareIds := make(map[string][]string)
for sharePath, id := range shareConfigs {
pathToShareIds[sharePath] = append(pathToShareIds[sharePath], id)
}
for sharePath, ids := range pathToShareIds {
if len(ids) > 1 {
return fmt.Errorf("路径冲突: 路径 '%s' 被多个不同的分享ID使用: %s",
sharePath, strings.Join(ids, ", "))
}
}
// hierarchical conflicts
for path1, id1 := range shareConfigs {
for path2, id2 := range shareConfigs {
if path1 == path2 || id1 == id2 {
continue
}
// prefix conflicts
if strings.HasPrefix(path2, path1+"/") || strings.HasPrefix(path1, path2+"/") {
return fmt.Errorf("path conflict: path '%s' (ID: %s) conflicts hierarchically with path '%s' (ID: %s)",
path1, id1, path2, id2)
}
}
}
return nil
}
// _buildTreeStructure builds the virtual directory tree
func (d *DoubaoShare) _buildTreeStructure(shareConfigs map[string]string, rootShares []string) map[string]*RootFileList {
rootMap := make(map[string]*RootFileList)
// add all share nodes
for sharePath, shareId := range shareConfigs {
children := make([]RootFileList, 0)
rootMap[sharePath] = &RootFileList{
ShareID: shareId,
VirtualPath: sharePath,
NodeInfo: NodeInfoData{},
Child: &children,
}
}
// wire up parent-child relations
for sharePath, node := range rootMap {
if sharePath == "" {
continue
}
pathParts := strings.Split(sharePath, "/")
if len(pathParts) > 1 {
parentPath := strings.Join(pathParts[:len(pathParts)-1], "/")
// make sure all parent paths exist
_ensurePathExists(rootMap, parentPath)
// attach the current node to its parent
if parent, exists := rootMap[parentPath]; exists {
*parent.Child = append(*parent.Child, *node)
}
}
}
return rootMap
}
// _extractTopLevelNodes collects the top-level nodes
func (d *DoubaoShare) _extractTopLevelNodes(rootMap map[string]*RootFileList, rootShares []string) []RootFileList {
var topLevelNodes []RootFileList
// add root-level shares
for _, shareId := range rootShares {
children := make([]RootFileList, 0)
topLevelNodes = append(topLevelNodes, RootFileList{
ShareID: shareId,
VirtualPath: "",
NodeInfo: NodeInfoData{},
Child: &children,
})
}
// add top-level directories
for rootPath, node := range rootMap {
if rootPath == "" {
continue
}
isTopLevel := true
pathParts := strings.Split(rootPath, "/")
if len(pathParts) > 1 {
parentPath := strings.Join(pathParts[:len(pathParts)-1], "/")
if _, exists := rootMap[parentPath]; exists {
isTopLevel = false
}
}
if isTopLevel {
topLevelNodes = append(topLevelNodes, *node)
}
}
return topLevelNodes
}
// _ensurePathExists creates all missing intermediate nodes for a path
func _ensurePathExists(rootMap map[string]*RootFileList, path string) {
if path == "" {
return
}
// nothing to do if the path already exists
if _, exists := rootMap[path]; exists {
return
}
// create the node for the current path
children := make([]RootFileList, 0)
rootMap[path] = &RootFileList{
ShareID: "",
VirtualPath: path,
NodeInfo: NodeInfoData{},
Child: &children,
}
// handle the parent path
pathParts := strings.Split(path, "/")
if len(pathParts) > 1 {
parentPath := strings.Join(pathParts[:len(pathParts)-1], "/")
// make sure the parent path exists
_ensurePathExists(rootMap, parentPath)
// attach the current node as a child of its parent
if parent, exists := rootMap[parentPath]; exists {
*parent.Child = append(*parent.Child, *rootMap[path])
}
}
}
// _extractShareId extracts the share ID from a URL or a bare ID
func _extractShareId(input string) string {
input = strings.TrimSpace(input)
if strings.HasPrefix(input, "http") {
regex := regexp.MustCompile(`/drive/s/([a-zA-Z0-9]+)`)
if matches := regex.FindStringSubmatch(input); len(matches) > 1 {
return matches[1]
}
return ""
}
return input // already a bare ID
}
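// For example:
//
//	_extractShareId("https://www.doubao.com/drive/s/abc123") // "abc123"
//	_extractShareId("abc123")                                // "abc123"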
// _findRootFileByShareID finds the configuration for the given ShareID
func _findRootFileByShareID(rootFiles []RootFileList, shareID string) *RootFileList {
for i, rf := range rootFiles {
if rf.ShareID == shareID {
return &rootFiles[i]
}
if rf.Child != nil && len(*rf.Child) > 0 {
if found := _findRootFileByShareID(*rf.Child, shareID); found != nil {
return found
}
}
}
return nil
}
// _findNodeByPath finds the node at the given path
func _findNodeByPath(rootFiles []RootFileList, path string) *RootFileList {
for i, rf := range rootFiles {
if rf.VirtualPath == path {
return &rootFiles[i]
}
if rf.Child != nil && len(*rf.Child) > 0 {
if found := _findNodeByPath(*rf.Child, path); found != nil {
return found
}
}
}
return nil
}
// _findShareByPath resolves a path to its share and relative path
func _findShareByPath(rootFiles []RootFileList, path string) (*RootFileList, string) {
// exact match or sub-path match
for i, rf := range rootFiles {
if rf.VirtualPath == path {
return &rootFiles[i], ""
}
if rf.VirtualPath != "" && strings.HasPrefix(path, rf.VirtualPath+"/") {
relPath := strings.TrimPrefix(path, rf.VirtualPath+"/")
// check the children first
if rf.Child != nil && len(*rf.Child) > 0 {
if child, childPath := _findShareByPath(*rf.Child, path); child != nil {
return child, childPath
}
}
return &rootFiles[i], relPath
}
// recurse into the children
if rf.Child != nil && len(*rf.Child) > 0 {
if child, childPath := _findShareByPath(*rf.Child, path); child != nil {
return child, childPath
}
}
}
// check root-level shares
for i, rf := range rootFiles {
if rf.VirtualPath == "" && rf.ShareID != "" {
parts := strings.SplitN(path, "/", 2)
if len(parts) > 0 && parts[0] == rf.ShareID {
if len(parts) > 1 {
return &rootFiles[i], parts[1]
}
return &rootFiles[i], ""
}
}
}
return nil, ""
}
// _findShareAndPath resolves a directory to its ShareID and relative path
func (d *DoubaoShare) _findShareAndPath(dir model.Obj) (string, string, error) {
dirPath := dir.GetPath()
// root directory: return empty values, meaning all shares should be listed
if dirPath == "/" || dirPath == "" {
return "", "", nil
}
// if dir is a FileObject, use its ShareID
if fo, ok := dir.(*FileObject); ok && fo.ShareID != "" {
// use the ShareID stored on the object
// compute the relative path (strip the leading slash)
relativePath := strings.TrimPrefix(dirPath, "/")
// recursively look up the matching RootFile
found := _findRootFileByShareID(d.RootFiles, fo.ShareID)
if found != nil {
if found.VirtualPath != "" {
// a share configured with a path prefix affects the relative-path calculation
if strings.HasPrefix(relativePath, found.VirtualPath) {
return fo.ShareID, strings.TrimPrefix(relativePath, found.VirtualPath+"/"), nil
}
}
return fo.ShareID, relativePath, nil
}
// even without a matching RootFile config, fall back to the object's ShareID
return fo.ShareID, relativePath, nil
}
// strip the leading slash
cleanPath := strings.TrimPrefix(dirPath, "/")
// first check for a directly matching root share
for _, rootFile := range d.RootFiles {
if rootFile.VirtualPath == "" && rootFile.ShareID != "" {
// match against the first path segment
parts := strings.SplitN(cleanPath, "/", 2)
if len(parts) > 0 && parts[0] == rootFile.ShareID {
if len(parts) > 1 {
return rootFile.ShareID, parts[1], nil
}
return rootFile.ShareID, "", nil
}
}
}
// look up the share or virtual directory matching this path
share, relPath := _findShareByPath(d.RootFiles, cleanPath)
if share != nil {
return share.ShareID, relPath, nil
}
log.Warnf("[doubao_share] No matching share path found: %s", dirPath)
return "", "", fmt.Errorf("no matching share path found: %s", dirPath)
}
// convertToFileObject converts a File into a FileObject
func (d *DoubaoShare) convertToFileObject(file File, shareId string, relativePath string) *FileObject {
// build the file object
obj := &FileObject{
Object: model.Object{
ID: file.ID,
Name: file.Name,
Size: file.Size,
Modified: time.Unix(file.UpdateTime, 0),
Ctime: time.Unix(file.CreateTime, 0),
IsFolder: file.NodeType == DirectoryType,
Path: path.Join(relativePath, file.Name),
},
ShareID: shareId,
Key: file.Key,
NodeID: file.ID,
NodeType: file.NodeType,
}
return obj
}
// getFilesInPath fetches the files under the given share and path
func (d *DoubaoShare) getFilesInPath(ctx context.Context, shareId, nodeId, relativePath string) ([]model.Obj, error) {
var (
files []File
err error
)
// no nodeId: fetch the share link info via the overview API
if nodeId == "" {
files, err = d.getShareOverview(shareId, "")
if err != nil {
return nil, fmt.Errorf("failed to get share link information: %w", err)
}
result := make([]model.Obj, 0, len(files))
for _, file := range files {
result = append(result, d.convertToFileObject(file, shareId, "/"))
}
return result, nil
} else {
files, err = d.getFiles(shareId, nodeId, "")
if err != nil {
return nil, fmt.Errorf("failed to get share file: %w", err)
}
result := make([]model.Obj, 0, len(files))
for _, file := range files {
result = append(result, d.convertToFileObject(file, shareId, path.Join("/", relativePath)))
}
return result, nil
}
}
// listRootDirectory renders the contents of the root directory
func (d *DoubaoShare) listRootDirectory(ctx context.Context) ([]model.Obj, error) {
objects := make([]model.Obj, 0)
// Two groups: share contents shown directly vs. virtual directories
var directShareIDs []string
addedDirs := make(map[string]bool)
// Process all root nodes
for _, rootFile := range d.RootFiles {
if rootFile.VirtualPath == "" && rootFile.ShareID != "" {
// Share without a path: record the ShareID so its contents can be fetched later
directShareIDs = append(directShareIDs, rootFile.ShareID)
} else {
// Share with a path: show its first-level directory
parts := strings.SplitN(rootFile.VirtualPath, "/", 2)
firstLevel := parts[0]
// Avoid adding same-named directories twice
if _, exists := addedDirs[firstLevel]; exists {
continue
}
// Create the virtual directory object
obj := &FileObject{
Object: model.Object{
ID: "",
Name: firstLevel,
Modified: time.Now(),
Ctime: time.Now(),
IsFolder: true,
Path: path.Join("/", firstLevel),
},
ShareID: rootFile.ShareID,
Key: "",
NodeID: "",
NodeType: DirectoryType,
}
objects = append(objects, obj)
addedDirs[firstLevel] = true
}
}
// Fetch the contents of directly shown shares
for _, shareID := range directShareIDs {
shareFiles, err := d.getFilesInPath(ctx, shareID, "", "")
if err != nil {
log.Warnf("[doubao_share] Failed to get list of files in share %s: %s", shareID, err)
continue
}
objects = append(objects, shareFiles...)
}
return objects, nil
}
// listVirtualDirectoryContent lists the contents of a virtual directory
func (d *DoubaoShare) listVirtualDirectoryContent(dir model.Obj) ([]model.Obj, error) {
dirPath := strings.TrimPrefix(dir.GetPath(), "/")
objects := make([]model.Obj, 0)
// Recursively find the node for this path
node := _findNodeByPath(d.RootFiles, dirPath)
if node != nil && node.Child != nil {
// Show all children of this node
for _, child := range *node.Child {
// Compute the display name (the last path segment)
displayName := child.VirtualPath
if child.VirtualPath != "" {
parts := strings.Split(child.VirtualPath, "/")
displayName = parts[len(parts)-1]
} else if child.ShareID != "" {
displayName = child.ShareID
}
obj := &FileObject{
Object: model.Object{
ID: "",
Name: displayName,
Modified: time.Now(),
Ctime: time.Now(),
IsFolder: true,
Path: path.Join("/", child.VirtualPath),
},
ShareID: child.ShareID,
Key: "",
NodeID: "",
NodeType: DirectoryType,
}
objects = append(objects, obj)
}
}
return objects, nil
}
// generateContentDisposition builds an RFC 5987-compliant Content-Disposition header
func generateContentDisposition(filename string) string {
// Percent-encode for the filename part
encodedName := urlEncode(filename)
// Encode per RFC 5987 for the filename* part
encodedNameRFC5987 := encodeRFC5987(filename)
return fmt.Sprintf("attachment; filename=\"%s\"; filename*=utf-8''%s",
encodedName, encodedNameRFC5987)
}
// encodeRFC5987 encodes a string per RFC 5987, for non-ASCII characters in HTTP header parameters
func encodeRFC5987(s string) string {
var buf strings.Builder
for _, r := range []byte(s) {
// Per RFC 5987, only letters, digits and a few marks may stay unencoded
if (r >= 'a' && r <= 'z') ||
(r >= 'A' && r <= 'Z') ||
(r >= '0' && r <= '9') ||
r == '-' || r == '.' || r == '_' || r == '~' {
buf.WriteByte(r)
} else {
// Everything else must be percent-encoded
fmt.Fprintf(&buf, "%%%02X", r)
}
}
return buf.String()
}
func urlEncode(s string) string {
s = url.QueryEscape(s)
s = strings.ReplaceAll(s, "+", "%20")
return s
}
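For reference, a hedged sketch of the header these helpers produce for a non-ASCII name (output derived from the rules above; the space becomes %20 in both parameters):

	fmt.Println(generateContentDisposition("测试 report.pdf"))
	// attachment; filename="%E6%B5%8B%E8%AF%95%20report.pdf"; filename*=utf-8''%E6%B5%8B%E8%AF%95%20report.pdf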

View File

@ -5,7 +5,6 @@ import (
"context"
"errors"
"fmt"
"io"
"strings"
"text/template"
"time"
@ -159,7 +158,7 @@ func signCommit(m *map[string]interface{}, entity *openpgp.Entity) (string, erro
if err != nil {
return "", err
}
if _, err = io.Copy(armorWriter, &sigBuffer); err != nil {
if _, err = utils.CopyWithBuffer(armorWriter, &sigBuffer); err != nil {
return "", err
}
_ = armorWriter.Close()

View File

@ -2,7 +2,6 @@ package template
import (
"context"
"crypto/md5"
"encoding/base64"
"encoding/hex"
"fmt"
@ -17,6 +16,7 @@ import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/foxxorcat/mopan-sdk-go"
"github.com/go-resty/resty/v2"
@ -273,23 +273,14 @@ func (d *ILanZou) Remove(ctx context.Context, obj model.Obj) error {
const DefaultPartSize = 1024 * 1024 * 8
func (d *ILanZou) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
h := md5.New()
// need to calculate md5 of the full content
tempFile, err := s.CacheFullInTempFile()
if err != nil {
return nil, err
etag := s.GetHash().GetHash(utils.MD5)
var err error
if len(etag) != utils.MD5.Width {
_, etag, err = stream.CacheFullInTempFileAndHash(s, utils.MD5)
if err != nil {
return nil, err
}
}
defer func() {
_ = tempFile.Close()
}()
if _, err = utils.CopyWithBuffer(h, tempFile); err != nil {
return nil, err
}
_, err = tempFile.Seek(0, io.SeekStart)
if err != nil {
return nil, err
}
etag := hex.EncodeToString(h.Sum(nil))
// get upToken
res, err := d.proved("/7n/getUpToken", http.MethodPost, func(req *resty.Request) {
req.SetBody(base.Json{
@ -309,7 +300,7 @@ func (d *ILanZou) Put(ctx context.Context, dstDir model.Obj, s model.FileStreame
key := fmt.Sprintf("disk/%d/%d/%d/%s/%016d", now.Year(), now.Month(), now.Day(), d.account, now.UnixMilli())
reader := driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
Reader: &driver.SimpleReaderWithSize{
Reader: tempFile,
Reader: s,
Size: s.GetSize(),
},
UpdateProgress: up,

View File

@ -4,13 +4,12 @@ import (
"context"
"fmt"
"net/url"
stdpath "path"
"path/filepath"
"strings"
"path"
shell "github.com/ipfs/go-ipfs-api"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
shell "github.com/ipfs/go-ipfs-api"
)
type IPFS struct {
@ -43,85 +42,143 @@ func (d *IPFS) Drop(ctx context.Context) error {
}
func (d *IPFS) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]model.Obj, error) {
path := dir.GetPath()
if path[len(path):] != "/" {
path += "/"
var ipfsPath string
cid := dir.GetID()
if cid != "" {
ipfsPath = path.Join("/ipfs", cid)
} else {
// ipns DNS resolution can fail intermittently, so the cid may need to be fetched again; other cases should not error
ipfsPath = dir.GetPath()
switch d.Mode {
case "ipfs":
ipfsPath = path.Join("/ipfs", ipfsPath)
case "ipns":
ipfsPath = path.Join("/ipns", ipfsPath)
case "mfs":
fileStat, err := d.sh.FilesStat(ctx, ipfsPath)
if err != nil {
return nil, err
}
ipfsPath = path.Join("/ipfs", fileStat.Hash)
default:
return nil, fmt.Errorf("mode error")
}
}
path_cid, err := d.sh.FilesStat(ctx, path)
if err != nil {
return nil, err
}
dirs, err := d.sh.List(path_cid.Hash)
dirs, err := d.sh.List(ipfsPath)
if err != nil {
return nil, err
}
objlist := []model.Obj{}
for _, file := range dirs {
gateurl := *d.gateURL
gateurl.Path = "ipfs/" + file.Hash
gateurl.RawQuery = "filename=" + url.PathEscape(file.Name)
objlist = append(objlist, &model.ObjectURL{
Object: model.Object{ID: file.Hash, Name: file.Name, Size: int64(file.Size), IsFolder: file.Type == 1},
Url: model.Url{Url: gateurl.String()},
})
objlist = append(objlist, &model.Object{ID: file.Hash, Name: file.Name, Size: int64(file.Size), IsFolder: file.Type == 1})
}
return objlist, nil
}
func (d *IPFS) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
link := d.Gateway + "/ipfs/" + file.GetID() + "/?filename=" + url.PathEscape(file.GetName())
return &model.Link{URL: link}, nil
gateurl := d.gateURL.JoinPath("/ipfs/", file.GetID())
gateurl.RawQuery = "filename=" + url.QueryEscape(file.GetName())
return &model.Link{URL: gateurl.String()}, nil
}
func (d *IPFS) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) error {
path := parentDir.GetPath()
if path[len(path):] != "/" {
path += "/"
func (d *IPFS) Get(ctx context.Context, rawPath string) (model.Obj, error) {
rawPath = path.Join(d.GetRootPath(), rawPath)
var ipfsPath string
switch d.Mode {
case "ipfs":
ipfsPath = path.Join("/ipfs", rawPath)
case "ipns":
ipfsPath = path.Join("/ipns", rawPath)
case "mfs":
fileStat, err := d.sh.FilesStat(ctx, rawPath)
if err != nil {
return nil, err
}
ipfsPath = path.Join("/ipfs", fileStat.Hash)
default:
return nil, fmt.Errorf("mode error")
}
return d.sh.FilesMkdir(ctx, path+dirName)
file, err := d.sh.FilesStat(ctx, ipfsPath)
if err != nil {
return nil, err
}
return &model.Object{ID: file.Hash, Name: path.Base(rawPath), Path: rawPath, Size: int64(file.Size), IsFolder: file.Type == "directory"}, nil
}
func (d *IPFS) Move(ctx context.Context, srcObj, dstDir model.Obj) error {
return d.sh.FilesMv(ctx, srcObj.GetPath(), dstDir.GetPath())
func (d *IPFS) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {
if d.Mode != "mfs" {
return nil, fmt.Errorf("only write in mfs mode")
}
dirPath := parentDir.GetPath()
err := d.sh.FilesMkdir(ctx, path.Join(dirPath, dirName), shell.FilesMkdir.Parents(true))
if err != nil {
return nil, err
}
file, err := d.sh.FilesStat(ctx, path.Join(dirPath, dirName))
if err != nil {
return nil, err
}
return &model.Object{ID: file.Hash, Name: dirName, Path: path.Join(dirPath, dirName), Size: int64(file.Size), IsFolder: true}, nil
}
func (d *IPFS) Rename(ctx context.Context, srcObj model.Obj, newName string) error {
newFileName := filepath.Dir(srcObj.GetPath()) + "/" + newName
return d.sh.FilesMv(ctx, srcObj.GetPath(), strings.ReplaceAll(newFileName, "\\", "/"))
func (d *IPFS) Move(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
if d.Mode != "mfs" {
return nil, fmt.Errorf("only write in mfs mode")
}
dstPath := path.Join(dstDir.GetPath(), path.Base(srcObj.GetPath()))
d.sh.FilesRm(ctx, dstPath, true)
return &model.Object{ID: srcObj.GetID(), Name: srcObj.GetName(), Path: dstPath, Size: int64(srcObj.GetSize()), IsFolder: srcObj.IsDir()},
d.sh.FilesMv(ctx, srcObj.GetPath(), dstDir.GetPath())
}
func (d *IPFS) Copy(ctx context.Context, srcObj, dstDir model.Obj) error {
// TODO copy obj, optional
fmt.Println(srcObj.GetPath())
fmt.Println(dstDir.GetPath())
newFileName := dstDir.GetPath() + "/" + filepath.Base(srcObj.GetPath())
fmt.Println(newFileName)
return d.sh.FilesCp(ctx, srcObj.GetPath(), strings.ReplaceAll(newFileName, "\\", "/"))
func (d *IPFS) Rename(ctx context.Context, srcObj model.Obj, newName string) (model.Obj, error) {
if d.Mode != "mfs" {
return nil, fmt.Errorf("only write in mfs mode")
}
dstPath := path.Join(path.Dir(srcObj.GetPath()), newName)
d.sh.FilesRm(ctx, dstPath, true)
return &model.Object{ID: srcObj.GetID(), Name: newName, Path: dstPath, Size: int64(srcObj.GetSize()),
IsFolder: srcObj.IsDir()}, d.sh.FilesMv(ctx, srcObj.GetPath(), dstPath)
}
func (d *IPFS) Copy(ctx context.Context, srcObj, dstDir model.Obj) (model.Obj, error) {
if d.Mode != "mfs" {
return nil, fmt.Errorf("only write in mfs mode")
}
dstPath := path.Join(dstDir.GetPath(), path.Base(srcObj.GetPath()))
d.sh.FilesRm(ctx, dstPath, true)
return &model.Object{ID: srcObj.GetID(), Name: srcObj.GetName(), Path: dstPath, Size: int64(srcObj.GetSize()), IsFolder: srcObj.IsDir()},
d.sh.FilesCp(ctx, path.Join("/ipfs/", srcObj.GetID()), dstPath, shell.FilesCp.Parents(true))
}
func (d *IPFS) Remove(ctx context.Context, obj model.Obj) error {
// TODO remove obj, optional
if d.Mode != "mfs" {
return fmt.Errorf("only write in mfs mode")
}
return d.sh.FilesRm(ctx, obj.GetPath(), true)
}
func (d *IPFS) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer, up driver.UpdateProgress) error {
// TODO upload file, optional
_, err := d.sh.Add(driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
func (d *IPFS) Put(ctx context.Context, dstDir model.Obj, s model.FileStreamer, up driver.UpdateProgress) (model.Obj, error) {
if d.Mode != "mfs" {
return nil, fmt.Errorf("only write in mfs mode")
}
outHash, err := d.sh.Add(driver.NewLimitedUploadStream(ctx, &driver.ReaderUpdatingProgress{
Reader: s,
UpdateProgress: up,
}), ToFiles(stdpath.Join(dstDir.GetPath(), s.GetName())))
return err
}
func ToFiles(dstDir string) shell.AddOpts {
return func(rb *shell.RequestBuilder) error {
rb.Option("to-files", dstDir)
return nil
}))
if err != nil {
return nil, err
}
dstPath := path.Join(dstDir.GetPath(), s.GetName())
if s.GetExist() != nil {
d.sh.FilesRm(ctx, dstPath, true)
}
err = d.sh.FilesCp(ctx, path.Join("/ipfs/", outHash), dstPath, shell.FilesCp.Parents(true))
gateurl := d.gateURL.JoinPath("/ipfs/", outHash)
gateurl.RawQuery = "filename=" + url.QueryEscape(s.GetName())
return &model.Object{ID: outHash, Name: s.GetName(), Path: dstPath, Size: int64(s.GetSize()), IsFolder: s.IsDir()}, err
}
//func (d *Template) Other(ctx context.Context, args model.OtherArgs) (interface{}, error) {

View File

@ -8,14 +8,16 @@ import (
type Addition struct {
// Usually one of two
driver.RootPath
Endpoint string `json:"endpoint" default:"http://127.0.0.1:5001"`
Gateway string `json:"gateway" default:"https://ipfs.io"`
Mode string `json:"mode" options:"ipfs,ipns,mfs" type:"select" required:"true"`
Endpoint string `json:"endpoint" default:"http://127.0.0.1:5001" required:"true"`
Gateway string `json:"gateway" default:"http://127.0.0.1:8080" required:"true"`
}
var config = driver.Config{
Name: "IPFS API",
DefaultRoot: "/",
LocalSort: true,
OnlyProxy: false,
}
func init() {

View File

@ -78,6 +78,42 @@ func RemoveNotes(html string) string {
})
}
// RemoveJSComment strips JavaScript comments
func RemoveJSComment(data string) string {
var result strings.Builder
inComment := false
inSingleLineComment := false
for i := 0; i < len(data); i++ {
v := data[i]
if inSingleLineComment && (v == '\n' || v == '\r') {
inSingleLineComment = false
result.WriteByte(v)
continue
}
if inComment && v == '*' && i+1 < len(data) && data[i+1] == '/' {
inComment = false
i++
continue
}
if inComment || inSingleLineComment {
continue
}
if v == '/' && i+1 < len(data) {
nextChar := data[i+1]
if nextChar == '*' {
inComment = true
i++
continue
} else if nextChar == '/' {
inSingleLineComment = true
i++
continue
}
}
result.WriteByte(v)
}
return result.String()
}
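A hedged usage sketch of the scanner above: line and block comments are dropped while newlines are preserved. Note the scanner is not string-aware, so a literal "//" inside a quoted URL would also be treated as a comment start.

	src := "var a = 1; // note\n/* block */var b = 2;"
	fmt.Println(RemoveJSComment(src))
	// var a = 1;
	// var b = 2;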
var findAcwScV2Reg = regexp.MustCompile(`arg1='([0-9A-Z]+)'`)
// When the page is accessed too frequently (or in some other cases), an obfuscated page is sometimes returned first; it computes an acw_sc__v2 value that must be set before re-requesting the page to obtain the normal content

View File

@ -348,6 +348,10 @@ func (d *LanZou) getFilesByShareUrl(shareID, pwd string, sharePageData string) (
file FileOrFolderByShareUrl
)
// Strip comments
sharePageData = RemoveNotes(sharePageData)
sharePageData = RemoveJSComment(sharePageData)
// 需要密码
if strings.Contains(sharePageData, "pwdload") || strings.Contains(sharePageData, "passwddiv") {
sharePageData, err := getJSFunctionByName(sharePageData, "down_p")

View File

@ -35,6 +35,10 @@ type Local struct {
// zero means no limit
thumbConcurrency int
thumbTokenBucket TokenBucket
// video thumb position
videoThumbPos float64
videoThumbPosIsPercentage bool
}
func (d *Local) Config() driver.Config {
@ -92,6 +96,8 @@ func (d *Local) Init(ctx context.Context) error {
if val < 0 || val > 100 {
return fmt.Errorf("invalid video_thumb_pos value: %s, the precentage must be a number between 0 and 100", d.VideoThumbPos)
}
d.videoThumbPosIsPercentage = true
d.videoThumbPos = val / 100
} else {
val, err := strconv.ParseFloat(d.VideoThumbPos, 64)
if err != nil {
@ -100,6 +106,8 @@ func (d *Local) Init(ctx context.Context) error {
if val < 0 {
return fmt.Errorf("invalid video_thumb_pos value: %s, the time must be a positive number", d.VideoThumbPos)
}
d.videoThumbPosIsPercentage = false
d.videoThumbPos = val
}
return nil
}

View File

@ -61,22 +61,14 @@ func (d *Local) GetSnapshot(videoPath string) (imgData *bytes.Buffer, err error)
}
var ss string
if strings.HasSuffix(d.VideoThumbPos, "%") {
percentage, err := strconv.ParseFloat(strings.TrimSuffix(d.VideoThumbPos, "%"), 64)
if err != nil {
return nil, err
}
ss = fmt.Sprintf("%f", totalDuration*percentage/100)
if d.videoThumbPosIsPercentage {
ss = fmt.Sprintf("%f", totalDuration*d.videoThumbPos)
} else {
val, err := strconv.ParseFloat(d.VideoThumbPos, 64)
if err != nil {
return nil, err
}
// If the value is greater than the total duration, use the total duration
if val > totalDuration {
if d.videoThumbPos > totalDuration {
ss = fmt.Sprintf("%f", totalDuration)
} else {
ss = d.VideoThumbPos
ss = fmt.Sprintf("%f", d.videoThumbPos)
}
}
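A hedged condensation of the branch above (totalDuration in seconds; after Init, videoThumbPos is either a 0–1 fraction or an absolute timestamp):

	// seekSeconds mirrors the logic above: a percentage maps to a fraction
	// of the duration, an absolute time is clamped to the duration.
	// (Hypothetical helper, not part of the driver; imports: math.)
	func seekSeconds(totalDuration, pos float64, isPercentage bool) float64 {
		if isPercentage {
			return totalDuration * pos // pos was already divided by 100 in Init
		}
		return math.Min(pos, totalDuration)
	}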

View File

@ -56,12 +56,21 @@ func (d *Mega) List(ctx context.Context, dir model.Obj, args model.ListArgs) ([]
if err != nil {
return nil, err
}
res := make([]model.Obj, 0)
fn := make(map[string]model.Obj)
for i := range nodes {
n := nodes[i]
if n.GetType() == mega.FILE || n.GetType() == mega.FOLDER {
res = append(res, &MegaNode{n})
if n.GetType() != mega.FILE && n.GetType() != mega.FOLDER {
continue
}
if _, ok := fn[n.GetName()]; !ok {
fn[n.GetName()] = &MegaNode{n}
} else if sameNameObj := fn[n.GetName()]; (&MegaNode{n}).ModTime().After(sameNameObj.ModTime()) {
fn[n.GetName()] = &MegaNode{n}
}
}
res := make([]model.Obj, 0)
for _, v := range fn {
res = append(res, v)
}
return res, nil
}
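The dedup rule in isolation — a hedged sketch with a plain struct standing in for *MegaNode (imports: time):

	type entry struct {
		name string
		mod  time.Time
	}

	// newestByName keeps, for each name, the most recently modified entry,
	// matching the map-based filter above.
	func newestByName(entries []entry) map[string]entry {
		byName := make(map[string]entry)
		for _, e := range entries {
			if cur, ok := byName[e.name]; !ok || e.mod.After(cur.mod) {
				byName[e.name] = e
			}
		}
		return byName
	}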

View File

@ -269,9 +269,6 @@ func (d *MoPan) Put(ctx context.Context, dstDir model.Obj, stream model.FileStre
if err != nil {
return nil, err
}
defer func() {
_ = file.Close()
}()
// step.1
uploadPartData, err := mopan.InitUploadPartData(ctx, mopan.UpdloadFileParam{
@ -315,9 +312,6 @@ func (d *MoPan) Put(ctx context.Context, dstDir model.Obj, stream model.FileStre
if utils.IsCanceled(upCtx) {
break
}
if err = sem.Acquire(ctx, 1); err != nil {
break
}
i, part, byteSize := i, part, initUpdload.PartSize
if part.PartNumber == uploadPartData.PartTotal {
byteSize = initUpdload.LastPartSize
@ -325,6 +319,9 @@ func (d *MoPan) Put(ctx context.Context, dstDir model.Obj, stream model.FileStre
// step.4
threadG.Go(func(ctx context.Context) error {
if err = sem.Acquire(ctx, 1); err != nil {
return err
}
defer sem.Release(1)
reader := io.NewSectionReader(file, int64(part.PartNumber-1)*initUpdload.PartSize, byteSize)
req, err := part.NewRequest(ctx, driver.NewLimitedUploadStream(ctx, reader))

View File

@ -2,13 +2,13 @@ package netease_music
import (
"context"
"github.com/alist-org/alist/v3/internal/driver"
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/sign"
"github.com/alist-org/alist/v3/pkg/http_range"
@ -28,8 +28,8 @@ type SongResp struct {
}
type ListResp struct {
Size string `json:"size"`
MaxSize string `json:"maxSize"`
Size int64 `json:"size"`
MaxSize int64 `json:"maxSize"`
Data []struct {
AddTime int64 `json:"addTime"`
FileName string `json:"fileName"`

View File

@ -227,7 +227,6 @@ func (d *NeteaseMusic) putSongStream(ctx context.Context, stream model.FileStrea
if err != nil {
return err
}
defer tmp.Close()
u := uploader{driver: d, file: tmp}

View File

@ -8,6 +8,7 @@ import (
"io"
"net/http"
stdpath "path"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
@ -17,7 +18,6 @@ import (
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
jsoniter "github.com/json-iterator/go"
log "github.com/sirupsen/logrus"
)
var onedriveHostMap = map[string]Host{
@ -204,23 +204,22 @@ func (d *Onedrive) upBig(ctx context.Context, dstDir model.Obj, stream model.Fil
uploadUrl := jsoniter.Get(res, "uploadUrl").ToString()
var finish int64 = 0
DEFAULT := d.ChunkSize * 1024 * 1024
retryCount := 0
maxRetries := 3
for finish < stream.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
log.Debugf("upload: %d", finish)
var byteSize int64 = DEFAULT
left := stream.GetSize() - finish
if left < DEFAULT {
byteSize = left
}
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[Onedrive] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(stream, byteData)
log.Debug(err, n)
utils.Log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest("PUT", uploadUrl, driver.NewLimitedUploadStream(ctx, bytes.NewBuffer(byteData)))
req, err := http.NewRequest("PUT", uploadUrl, driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
if err != nil {
return err
}
@ -228,19 +227,31 @@ func (d *Onedrive) upBig(ctx context.Context, dstDir model.Obj, stream model.Fil
req.ContentLength = byteSize
// req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
req.Header.Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", finish, finish+byteSize-1, stream.GetSize()))
finish += byteSize
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
// https://learn.microsoft.com/zh-cn/onedrive/developer/rest-api/api/driveitem_createuploadsession
if res.StatusCode != 201 && res.StatusCode != 202 && res.StatusCode != 200 {
switch {
case res.StatusCode >= 500 && res.StatusCode <= 504:
retryCount++
if retryCount > maxRetries {
res.Body.Close()
return fmt.Errorf("upload failed after %d retries due to server errors, error %d", maxRetries, res.StatusCode)
}
backoff := time.Duration(1<<retryCount) * time.Second
utils.Log.Warnf("[Onedrive] server errors %d while uploading, retrying after %v...", res.StatusCode, backoff)
time.Sleep(backoff)
case res.StatusCode != 201 && res.StatusCode != 202 && res.StatusCode != 200:
data, _ := io.ReadAll(res.Body)
res.Body.Close()
return errors.New(string(data))
default:
res.Body.Close()
retryCount = 0
finish += byteSize
up(float64(finish) * 100 / float64(stream.GetSize()))
}
res.Body.Close()
up(float64(finish) * 100 / float64(stream.GetSize()))
}
return nil
}
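The retry schedule above, spelled out: retryCount is incremented before the shift, so with maxRetries = 3 the waits are 2s, 4s and 8s (a hedged sketch; imports: fmt, time):

	for retryCount := 1; retryCount <= 3; retryCount++ {
		backoff := time.Duration(1<<retryCount) * time.Second
		fmt.Println(backoff) // 2s, 4s, 8s
	}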

View File

@ -8,6 +8,7 @@ import (
"io"
"net/http"
stdpath "path"
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
@ -17,7 +18,6 @@ import (
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
jsoniter "github.com/json-iterator/go"
log "github.com/sirupsen/logrus"
)
var onedriveHostMap = map[string]Host{
@ -154,23 +154,22 @@ func (d *OnedriveAPP) upBig(ctx context.Context, dstDir model.Obj, stream model.
uploadUrl := jsoniter.Get(res, "uploadUrl").ToString()
var finish int64 = 0
DEFAULT := d.ChunkSize * 1024 * 1024
retryCount := 0
maxRetries := 3
for finish < stream.GetSize() {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
log.Debugf("upload: %d", finish)
var byteSize int64 = DEFAULT
left := stream.GetSize() - finish
if left < DEFAULT {
byteSize = left
}
byteSize := min(left, DEFAULT)
utils.Log.Debugf("[OnedriveAPP] upload range: %d-%d/%d", finish, finish+byteSize-1, stream.GetSize())
byteData := make([]byte, byteSize)
n, err := io.ReadFull(stream, byteData)
log.Debug(err, n)
utils.Log.Debug(err, n)
if err != nil {
return err
}
req, err := http.NewRequest("PUT", uploadUrl, driver.NewLimitedUploadStream(ctx, bytes.NewBuffer(byteData)))
req, err := http.NewRequest("PUT", uploadUrl, driver.NewLimitedUploadStream(ctx, bytes.NewReader(byteData)))
if err != nil {
return err
}
@ -178,19 +177,31 @@ func (d *OnedriveAPP) upBig(ctx context.Context, dstDir model.Obj, stream model.
req.ContentLength = byteSize
// req.Header.Set("Content-Length", strconv.Itoa(int(byteSize)))
req.Header.Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", finish, finish+byteSize-1, stream.GetSize()))
finish += byteSize
res, err := base.HttpClient.Do(req)
if err != nil {
return err
}
// https://learn.microsoft.com/zh-cn/onedrive/developer/rest-api/api/driveitem_createuploadsession
if res.StatusCode != 201 && res.StatusCode != 202 && res.StatusCode != 200 {
switch {
case res.StatusCode >= 500 && res.StatusCode <= 504:
retryCount++
if retryCount > maxRetries {
res.Body.Close()
return fmt.Errorf("upload failed after %d retries due to server errors, error %d", maxRetries, res.StatusCode)
}
backoff := time.Duration(1<<retryCount) * time.Second
utils.Log.Warnf("[OnedriveAPP] server errors %d while uploading, retrying after %v...", res.StatusCode, backoff)
time.Sleep(backoff)
case res.StatusCode != 201 && res.StatusCode != 202 && res.StatusCode != 200:
data, _ := io.ReadAll(res.Body)
res.Body.Close()
return errors.New(string(data))
default:
res.Body.Close()
retryCount = 0
finish += byteSize
up(float64(finish) * 100 / float64(stream.GetSize()))
}
res.Body.Close()
up(float64(finish) * 100 / float64(stream.GetSize()))
}
return nil
}

View File

@ -69,7 +69,7 @@ func (d *PikPak) Init(ctx context.Context) (err error) {
d.ClientVersion = PCClientVersion
d.PackageName = PCPackageName
d.Algorithms = PCAlgorithms
d.UserAgent = "MainWindow Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) PikPak/2.5.6.4831 Chrome/100.0.4896.160 Electron/18.3.15 Safari/537.36"
d.UserAgent = "MainWindow Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) PikPak/2.6.11.4955 Chrome/100.0.4896.160 Electron/18.3.15 Safari/537.36"
}
if d.Addition.CaptchaToken != "" && d.Addition.RefreshToken == "" {

View File

@ -7,13 +7,6 @@ import (
"crypto/sha1"
"encoding/hex"
"fmt"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/aliyun/aliyun-oss-go-sdk/oss"
jsoniter "github.com/json-iterator/go"
"github.com/pkg/errors"
"io"
"net/http"
"path/filepath"
@ -24,38 +17,43 @@ import (
"time"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/aliyun/aliyun-oss-go-sdk/oss"
"github.com/go-resty/resty/v2"
jsoniter "github.com/json-iterator/go"
"github.com/pkg/errors"
)
var AndroidAlgorithms = []string{
"7xOq4Z8s",
"QE9/9+IQco",
"WdX5J9CPLZp",
"NmQ5qFAXqH3w984cYhMeC5TJR8j",
"cc44M+l7GDhav",
"KxGjo/wHB+Yx8Lf7kMP+/m9I+",
"wla81BUVSmDkctHDpUT",
"c6wMr1sm1WxiR3i8LDAm3W",
"hRLrEQCFNYi0PFPV",
"o1J41zIraDtJPNuhBu7Ifb/q3",
"U",
"RrbZvV0CTu3gaZJ56PVKki4IeP",
"NNuRbLckJqUp1Do0YlrKCUP",
"UUwnBbipMTvInA0U0E9",
"VzGc",
"SOP04dGzk0TNO7t7t9ekDbAmx+eq0OI1ovEx",
"nVBjhYiND4hZ2NCGyV5beamIr7k6ifAsAbl",
"Ddjpt5B/Cit6EDq2a6cXgxY9lkEIOw4yC1GDF28KrA",
"VVCogcmSNIVvgV6U+AochorydiSymi68YVNGiz",
"u5ujk5sM62gpJOsB/1Gu/zsfgfZO",
"dXYIiBOAHZgzSruaQ2Nhrqc2im",
"z5jUTBSIpBN9g4qSJGlidNAutX6",
"KJE2oveZ34du/g1tiimm",
}
var WebAlgorithms = []string{
"fyZ4+p77W1U4zcWBUwefAIFhFxvADWtT1wzolCxhg9q7etmGUjXr",
"uSUX02HYJ1IkyLdhINEFcCf7l2",
"iWt97bqD/qvjIaPXB2Ja5rsBWtQtBZZmaHH2rMR41",
"3binT1s/5a1pu3fGsN",
"8YCCU+AIr7pg+yd7CkQEY16lDMwi8Rh4WNp5",
"DYS3StqnAEKdGddRP8CJrxUSFh",
"crquW+4",
"ryKqvW9B9hly+JAymXCIfag5Z",
"Hr08T/NDTX1oSJfHk90c",
"i",
"C9qPpZLN8ucRTaTiUMWYS9cQvWOE",
"+r6CQVxjzJV6LCV",
"F",
"pFJRC",
"9WXYIDGrwTCz2OiVlgZa90qpECPD6olt",
"/750aCr4lm/Sly/c",
"RB+DT/gZCrbV",
"",
"CyLsf7hdkIRxRm215hl",
"7xHvLi2tOYP0Y92b",
"ZGTXXxu8E/MIWaEDB+Sm/",
"1UI3",
"E7fP5Pfijd+7K+t6Tg/NhuLq0eEUVChpJSkrKxpO",
"ihtqpG6FMt65+Xk+tWUH2",
"NhXXU9rg4XXdzo7u5o",
}
var PCAlgorithms = []string{
@ -80,17 +78,17 @@ const (
const (
AndroidClientID = "YNxT9w7GMdWvEOKa"
AndroidClientSecret = "dbw2OtmVEeuUvIptb1Coyg"
AndroidClientVersion = "1.49.3"
AndroidClientVersion = "1.53.2"
AndroidPackageName = "com.pikcloud.pikpak"
AndroidSdkVersion = "2.0.4.204101"
AndroidSdkVersion = "2.0.6.206003"
WebClientID = "YUMx5nI8ZU8Ap8pm"
WebClientSecret = "dbw2OtmVEeuUvIptb1Coyg"
WebClientVersion = "undefined"
WebPackageName = "drive.mypikpak.com"
WebClientVersion = "2.0.0"
WebPackageName = "mypikpak.com"
WebSdkVersion = "8.0.3"
PCClientID = "YvtoWO6GNHiuCl7x"
PCClientSecret = "1NIH5R1IEe2pAxZE3hv3uA"
PCClientVersion = "undefined" // 2.5.6.4831
PCClientVersion = "undefined" // 2.6.11.4955
PCPackageName = "mypikpak.com"
PCSdkVersion = "8.0.3"
)
@ -518,7 +516,7 @@ func (d *PikPak) UploadByMultipart(ctx context.Context, params *S3Params, fileSi
continue
}
b := driver.NewLimitedUploadStream(ctx, bytes.NewBuffer(buf))
b := driver.NewLimitedUploadStream(ctx, bytes.NewReader(buf))
if part, err = bucket.UploadPart(imur, b, chunk.Size, chunk.Number, OssOption(params)...); err == nil {
break
}

View File

@ -66,7 +66,7 @@ func (d *PikPakShare) Init(ctx context.Context) error {
d.ClientVersion = PCClientVersion
d.PackageName = PCPackageName
d.Algorithms = PCAlgorithms
d.UserAgent = "MainWindow Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) PikPak/2.5.6.4831 Chrome/100.0.4896.160 Electron/18.3.15 Safari/537.36"
d.UserAgent = "MainWindow Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) PikPak/2.6.11.4955 Chrome/100.0.4896.160 Electron/18.3.15 Safari/537.36"
}
// Obtain the CaptchaToken

View File

@ -17,34 +17,32 @@ import (
)
var AndroidAlgorithms = []string{
"7xOq4Z8s",
"QE9/9+IQco",
"WdX5J9CPLZp",
"NmQ5qFAXqH3w984cYhMeC5TJR8j",
"cc44M+l7GDhav",
"KxGjo/wHB+Yx8Lf7kMP+/m9I+",
"wla81BUVSmDkctHDpUT",
"c6wMr1sm1WxiR3i8LDAm3W",
"hRLrEQCFNYi0PFPV",
"o1J41zIraDtJPNuhBu7Ifb/q3",
"U",
"RrbZvV0CTu3gaZJ56PVKki4IeP",
"NNuRbLckJqUp1Do0YlrKCUP",
"UUwnBbipMTvInA0U0E9",
"VzGc",
"SOP04dGzk0TNO7t7t9ekDbAmx+eq0OI1ovEx",
"nVBjhYiND4hZ2NCGyV5beamIr7k6ifAsAbl",
"Ddjpt5B/Cit6EDq2a6cXgxY9lkEIOw4yC1GDF28KrA",
"VVCogcmSNIVvgV6U+AochorydiSymi68YVNGiz",
"u5ujk5sM62gpJOsB/1Gu/zsfgfZO",
"dXYIiBOAHZgzSruaQ2Nhrqc2im",
"z5jUTBSIpBN9g4qSJGlidNAutX6",
"KJE2oveZ34du/g1tiimm",
}
var WebAlgorithms = []string{
"fyZ4+p77W1U4zcWBUwefAIFhFxvADWtT1wzolCxhg9q7etmGUjXr",
"uSUX02HYJ1IkyLdhINEFcCf7l2",
"iWt97bqD/qvjIaPXB2Ja5rsBWtQtBZZmaHH2rMR41",
"3binT1s/5a1pu3fGsN",
"8YCCU+AIr7pg+yd7CkQEY16lDMwi8Rh4WNp5",
"DYS3StqnAEKdGddRP8CJrxUSFh",
"crquW+4",
"ryKqvW9B9hly+JAymXCIfag5Z",
"Hr08T/NDTX1oSJfHk90c",
"i",
"C9qPpZLN8ucRTaTiUMWYS9cQvWOE",
"+r6CQVxjzJV6LCV",
"F",
"pFJRC",
"9WXYIDGrwTCz2OiVlgZa90qpECPD6olt",
"/750aCr4lm/Sly/c",
"RB+DT/gZCrbV",
"",
"CyLsf7hdkIRxRm215hl",
"7xHvLi2tOYP0Y92b",
"ZGTXXxu8E/MIWaEDB+Sm/",
"1UI3",
"E7fP5Pfijd+7K+t6Tg/NhuLq0eEUVChpJSkrKxpO",
"ihtqpG6FMt65+Xk+tWUH2",
"NhXXU9rg4XXdzo7u5o",
}
var PCAlgorithms = []string{
@ -63,17 +61,17 @@ var PCAlgorithms = []string{
const (
AndroidClientID = "YNxT9w7GMdWvEOKa"
AndroidClientSecret = "dbw2OtmVEeuUvIptb1Coyg"
AndroidClientVersion = "1.49.3"
AndroidClientVersion = "1.53.2"
AndroidPackageName = "com.pikcloud.pikpak"
AndroidSdkVersion = "2.0.4.204101"
AndroidSdkVersion = "2.0.6.206003"
WebClientID = "YUMx5nI8ZU8Ap8pm"
WebClientSecret = "dbw2OtmVEeuUvIptb1Coyg"
WebClientVersion = "undefined"
WebPackageName = "drive.mypikpak.com"
WebClientVersion = "2.0.0"
WebPackageName = "mypikpak.com"
WebSdkVersion = "8.0.3"
PCClientID = "YvtoWO6GNHiuCl7x"
PCClientSecret = "1NIH5R1IEe2pAxZE3hv3uA"
PCClientVersion = "undefined" // 2.5.6.4831
PCClientVersion = "undefined" // 2.6.11.4955
PCPackageName = "mypikpak.com"
PCSdkVersion = "8.0.3"
)

View File

@ -3,9 +3,8 @@ package quark
import (
"bytes"
"context"
"crypto/md5"
"crypto/sha1"
"encoding/hex"
"hash"
"io"
"net/http"
"time"
@ -14,6 +13,7 @@ import (
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
streamPkg "github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/go-resty/resty/v2"
log "github.com/sirupsen/logrus"
@ -74,7 +74,7 @@ func (d *QuarkOrUC) Link(ctx context.Context, file model.Obj, args model.LinkArg
"Referer": []string{d.conf.referer},
"User-Agent": []string{ua},
},
Concurrency: 2,
Concurrency: 3,
PartSize: 10 * utils.MB,
}, nil
}
@ -136,33 +136,33 @@ func (d *QuarkOrUC) Remove(ctx context.Context, obj model.Obj) error {
}
func (d *QuarkOrUC) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
tempFile, err := stream.CacheFullInTempFile()
if err != nil {
return err
md5Str, sha1Str := stream.GetHash().GetHash(utils.MD5), stream.GetHash().GetHash(utils.SHA1)
var (
md5 hash.Hash
sha1 hash.Hash
)
writers := []io.Writer{}
if len(md5Str) != utils.MD5.Width {
md5 = utils.MD5.NewFunc()
writers = append(writers, md5)
}
defer func() {
_ = tempFile.Close()
}()
m := md5.New()
_, err = utils.CopyWithBuffer(m, tempFile)
if err != nil {
return err
if len(sha1Str) != utils.SHA1.Width {
sha1 = utils.SHA1.NewFunc()
writers = append(writers, sha1)
}
_, err = tempFile.Seek(0, io.SeekStart)
if err != nil {
return err
if len(writers) > 0 {
_, err := streamPkg.CacheFullInTempFileAndWriter(stream, io.MultiWriter(writers...))
if err != nil {
return err
}
if md5 != nil {
md5Str = hex.EncodeToString(md5.Sum(nil))
}
if sha1 != nil {
sha1Str = hex.EncodeToString(sha1.Sum(nil))
}
}
md5Str := hex.EncodeToString(m.Sum(nil))
s := sha1.New()
_, err = utils.CopyWithBuffer(s, tempFile)
if err != nil {
return err
}
_, err = tempFile.Seek(0, io.SeekStart)
if err != nil {
return err
}
sha1Str := hex.EncodeToString(s.Sum(nil))
// pre
pre, err := d.upPre(stream, dstDir.GetID())
if err != nil {
@ -178,27 +178,28 @@ func (d *QuarkOrUC) Put(ctx context.Context, dstDir model.Obj, stream model.File
return nil
}
// part up
partSize := pre.Metadata.PartSize
var part []byte
md5s := make([]string, 0)
defaultBytes := make([]byte, partSize)
total := stream.GetSize()
left := total
partSize := int64(pre.Metadata.PartSize)
part := make([]byte, partSize)
count := int(total / partSize)
if total%partSize > 0 {
count++
}
md5s := make([]string, 0, count)
partNumber := 1
for left > 0 {
if utils.IsCanceled(ctx) {
return ctx.Err()
}
if left > int64(partSize) {
part = defaultBytes
} else {
part = make([]byte, left)
if left < partSize {
part = part[:left]
}
_, err := io.ReadFull(tempFile, part)
n, err := io.ReadFull(stream, part)
if err != nil {
return err
}
left -= int64(len(part))
left -= int64(n)
log.Debugf("left: %d", left)
reader := driver.NewLimitedUploadStream(ctx, bytes.NewReader(part))
m, err := d.upPart(ctx, pre, stream.GetMimetype(), partNumber, reader)

View File

@ -125,7 +125,6 @@ func (d *QuarkUCTV) List(ctx context.Context, dir model.Obj, args model.ListArgs
}
func (d *QuarkUCTV) Link(ctx context.Context, file model.Obj, args model.LinkArgs) (*model.Link, error) {
files := &model.Link{}
var fileLink FileLink
_, err := d.request(ctx, "/file", "GET", func(req *resty.Request) {
req.SetQueryParams(map[string]string{
@ -139,8 +138,12 @@ func (d *QuarkUCTV) Link(ctx context.Context, file model.Obj, args model.LinkArg
if err != nil {
return nil, err
}
files.URL = fileLink.Data.DownloadURL
return files, nil
return &model.Link{
URL: fileLink.Data.DownloadURL,
Concurrency: 3,
PartSize: 10 * utils.MB,
}, nil
}
func (d *QuarkUCTV) MakeDir(ctx context.Context, parentDir model.Obj, dirName string) (model.Obj, error) {

View File

@ -316,7 +316,7 @@ func (d *Quqi) Put(ctx context.Context, dstDir model.Obj, stream model.FileStrea
// if the file already exists in Quqi server, there is no need to actually upload it
if uploadInitResp.Data.Exist {
// the file name returned by Quqi does not include the extension name
nodeName, nodeExt := uploadInitResp.Data.NodeName, rawExt(stream.GetName())
nodeName, nodeExt := uploadInitResp.Data.NodeName, utils.Ext(stream.GetName())
if nodeExt != "" {
nodeName = nodeName + "." + nodeExt
}
@ -432,7 +432,7 @@ func (d *Quqi) Put(ctx context.Context, dstDir model.Obj, stream model.FileStrea
return nil, err
}
// the file name returned by Quqi does not include the extension name
nodeName, nodeExt := uploadFinishResp.Data.NodeName, rawExt(stream.GetName())
nodeName, nodeExt := uploadFinishResp.Data.NodeName, utils.Ext(stream.GetName())
if nodeExt != "" {
nodeName = nodeName + "." + nodeExt
}

View File

@ -9,7 +9,6 @@ import (
"io"
"net/http"
"net/url"
stdpath "path"
"strings"
"time"
@ -115,16 +114,6 @@ func (d *Quqi) checkLogin() bool {
return true
}
// rawExt keeps the original letter case of the extension
func rawExt(name string) string {
ext := stdpath.Ext(name)
if strings.HasPrefix(ext, ".") {
ext = ext[1:]
}
return ext
}
// decryptKey derives the key
func decryptKey(encodeKey string) []byte {
// Strip illegal characters

View File

@ -12,6 +12,7 @@ import (
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/utils"
hash_extend "github.com/alist-org/alist/v3/pkg/utils/hash"
"github.com/aws/aws-sdk-go/aws"
@ -44,26 +45,29 @@ func (x *Thunder) Init(ctx context.Context) (err error) {
Common: &Common{
client: base.NewRestyClient(),
Algorithms: []string{
"HPxr4BVygTQVtQkIMwQH33ywbgYG5l4JoR",
"GzhNkZ8pOBsCY+7",
"v+l0ImTpG7c7/",
"e5ztohgVXNP",
"t",
"EbXUWyVVqQbQX39Mbjn2geok3/0WEkAVxeqhtx857++kjJiRheP8l77gO",
"o7dvYgbRMOpHXxCs",
"6MW8TD8DphmakaxCqVrfv7NReRRN7ck3KLnXBculD58MvxjFRqT+",
"kmo0HxCKVfmxoZswLB4bVA/dwqbVAYghSb",
"j",
"4scKJNdd7F27Hv7tbt",
"9uJNVj/wLmdwKrJaVj/omlQ",
"Oz64Lp0GigmChHMf/6TNfxx7O9PyopcczMsnf",
"Eb+L7Ce+Ej48u",
"jKY0",
"ASr0zCl6v8W4aidjPK5KHd1Lq3t+vBFf41dqv5+fnOd",
"wQlozdg6r1qxh0eRmt3QgNXOvSZO6q/GXK",
"gmirk+ciAvIgA/cxUUCema47jr/YToixTT+Q6O",
"5IiCoM9B1/788ntB",
"P07JH0h6qoM6TSUAK2aL9T5s2QBVeY9JWvalf",
"+oK0AN",
},
DeviceID: utils.GetMD5EncodeStr(x.Username + x.Password),
DeviceID: func() string {
if len(x.DeviceID) != 32 {
return utils.GetMD5EncodeStr(x.DeviceID)
}
return x.DeviceID
}(),
ClientID: "Xp6vsxz_7IYVw2BB",
ClientSecret: "Xp6vsy4tN9toTVdMSpomVdXpRmES",
ClientVersion: "7.51.0.8196",
ClientVersion: "8.31.0.9726",
PackageName: "com.xunlei.downloadprovider",
UserAgent: "ANDROID-com.xunlei.downloadprovider/7.51.0.8196 netWorkType/5G appid/40 deviceName/Xiaomi_M2004j7ac deviceModel/M2004J7AC OSVersion/12 protocolVersion/301 platformVersion/10 sdkVersion/220200 Oauth2Client/0.9 (Linux 4_14_186-perf-gddfs8vbb238b) (JAVA 0)",
UserAgent: "ANDROID-com.xunlei.downloadprovider/8.31.0.9726 netWorkType/5G appid/40 deviceName/Xiaomi_M2004j7ac deviceModel/M2004J7AC OSVersion/12 protocolVersion/301 platformVersion/10 sdkVersion/512000 Oauth2Client/0.9 (Linux 4_14_186-perf-gddfs8vbb238b) (JAVA 0)",
DownloadUserAgent: "Dalvik/2.1.0 (Linux; U; Android 12; M2004J7AC Build/SP1A.210812.016)",
refreshCTokenCk: func(token string) {
x.CaptchaToken = token
op.MustSaveDriverStorage(x)
@ -79,6 +83,8 @@ func (x *Thunder) Init(ctx context.Context) (err error) {
x.GetStorage().SetStatus(fmt.Sprintf("%+v", err.Error()))
op.MustSaveDriverStorage(x)
}
// Clear the trusted credit key
x.Addition.CreditKey = ""
}
x.SetTokenResp(token)
return err
@ -92,6 +98,17 @@ func (x *Thunder) Init(ctx context.Context) (err error) {
x.SetCaptchaToken(ctoekn)
}
if x.Addition.CreditKey != "" {
x.SetCreditKey(x.Addition.CreditKey)
}
if x.Addition.DeviceID != "" {
x.Common.DeviceID = x.Addition.DeviceID
} else {
x.Addition.DeviceID = x.Common.DeviceID
op.MustSaveDriverStorage(x)
}
// Guard against repeated logins
identity := x.GetIdentity()
if x.identity != identity || !x.IsLogin() {
@ -101,6 +118,8 @@ func (x *Thunder) Init(ctx context.Context) (err error) {
if err != nil {
return err
}
// Clear the trusted credit key
x.Addition.CreditKey = ""
x.SetTokenResp(token)
}
return nil
@ -160,6 +179,17 @@ func (x *ThunderExpert) Init(ctx context.Context) (err error) {
x.SetCaptchaToken(x.CaptchaToken)
}
if x.ExpertAddition.CreditKey != "" {
x.SetCreditKey(x.ExpertAddition.CreditKey)
}
if x.ExpertAddition.DeviceID != "" {
x.Common.DeviceID = x.ExpertAddition.DeviceID
} else {
x.ExpertAddition.DeviceID = x.Common.DeviceID
op.MustSaveDriverStorage(x)
}
// Signing method
if x.SignType == "captcha_sign" {
x.Common.Timestamp = x.Timestamp
@ -193,6 +223,8 @@ func (x *ThunderExpert) Init(ctx context.Context) (err error) {
if err != nil {
return err
}
// Clear the trusted credit key
x.ExpertAddition.CreditKey = ""
x.SetTokenResp(token)
x.SetRefreshTokenFunc(func() error {
token, err := x.XunLeiCommon.RefreshToken(x.TokenResp.RefreshToken)
@ -201,6 +233,8 @@ func (x *ThunderExpert) Init(ctx context.Context) (err error) {
if err != nil {
x.GetStorage().SetStatus(fmt.Sprintf("%+v", err.Error()))
}
// Clear the trusted credit key
x.ExpertAddition.CreditKey = ""
}
x.SetTokenResp(token)
op.MustSaveDriverStorage(x)
@ -232,7 +266,8 @@ func (x *ThunderExpert) SetTokenResp(token *TokenResp) {
type XunLeiCommon struct {
*Common
*TokenResp // login info
*TokenResp // login info
*CoreLoginResp // core login info
refreshTokenFunc func() error
}
@ -333,22 +368,17 @@ func (xc *XunLeiCommon) Remove(ctx context.Context, obj model.Obj) error {
}
func (xc *XunLeiCommon) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
hi := file.GetHash()
gcid := hi.GetHash(hash_extend.GCID)
gcid := file.GetHash().GetHash(hash_extend.GCID)
var err error
if len(gcid) < hash_extend.GCID.Width {
tFile, err := file.CacheFullInTempFile()
if err != nil {
return err
}
gcid, err = utils.HashFile(hash_extend.GCID, tFile, file.GetSize())
_, gcid, err = stream.CacheFullInTempFileAndHash(file, hash_extend.GCID, file.GetSize())
if err != nil {
return err
}
}
var resp UploadTaskResponse
_, err := xc.Request(FILE_API_URL, http.MethodPost, func(r *resty.Request) {
_, err = xc.Request(FILE_API_URL, http.MethodPost, func(r *resty.Request) {
r.SetContext(ctx)
r.SetBody(&base.Json{
"kind": FILE,
@ -437,6 +467,10 @@ func (xc *XunLeiCommon) SetTokenResp(tr *TokenResp) {
xc.TokenResp = tr
}
func (xc *XunLeiCommon) SetCoreTokenResp(tr *CoreLoginResp) {
xc.CoreLoginResp = tr
}
// Request carrying Authorization and CaptchaToken
func (xc *XunLeiCommon) Request(url string, method string, callback base.ReqCallback, resp interface{}) ([]byte, error) {
data, err := xc.Common.Request(url, method, func(req *resty.Request) {
@ -465,7 +499,7 @@ func (xc *XunLeiCommon) Request(url string, method string, callback base.ReqCall
}
return nil, err
case 9: // captcha token expired
if err = xc.RefreshCaptchaTokenAtLogin(GetAction(method, url), xc.UserID); err != nil {
if err = xc.RefreshCaptchaTokenAtLogin(GetAction(method, url), xc.TokenResp.UserID); err != nil {
return nil, err
}
default:
@ -497,20 +531,25 @@ func (xc *XunLeiCommon) RefreshToken(refreshToken string) (*TokenResp, error) {
// Login
func (xc *XunLeiCommon) Login(username, password string) (*TokenResp, error) {
url := XLUSER_API_URL + "/auth/signin"
err := xc.RefreshCaptchaTokenInLogin(GetAction(http.MethodPost, url), username)
// v3 login obtains the sessionID
sessionID, err := xc.CoreLogin(username, password)
if err != nil {
return nil, err
}
// v1 login obtains the token
url := XLUSER_API_URL + "/auth/signin/token"
if err = xc.RefreshCaptchaTokenInLogin(GetAction(http.MethodPost, url), username); err != nil {
return nil, err
}
var resp TokenResp
_, err = xc.Common.Request(url, http.MethodPost, func(req *resty.Request) {
req.SetPathParam("client_id", xc.ClientID)
req.SetBody(&SignInRequest{
CaptchaToken: xc.GetCaptchaToken(),
ClientID: xc.ClientID,
ClientSecret: xc.ClientSecret,
Username: username,
Password: password,
Provider: SignProvider,
SigninToken: sessionID,
})
}, &resp)
if err != nil {
@ -586,3 +625,48 @@ func (xc *XunLeiCommon) DeleteOfflineTasks(ctx context.Context, taskIDs []string
}
return nil
}
func (xc *XunLeiCommon) CoreLogin(username string, password string) (sessionID string, err error) {
url := XLUSER_API_BASE_URL + "/xluser.core.login/v3/login"
var resp CoreLoginResp
res, err := xc.Common.Request(url, http.MethodPost, func(req *resty.Request) {
req.SetHeader("User-Agent", "android-ok-http-client/xl-acc-sdk/version-5.0.12.512000")
req.SetBody(&CoreLoginRequest{
ProtocolVersion: "301",
SequenceNo: "1000012",
PlatformVersion: "10",
IsCompressed: "0",
Appid: APPID,
ClientVersion: "8.31.0.9726",
PeerID: "00000000000000000000000000000000",
AppName: "ANDROID-com.xunlei.downloadprovider",
SdkVersion: "512000",
Devicesign: generateDeviceSign(xc.DeviceID, xc.PackageName),
NetWorkType: "WIFI",
ProviderName: "NONE",
DeviceModel: "M2004J7AC",
DeviceName: "Xiaomi_M2004j7ac",
OSVersion: "12",
Creditkey: xc.GetCreditKey(),
Hl: "zh-CN",
UserName: username,
PassWord: password,
VerifyKey: "",
VerifyCode: "",
IsMd5Pwd: "0",
})
}, nil)
if err != nil {
return "", err
}
if err = utils.Json.Unmarshal(res, &resp); err != nil {
return "", err
}
xc.SetCoreTokenResp(&resp)
sessionID = resp.SessionID
return sessionID, nil
}

View File

@ -23,23 +23,25 @@ type ExpertAddition struct {
RefreshToken string `json:"refresh_token" required:"true" help:"login type is refresh_token,this is required"`
// Signing method 1
Algorithms string `json:"algorithms" required:"true" help:"sign type is algorithms,this is required" default:"HPxr4BVygTQVtQkIMwQH33ywbgYG5l4JoR,GzhNkZ8pOBsCY+7,v+l0ImTpG7c7/,e5ztohgVXNP,t,EbXUWyVVqQbQX39Mbjn2geok3/0WEkAVxeqhtx857++kjJiRheP8l77gO,o7dvYgbRMOpHXxCs,6MW8TD8DphmakaxCqVrfv7NReRRN7ck3KLnXBculD58MvxjFRqT+,kmo0HxCKVfmxoZswLB4bVA/dwqbVAYghSb,j,4scKJNdd7F27Hv7tbt"`
Algorithms string `json:"algorithms" required:"true" help:"sign type is algorithms,this is required" default:"9uJNVj/wLmdwKrJaVj/omlQ,Oz64Lp0GigmChHMf/6TNfxx7O9PyopcczMsnf,Eb+L7Ce+Ej48u,jKY0,ASr0zCl6v8W4aidjPK5KHd1Lq3t+vBFf41dqv5+fnOd,wQlozdg6r1qxh0eRmt3QgNXOvSZO6q/GXK,gmirk+ciAvIgA/cxUUCema47jr/YToixTT+Q6O,5IiCoM9B1/788ntB,P07JH0h6qoM6TSUAK2aL9T5s2QBVeY9JWvalf,+oK0AN"`
// Signing method 2
CaptchaSign string `json:"captcha_sign" required:"true" help:"sign type is captcha_sign,this is required"`
Timestamp string `json:"timestamp" required:"true" help:"sign type is captcha_sign,this is required"`
// Captcha
CaptchaToken string `json:"captcha_token"`
// Trusted credit key
CreditKey string `json:"credit_key" help:"credit key,used for login"`
// Required and affects login; determined by the signing method
DeviceID string `json:"device_id" required:"true" default:"9aa5c268e7bcfc197a9ad88e2fb330e5"`
DeviceID string `json:"device_id" default:""`
ClientID string `json:"client_id" required:"true" default:"Xp6vsxz_7IYVw2BB"`
ClientSecret string `json:"client_secret" required:"true" default:"Xp6vsy4tN9toTVdMSpomVdXpRmES"`
ClientVersion string `json:"client_version" required:"true" default:"7.51.0.8196"`
ClientVersion string `json:"client_version" required:"true" default:"8.31.0.9726"`
PackageName string `json:"package_name" required:"true" default:"com.xunlei.downloadprovider"`
// Does not affect login, but affects download speed
UserAgent string `json:"user_agent" required:"true" default:"ANDROID-com.xunlei.downloadprovider/7.51.0.8196 netWorkType/4G appid/40 deviceName/Xiaomi_M2004j7ac deviceModel/M2004J7AC OSVersion/12 protocolVersion/301 platformVersion/10 sdkVersion/220200 Oauth2Client/0.9 (Linux 4_14_186-perf-gdcf98eab238b) (JAVA 0)"`
UserAgent string `json:"user_agent" required:"true" default:"ANDROID-com.xunlei.downloadprovider/8.31.0.9726 netWorkType/5G appid/40 deviceName/Xiaomi_M2004j7ac deviceModel/M2004J7AC OSVersion/12 protocolVersion/301 platformVersion/10 sdkVersion/512000 Oauth2Client/0.9 (Linux 4_14_186-perf-gddfs8vbb238b) (JAVA 0)"`
DownloadUserAgent string `json:"download_user_agent" required:"true" default:"Dalvik/2.1.0 (Linux; U; Android 12; M2004J7AC Build/SP1A.210812.016)"`
// Prefer the video link over the download link
@ -74,6 +76,10 @@ type Addition struct {
Username string `json:"username" required:"true"`
Password string `json:"password" required:"true"`
CaptchaToken string `json:"captcha_token"`
// Trusted credit key
CreditKey string `json:"credit_key" help:"credit key,used for login"`
// Login device ID
DeviceID string `json:"device_id" default:""`
}
// Login fingerprint, used to decide whether to log in again

View File

@ -18,6 +18,10 @@ type ErrResp struct {
}
func (e *ErrResp) IsError() bool {
if e.ErrorMsg == "success" {
return false
}
return e.ErrorCode != 0 || e.ErrorMsg != "" || e.ErrorDescription != ""
}
@ -61,13 +65,79 @@ func (t *TokenResp) Token() string {
}
type SignInRequest struct {
CaptchaToken string `json:"captcha_token"`
ClientID string `json:"client_id"`
ClientSecret string `json:"client_secret"`
Username string `json:"username"`
Password string `json:"password"`
Provider string `json:"provider"`
SigninToken string `json:"signin_token"`
}
type CoreLoginRequest struct {
ProtocolVersion string `json:"protocolVersion"`
SequenceNo string `json:"sequenceNo"`
PlatformVersion string `json:"platformVersion"`
IsCompressed string `json:"isCompressed"`
Appid string `json:"appid"`
ClientVersion string `json:"clientVersion"`
PeerID string `json:"peerID"`
AppName string `json:"appName"`
SdkVersion string `json:"sdkVersion"`
Devicesign string `json:"devicesign"`
NetWorkType string `json:"netWorkType"`
ProviderName string `json:"providerName"`
DeviceModel string `json:"deviceModel"`
DeviceName string `json:"deviceName"`
OSVersion string `json:"OSVersion"`
Creditkey string `json:"creditkey"`
Hl string `json:"hl"`
UserName string `json:"userName"`
PassWord string `json:"passWord"`
VerifyKey string `json:"verifyKey"`
VerifyCode string `json:"verifyCode"`
IsMd5Pwd string `json:"isMd5Pwd"`
}
type CoreLoginResp struct {
Account string `json:"account"`
Creditkey string `json:"creditkey"`
/* Error string `json:"error"`
ErrorCode string `json:"errorCode"`
ErrorDescription string `json:"error_description"`*/
ExpiresIn int `json:"expires_in"`
IsCompressed string `json:"isCompressed"`
IsSetPassWord string `json:"isSetPassWord"`
KeepAliveMinPeriod string `json:"keepAliveMinPeriod"`
KeepAlivePeriod string `json:"keepAlivePeriod"`
LoginKey string `json:"loginKey"`
NickName string `json:"nickName"`
PlatformVersion string `json:"platformVersion"`
ProtocolVersion string `json:"protocolVersion"`
SecureKey string `json:"secureKey"`
SequenceNo string `json:"sequenceNo"`
SessionID string `json:"sessionID"`
Timestamp string `json:"timestamp"`
UserID string `json:"userID"`
UserName string `json:"userName"`
UserNewNo string `json:"userNewNo"`
Version string `json:"version"`
/* VipList []struct {
ExpireDate string `json:"expireDate"`
IsAutoDeduct string `json:"isAutoDeduct"`
IsVip string `json:"isVip"`
IsYear string `json:"isYear"`
PayID string `json:"payId"`
PayName string `json:"payName"`
Register string `json:"register"`
Vasid string `json:"vasid"`
VasType string `json:"vasType"`
VipDayGrow string `json:"vipDayGrow"`
VipGrow string `json:"vipGrow"`
VipLevel string `json:"vipLevel"`
Icon struct {
General string `json:"general"`
Small string `json:"small"`
} `json:"icon"`
} `json:"vipList"`*/
}
/*
@ -251,3 +321,29 @@ type Params struct {
PredictSpeed string `json:"predict_speed"`
PredictType string `json:"predict_type"`
}
// LoginReviewResp is the login verification response
type LoginReviewResp struct {
Creditkey string `json:"creditkey"`
Error string `json:"error"`
ErrorCode string `json:"errorCode"`
ErrorDesc string `json:"errorDesc"`
ErrorDescURL string `json:"errorDescUrl"`
ErrorIsRetry int `json:"errorIsRetry"`
ErrorDescription string `json:"error_description"`
IsCompressed string `json:"isCompressed"`
PlatformVersion string `json:"platformVersion"`
ProtocolVersion string `json:"protocolVersion"`
Reviewurl string `json:"reviewurl"`
SequenceNo string `json:"sequenceNo"`
UserID string `json:"userID"`
VerifyType string `json:"verifyType"`
}
// ReviewData holds the verification data
type ReviewData struct {
Creditkey string `json:"creditkey"`
Reviewurl string `json:"reviewurl"`
Deviceid string `json:"deviceid"`
Devicesign string `json:"devicesign"`
}

View File

@ -1,8 +1,10 @@
package thunder
import (
"crypto/md5"
"crypto/sha1"
"encoding/hex"
"encoding/json"
"fmt"
"io"
"net/http"
@ -15,10 +17,11 @@ import (
)
const (
API_URL = "https://api-pan.xunlei.com/drive/v1"
FILE_API_URL = API_URL + "/files"
TASK_API_URL = API_URL + "/tasks"
XLUSER_API_URL = "https://xluser-ssl.xunlei.com/v1"
API_URL = "https://api-pan.xunlei.com/drive/v1"
FILE_API_URL = API_URL + "/files"
TASK_API_URL = API_URL + "/tasks"
XLUSER_API_BASE_URL = "https://xluser-ssl.xunlei.com"
XLUSER_API_URL = XLUSER_API_BASE_URL + "/v1"
)
const (
@ -34,6 +37,12 @@ const (
UPLOAD_TYPE_URL = "UPLOAD_TYPE_URL"
)
const (
SignProvider = "access_end_point_token"
APPID = "40"
APPKey = "34a062aaa22f906fca4fefe9fb3a3021"
)
func GetAction(method string, url string) string {
urlpath := regexp.MustCompile(`://[^/]+((/[^/\s?#]+)*)`).FindStringSubmatch(url)[1]
return method + ":" + urlpath
@ -44,6 +53,8 @@ type Common struct {
captchaToken string
creditKey string
// Signing options: pick one of the two
Algorithms []string
Timestamp, CaptchaSign string
@ -69,6 +80,13 @@ func (c *Common) GetCaptchaToken() string {
return c.captchaToken
}
func (c *Common) SetCreditKey(creditKey string) {
c.creditKey = creditKey
}
func (c *Common) GetCreditKey() string {
return c.creditKey
}
// Refresh the captcha token (after login)
func (c *Common) RefreshCaptchaTokenAtLogin(action, userID string) error {
metas := map[string]string{
@ -170,12 +188,53 @@ func (c *Common) Request(url, method string, callback base.ReqCallback, resp int
var erron ErrResp
utils.Json.Unmarshal(res.Body(), &erron)
if erron.IsError() {
// review_panel means SMS verification is required
if erron.ErrorMsg == "review_panel" {
return nil, c.getReviewData(res)
}
return nil, &erron
}
return res.Body(), nil
}
// Fetch the data needed for verification
func (c *Common) getReviewData(res *resty.Response) error {
var reviewResp LoginReviewResp
var reviewData ReviewData
if err := utils.Json.Unmarshal(res.Body(), &reviewResp); err != nil {
return err
}
deviceSign := generateDeviceSign(c.DeviceID, c.PackageName)
reviewData = ReviewData{
Creditkey: reviewResp.Creditkey,
Reviewurl: reviewResp.Reviewurl + "&deviceid=" + deviceSign,
Deviceid: deviceSign,
Devicesign: deviceSign,
}
// Marshal reviewData to an indented JSON string
reviewDataJSON, _ := json.MarshalIndent(reviewData, "", " ")
//reviewDataJSON, _ := json.Marshal(reviewData)
return fmt.Errorf(`
<div style="font-family: Arial, sans-serif; padding: 15px; border-radius: 5px; border: 1px solid #e0e0e0;>
<h3 style="color: #d9534f; margin-top: 0;">
<span style="font-size: 16px;">🔒 本次登录需要验证</span><br>
<span style="font-size: 14px; font-weight: normal; color: #666;">This login requires verification</span>
</h3>
<p style="font-size: 14px; margin-bottom: 15px;">下面是验证所需要的数据具体使用方法请参照对应的驱动文档<br>
<span style="color: #666; font-size: 13px;">Below are the relevant verification data. For specific usage methods, please refer to the corresponding driver documentation.</span></p>
<div style="border: 1px solid #ddd; border-radius: 4px; padding: 10px; overflow-x: auto; font-family: 'Courier New', monospace; font-size: 13px;">
<pre style="margin: 0; white-space: pre-wrap;"><code>%s</code></pre>
</div>
</div>`, string(reviewDataJSON))
}
// Compute the file's GCID
func getGcid(r io.Reader, size int64) (string, error) {
calcBlockSize := func(j int64) int64 {
@ -201,3 +260,24 @@ func getGcid(r io.Reader, size int64) (string, error) {
}
return hex.EncodeToString(hash1.Sum(nil)), nil
}
func generateDeviceSign(deviceID, packageName string) string {
signatureBase := fmt.Sprintf("%s%s%s%s", deviceID, packageName, APPID, APPKey)
sha1Hash := sha1.New()
sha1Hash.Write([]byte(signatureBase))
sha1Result := sha1Hash.Sum(nil)
sha1String := hex.EncodeToString(sha1Result)
md5Hash := md5.New()
md5Hash.Write([]byte(sha1String))
md5Result := md5Hash.Sum(nil)
md5String := hex.EncodeToString(md5Result)
deviceSign := fmt.Sprintf("div101.%s%s", deviceID, md5String)
return deviceSign
}
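A hedged usage note: the sign is "div101." + deviceID + md5hex(sha1hex(deviceID + packageName + APPID + APPKey)); for example (illustrative values):

	sign := generateDeviceSign("9aa5c268e7bcfc197a9ad88e2fb330e5", "com.xunlei.downloadprovider")
	// sign == "div101.9aa5c268e7bcfc197a9ad88e2fb330e5" + <32-char hex MD5 digest>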

View File

@ -4,10 +4,15 @@ import (
"context"
"errors"
"fmt"
"io"
"net/http"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
streamPkg "github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/utils"
hash_extend "github.com/alist-org/alist/v3/pkg/utils/hash"
"github.com/aws/aws-sdk-go/aws"
@ -15,9 +20,6 @@ import (
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/s3/s3manager"
"github.com/go-resty/resty/v2"
"io"
"net/http"
"strings"
)
type ThunderBrowser struct {
@ -456,15 +458,10 @@ func (xc *XunLeiBrowserCommon) Remove(ctx context.Context, obj model.Obj) error
}
func (xc *XunLeiBrowserCommon) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
hi := stream.GetHash()
gcid := hi.GetHash(hash_extend.GCID)
gcid := stream.GetHash().GetHash(hash_extend.GCID)
var err error
if len(gcid) < hash_extend.GCID.Width {
tFile, err := stream.CacheFullInTempFile()
if err != nil {
return err
}
gcid, err = utils.HashFile(hash_extend.GCID, tFile, stream.GetSize())
_, gcid, err = streamPkg.CacheFullInTempFileAndHash(stream, hash_extend.GCID, stream.GetSize())
if err != nil {
return err
}
@ -481,7 +478,7 @@ func (xc *XunLeiBrowserCommon) Put(ctx context.Context, dstDir model.Obj, stream
}
var resp UploadTaskResponse
_, err := xc.Request(FILE_API_URL, http.MethodPost, func(r *resty.Request) {
_, err = xc.Request(FILE_API_URL, http.MethodPost, func(r *resty.Request) {
r.SetContext(ctx)
r.SetBody(&js)
}, &resp)

View File

@ -3,11 +3,15 @@ package thunderx
import (
"context"
"fmt"
"net/http"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/op"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/utils"
hash_extend "github.com/alist-org/alist/v3/pkg/utils/hash"
"github.com/aws/aws-sdk-go/aws"
@ -15,8 +19,6 @@ import (
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/s3/s3manager"
"github.com/go-resty/resty/v2"
"net/http"
"strings"
)
type ThunderX struct {
@ -364,22 +366,17 @@ func (xc *XunLeiXCommon) Remove(ctx context.Context, obj model.Obj) error {
}
func (xc *XunLeiXCommon) Put(ctx context.Context, dstDir model.Obj, file model.FileStreamer, up driver.UpdateProgress) error {
hi := file.GetHash()
gcid := hi.GetHash(hash_extend.GCID)
gcid := file.GetHash().GetHash(hash_extend.GCID)
var err error
if len(gcid) < hash_extend.GCID.Width {
tFile, err := file.CacheFullInTempFile()
if err != nil {
return err
}
gcid, err = utils.HashFile(hash_extend.GCID, tFile, file.GetSize())
_, gcid, err = stream.CacheFullInTempFileAndHash(file, hash_extend.GCID, file.GetSize())
if err != nil {
return err
}
}
var resp UploadTaskResponse
_, err := xc.Request(FILE_API_URL, http.MethodPost, func(r *resty.Request) {
_, err = xc.Request(FILE_API_URL, http.MethodPost, func(r *resty.Request) {
r.SetContext(ctx)
r.SetBody(&base.Json{
"kind": FILE,

View File

@ -243,7 +243,25 @@ func (d *Urls) PutURL(ctx context.Context, dstDir model.Obj, name, url string) (
}
func (d *Urls) Put(ctx context.Context, dstDir model.Obj, stream model.FileStreamer, up driver.UpdateProgress) error {
return errs.UploadNotSupported
if !d.Writable {
return errs.PermissionDenied
}
d.mutex.Lock()
defer d.mutex.Unlock()
node := GetNodeFromRootByPath(d.root, dstDir.GetPath()) // parent
if node == nil {
return errs.ObjectNotFound
}
if node.isFile() {
return errs.NotFolder
}
file, err := parseFileLine(stream.GetName(), d.HeadSize)
if err != nil {
return err
}
node.Children = append(node.Children, file)
d.updateStorage()
return nil
}
func (d *Urls) updateStorage() {

View File

@ -8,9 +8,7 @@ import (
"fmt"
"io"
"net/http"
"path"
"strconv"
"strings"
"github.com/alist-org/alist/v3/drivers/base"
"github.com/alist-org/alist/v3/internal/driver"
@ -151,7 +149,7 @@ func (d *Vtencent) ApplyUploadUGC(signature string, stream model.FileStreamer) (
form := base.Json{
"signature": signature,
"videoName": stream.GetName(),
"videoType": strings.ReplaceAll(path.Ext(stream.GetName()), ".", ""),
"videoType": utils.Ext(stream.GetName()),
"videoSize": stream.GetSize(),
}
var resps RspApplyUploadUGC

View File

@ -1,4 +1,4 @@
#!/bin/bash
#!/bin/sh
umask ${UMASK}

go.mod
View File

@ -1,8 +1,6 @@
module github.com/alist-org/alist/v3
go 1.23
toolchain go1.23.1
go 1.23.4
require (
github.com/KirCute/ftpserverlib-pasvportmap v1.25.0
@ -67,10 +65,10 @@ require (
github.com/xhofe/wopan-sdk-go v0.1.3
github.com/yeka/zip v0.0.0-20231116150916-03d6312748a9
github.com/zzzhr1990/go-common-entity v0.0.0-20221216044934-fd1c571e3a22
golang.org/x/crypto v0.31.0
golang.org/x/crypto v0.36.0
golang.org/x/exp v0.0.0-20240904232852-e7e105dedf7e
golang.org/x/image v0.19.0
golang.org/x/net v0.28.0
golang.org/x/net v0.38.0
golang.org/x/oauth2 v0.22.0
golang.org/x/time v0.8.0
google.golang.org/appengine v1.6.8
@ -81,13 +79,19 @@ require (
gorm.io/gorm v1.25.11
)
require (
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.17.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.0 // indirect
)
require (
github.com/STARRY-S/zip v0.2.1 // indirect
github.com/aymerick/douceur v0.2.0 // indirect
github.com/blevesearch/go-faiss v1.0.20 // indirect
github.com/blevesearch/zapx/v16 v16.1.5 // indirect
github.com/bodgit/plumbing v1.3.0 // indirect
github.com/bodgit/sevenzip v1.6.0 // indirect
github.com/bodgit/sevenzip v1.6.0
github.com/bodgit/windows v1.0.1 // indirect
github.com/bytedance/sonic/loader v0.1.1 // indirect
github.com/charmbracelet/x/ansi v0.2.3 // indirect
@ -107,14 +111,16 @@ require (
github.com/klauspost/pgzip v1.2.6 // indirect
github.com/kr/text v0.2.0 // indirect
github.com/matoous/go-nanoid/v2 v2.1.0 // indirect
github.com/microcosm-cc/bluemonday v1.0.27
github.com/nwaples/rardecode/v2 v2.0.0-beta.4.0.20241112120701-034e449c6e78 // indirect
github.com/microcosm-cc/bluemonday v1.0.27
github.com/nwaples/rardecode/v2 v2.0.0-beta.4.0.20241112120701-034e449c6e78
github.com/sorairolake/lzip-go v0.3.5 // indirect
github.com/taruti/bytepool v0.0.0-20160310082835-5e3a9ea56543 // indirect
github.com/therootcompany/xz v1.0.1 // indirect
github.com/ulikunitz/xz v0.5.12 // indirect
github.com/yuin/goldmark v1.7.8
go4.org v0.0.0-20230225012048-214862532bf5 // indirect
github.com/xhofe/115-sdk-go v0.1.5
github.com/yuin/goldmark v1.7.8
go4.org v0.0.0-20230225012048-214862532bf5
resty.dev/v3 v3.0.0-beta.2 // indirect
)
require (
@ -246,10 +252,10 @@ require (
github.com/yusufpapurcu/wmi v1.2.4 // indirect
go.etcd.io/bbolt v1.3.8 // indirect
golang.org/x/arch v0.8.0 // indirect
golang.org/x/sync v0.10.0
golang.org/x/sys v0.28.0 // indirect
golang.org/x/term v0.27.0 // indirect
golang.org/x/text v0.21.0
golang.org/x/sync v0.12.0
golang.org/x/sys v0.31.0 // indirect
golang.org/x/term v0.30.0 // indirect
golang.org/x/text v0.23.0
golang.org/x/tools v0.24.0 // indirect
google.golang.org/api v0.169.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240604185151-ef581f913117 // indirect
@ -261,3 +267,5 @@ require (
gopkg.in/yaml.v3 v3.0.1 // indirect
lukechampine.com/blake3 v1.1.7 // indirect
)
// replace github.com/xhofe/115-sdk-go => ../../xhofe/115-sdk-go

go.sum
View File

@ -19,6 +19,12 @@ cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.17.0 h1:g0EZJwz7xkXQiZAI5xi9f3WWFYBlX1CPTrR+NDToRkQ=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.17.0/go.mod h1:XCW7KnZet0Opnr7HccfUw1PLc4CjHqpcaxW8DHklNkQ=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 h1:ywEEhmNahHBihViHepv3xPBn1663uRv2t2q/ESv9seY=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0/go.mod h1:iZDifYGJTIgIIkYRNWPENUnqx6bJ2xnSDFI2tjwZNuY=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.0 h1:UXT0o77lXQrikd1kgwIPQOUect7EoR/+sbP4wQKdzxM=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.0/go.mod h1:cTvi54pg19DoT07ekoeMgE/taAwNtCShVeZqA+Iv2xI=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
@ -606,6 +612,8 @@ github.com/winfsp/cgofuse v1.5.1-0.20230130140708-f87f5db493b5 h1:jxZvjx8Ve5sOXo
github.com/winfsp/cgofuse v1.5.1-0.20230130140708-f87f5db493b5/go.mod h1:uxjoF2jEYT3+x+vC2KJddEGdk/LU8pRowXmyVMHSV5I=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/xhofe/115-sdk-go v0.1.5 h1:2+E92l6AX0+ABAkrdmDa9PE5ONN7wVLCaKkK80zETOg=
github.com/xhofe/115-sdk-go v0.1.5/go.mod h1:MIdpe/4Kw4ODrPld7E11bANc4JsCuXcm5ZZBHSiOI0U=
github.com/xhofe/gsync v0.0.0-20230917091818-2111ceb38a25 h1:eDfebW/yfq9DtG9RO3KP7BT2dot2CvJGIvrB0NEoDXI=
github.com/xhofe/gsync v0.0.0-20230917091818-2111ceb38a25/go.mod h1:fH4oNm5F9NfI5dLi0oIMtsLNKQOirUDbEMCIBb/7SU0=
github.com/xhofe/tache v0.1.5 h1:ezDcgim7tj7KNMXliQsmf8BJQbaZtitfyQA9Nt+B4WM=
@ -663,8 +671,8 @@ golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliY
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/crypto v0.25.0/go.mod h1:T+wALwcMOSE0kXgUAnPAHqTLW+XHgcELELW8VaDgm/M=
golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/crypto v0.36.0 h1:AnAEvhDddvBdpY+uR+MyHmuZzzNqXSe/GvuDeob5L34=
golang.org/x/crypto v0.36.0/go.mod h1:Y4J0ReaxCR1IMaabaSMugxJES1EpwhBHhv2bDHklZvc=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@ -731,8 +739,10 @@ golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/net v0.27.0/go.mod h1:dDi0PyhWNoiUOrAS8uXv/vnScO4wnHQO4mj9fn/RytE=
golang.org/x/net v0.28.0 h1:a9JDOJc5GMUJ0+UDqmLT86WiEy7iWyIhz8gz8E4e5hE=
golang.org/x/net v0.28.0/go.mod h1:yqtgsTWOOnlGLG9GFRrK3++bGOUEkNBoHZc8MEDWPNg=
golang.org/x/net v0.37.0 h1:1zLorHbz+LYj7MQlSf1+2tPIIgibq2eL5xkrGk6f+2c=
golang.org/x/net v0.37.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
golang.org/x/net v0.38.0 h1:vRMAPTMaeGqVhG5QyLJHqNDwecKTomGeqbnfZyKlBI8=
golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@ -752,8 +762,8 @@ golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.12.0 h1:MHc5BpPuC30uJk597Ri8TV3CNZcTLu6B6z4lJy+g6Jw=
golang.org/x/sync v0.12.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@ -793,8 +803,8 @@ golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.19.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.22.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.31.0 h1:ioabZlmFYtWhL+TRYpcnNlLwhyxaM9kWTDEmfnprqik=
golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2/go.mod h1:TeRTkGYfJXctD9OcfyVLyj2J3IxLnKwHJR8f4D8a3YE=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
@ -807,8 +817,8 @@ golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
golang.org/x/term v0.22.0/go.mod h1:F3qCibpT5AMpCRfhfT53vVJwhLtIVHhB9XDjfFvnMI4=
golang.org/x/term v0.27.0 h1:WP60Sv1nlK1T6SupCHbXzSaN0b9wUmsPoRS9b61A23Q=
golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM=
golang.org/x/term v0.30.0 h1:PQ39fJZ+mfadBm0y5WlL4vlM7Sx1Hgf13sMIY2+QS9Y=
golang.org/x/term v0.30.0/go.mod h1:NYYFdzHoI5wRh/h5tDMdMqCqPJZEuNqVR5xJLd/n67g=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@ -825,8 +835,8 @@ golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.16.0/go.mod h1:GhwF1Be+LQoKShO3cGOHzqOgRrGaYc9AvblQOmPVHnI=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.23.0 h1:D71I7dUrlY+VX0gQShAThNGHFxZ13dGLBHQLVl1mJlY=
golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.6.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
@ -953,6 +963,8 @@ honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt
lukechampine.com/blake3 v1.1.7 h1:GgRMhmdsuK8+ii6UZFDL8Nb+VyMwadAgcJyfYHxG6n0=
lukechampine.com/blake3 v1.1.7/go.mod h1:tkKEOtDkNtklkXtLNEOGNq5tcV90tJiA1vAA12R78LA=
nullprogram.com/x/optparse v1.0.0/go.mod h1:KdyPE+Igbe0jQUrVfMqDMeJQIJZEuyV7pjYmp6pbG50=
resty.dev/v3 v3.0.0-beta.2 h1:xu4mGAdbCLuc3kbk7eddWfWm4JfhwDtdapwss5nCjnQ=
resty.dev/v3 v3.0.0-beta.2/go.mod h1:OgkqiPvTDtOuV4MGZuUDhwOpkY8enjOsjjMzeOHefy4=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=

View File

@ -3,5 +3,7 @@ package archive
import (
_ "github.com/alist-org/alist/v3/internal/archive/archives"
_ "github.com/alist-org/alist/v3/internal/archive/iso9660"
_ "github.com/alist-org/alist/v3/internal/archive/rardecode"
_ "github.com/alist-org/alist/v3/internal/archive/sevenzip"
_ "github.com/alist-org/alist/v3/internal/archive/zip"
)

View File

@ -16,14 +16,18 @@ import (
type Archives struct {
}
func (*Archives) AcceptedExtensions() []string {
func (Archives) AcceptedExtensions() []string {
return []string{
".br", ".bz2", ".gz", ".lz4", ".lz", ".sz", ".s2", ".xz", ".zz", ".zst", ".tar", ".rar", ".7z",
".br", ".bz2", ".gz", ".lz4", ".lz", ".sz", ".s2", ".xz", ".zz", ".zst", ".tar",
}
}
func (*Archives) GetMeta(ss *stream.SeekableStream, args model.ArchiveArgs) (model.ArchiveMeta, error) {
fsys, err := getFs(ss, args)
func (Archives) AcceptedMultipartExtensions() map[string]tool.MultipartExtension {
return map[string]tool.MultipartExtension{}
}
func (Archives) GetMeta(ss []*stream.SeekableStream, args model.ArchiveArgs) (model.ArchiveMeta, error) {
fsys, err := getFs(ss[0], args)
if err != nil {
return nil, err
}
@ -47,8 +51,8 @@ func (*Archives) GetMeta(ss *stream.SeekableStream, args model.ArchiveArgs) (mod
}, nil
}
func (*Archives) List(ss *stream.SeekableStream, args model.ArchiveInnerArgs) ([]model.Obj, error) {
fsys, err := getFs(ss, args.ArchiveArgs)
func (Archives) List(ss []*stream.SeekableStream, args model.ArchiveInnerArgs) ([]model.Obj, error) {
fsys, err := getFs(ss[0], args.ArchiveArgs)
if err != nil {
return nil, err
}
@ -69,8 +73,8 @@ func (*Archives) List(ss *stream.SeekableStream, args model.ArchiveInnerArgs) ([
})
}
func (*Archives) Extract(ss *stream.SeekableStream, args model.ArchiveInnerArgs) (io.ReadCloser, int64, error) {
fsys, err := getFs(ss, args.ArchiveArgs)
func (Archives) Extract(ss []*stream.SeekableStream, args model.ArchiveInnerArgs) (io.ReadCloser, int64, error) {
fsys, err := getFs(ss[0], args.ArchiveArgs)
if err != nil {
return nil, 0, err
}
@ -85,8 +89,8 @@ func (*Archives) Extract(ss *stream.SeekableStream, args model.ArchiveInnerArgs)
return file, stat.Size(), nil
}
func (*Archives) Decompress(ss *stream.SeekableStream, outputPath string, args model.ArchiveInnerArgs, up model.UpdateProgress) error {
fsys, err := getFs(ss, args.ArchiveArgs)
func (Archives) Decompress(ss []*stream.SeekableStream, outputPath string, args model.ArchiveInnerArgs, up model.UpdateProgress) error {
fsys, err := getFs(ss[0], args.ArchiveArgs)
if err != nil {
return err
}
@ -133,5 +137,5 @@ func (*Archives) Decompress(ss *stream.SeekableStream, outputPath string, args m
var _ tool.Tool = (*Archives)(nil)
func init() {
tool.RegisterTool(&Archives{})
tool.RegisterTool(Archives{})
}

View File

@ -10,6 +10,7 @@ import (
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/mholt/archives"
)
@ -73,7 +74,7 @@ func decompress(fsys fs2.FS, filePath, targetPath string, up model.UpdateProgres
return err
}
defer f.Close()
_, err = io.Copy(f, &stream.ReaderUpdatingProgress{
_, err = utils.CopyWithBuffer(f, &stream.ReaderUpdatingProgress{
Reader: &stream.SimpleReaderWithSize{
Reader: rc,
Size: stat.Size(),

View File

@ -14,19 +14,23 @@ import (
type ISO9660 struct {
}
func (t *ISO9660) AcceptedExtensions() []string {
func (ISO9660) AcceptedExtensions() []string {
return []string{".iso"}
}
func (t *ISO9660) GetMeta(ss *stream.SeekableStream, args model.ArchiveArgs) (model.ArchiveMeta, error) {
func (ISO9660) AcceptedMultipartExtensions() map[string]tool.MultipartExtension {
return map[string]tool.MultipartExtension{}
}
func (ISO9660) GetMeta(ss []*stream.SeekableStream, args model.ArchiveArgs) (model.ArchiveMeta, error) {
return &model.ArchiveMetaInfo{
Comment: "",
Encrypted: false,
}, nil
}
func (t *ISO9660) List(ss *stream.SeekableStream, args model.ArchiveInnerArgs) ([]model.Obj, error) {
img, err := getImage(ss)
func (ISO9660) List(ss []*stream.SeekableStream, args model.ArchiveInnerArgs) ([]model.Obj, error) {
img, err := getImage(ss[0])
if err != nil {
return nil, err
}
@ -48,8 +52,8 @@ func (t *ISO9660) List(ss *stream.SeekableStream, args model.ArchiveInnerArgs) (
return ret, nil
}
func (t *ISO9660) Extract(ss *stream.SeekableStream, args model.ArchiveInnerArgs) (io.ReadCloser, int64, error) {
img, err := getImage(ss)
func (ISO9660) Extract(ss []*stream.SeekableStream, args model.ArchiveInnerArgs) (io.ReadCloser, int64, error) {
img, err := getImage(ss[0])
if err != nil {
return nil, 0, err
}
@ -63,8 +67,8 @@ func (t *ISO9660) Extract(ss *stream.SeekableStream, args model.ArchiveInnerArgs
return io.NopCloser(obj.Reader()), obj.Size(), nil
}
func (t *ISO9660) Decompress(ss *stream.SeekableStream, outputPath string, args model.ArchiveInnerArgs, up model.UpdateProgress) error {
img, err := getImage(ss)
func (ISO9660) Decompress(ss []*stream.SeekableStream, outputPath string, args model.ArchiveInnerArgs, up model.UpdateProgress) error {
img, err := getImage(ss[0])
if err != nil {
return err
}
@ -92,5 +96,5 @@ func (t *ISO9660) Decompress(ss *stream.SeekableStream, outputPath string, args
var _ tool.Tool = (*ISO9660)(nil)
func init() {
tool.RegisterTool(&ISO9660{})
tool.RegisterTool(ISO9660{})
}

View File

@ -1,14 +1,15 @@
package iso9660
import (
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/kdomanski/iso9660"
"io"
"os"
stdpath "path"
"strings"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/alist-org/alist/v3/pkg/utils"
"github.com/kdomanski/iso9660"
)
func getImage(ss *stream.SeekableStream) (*iso9660.Image, error) {
@ -66,7 +67,7 @@ func decompress(f *iso9660.File, path string, up model.UpdateProgress) error {
return err
}
defer file.Close()
_, err = io.Copy(file, &stream.ReaderUpdatingProgress{
_, err = utils.CopyWithBuffer(file, &stream.ReaderUpdatingProgress{
Reader: &stream.SimpleReaderWithSize{
Reader: f.Reader(),
Size: f.Size(),

View File

@ -0,0 +1,140 @@
package rardecode
import (
"github.com/alist-org/alist/v3/internal/archive/tool"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/nwaples/rardecode/v2"
"io"
"os"
stdpath "path"
"strings"
)
type RarDecoder struct{}
func (RarDecoder) AcceptedExtensions() []string {
return []string{".rar"}
}
func (RarDecoder) AcceptedMultipartExtensions() map[string]tool.MultipartExtension {
return map[string]tool.MultipartExtension{
".part1.rar": {".part%d.rar", 2},
}
}
func (RarDecoder) GetMeta(ss []*stream.SeekableStream, args model.ArchiveArgs) (model.ArchiveMeta, error) {
l, err := list(ss, args.Password)
if err != nil {
return nil, err
}
_, tree := tool.GenerateMetaTreeFromFolderTraversal(l)
return &model.ArchiveMetaInfo{
Comment: "",
Encrypted: false,
Tree: tree,
}, nil
}
func (RarDecoder) List(ss []*stream.SeekableStream, args model.ArchiveInnerArgs) ([]model.Obj, error) {
return nil, errs.NotSupport
}
func (RarDecoder) Extract(ss []*stream.SeekableStream, args model.ArchiveInnerArgs) (io.ReadCloser, int64, error) {
reader, err := getReader(ss, args.Password)
if err != nil {
return nil, 0, err
}
innerPath := strings.TrimPrefix(args.InnerPath, "/")
for {
var header *rardecode.FileHeader
header, err = reader.Next()
if err == io.EOF {
break
}
if err != nil {
return nil, 0, err
}
if header.Name == innerPath {
if header.IsDir {
break
}
return io.NopCloser(reader), header.UnPackedSize, nil
}
}
return nil, 0, errs.ObjectNotFound
}
func (RarDecoder) Decompress(ss []*stream.SeekableStream, outputPath string, args model.ArchiveInnerArgs, up model.UpdateProgress) error {
reader, err := getReader(ss, args.Password)
if err != nil {
return err
}
if args.InnerPath == "/" {
for {
var header *rardecode.FileHeader
header, err = reader.Next()
if err == io.EOF {
break
}
if err != nil {
return err
}
name := header.Name
if header.IsDir {
name = name + "/"
}
err = decompress(reader, header, name, outputPath)
if err != nil {
return err
}
}
} else {
innerPath := strings.TrimPrefix(args.InnerPath, "/")
innerBase := stdpath.Base(innerPath)
createdBaseDir := false
for {
var header *rardecode.FileHeader
header, err = reader.Next()
if err == io.EOF {
break
}
if err != nil {
return err
}
name := header.Name
if header.IsDir {
name = name + "/"
}
if name == innerPath {
err = _decompress(reader, header, outputPath, up)
if err != nil {
return err
}
break
} else if strings.HasPrefix(name, innerPath+"/") {
targetPath := stdpath.Join(outputPath, innerBase)
if !createdBaseDir {
err = os.Mkdir(targetPath, 0700)
if err != nil {
return err
}
createdBaseDir = true
}
restPath := strings.TrimPrefix(name, innerPath+"/")
err = decompress(reader, header, restPath, targetPath)
if err != nil {
return err
}
}
}
}
return nil
}
var _ tool.Tool = (*RarDecoder)(nil)
func init() {
tool.RegisterTool(RarDecoder{})
}

View File

@ -0,0 +1,225 @@
package rardecode
import (
"fmt"
"github.com/alist-org/alist/v3/internal/archive/tool"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/nwaples/rardecode/v2"
"io"
"io/fs"
"os"
stdpath "path"
"sort"
"strings"
"time"
)
type VolumeFile struct {
stream.SStreamReadAtSeeker
name string
}
func (v *VolumeFile) Name() string {
return v.name
}
func (v *VolumeFile) Size() int64 {
return v.SStreamReadAtSeeker.GetRawStream().GetSize()
}
func (v *VolumeFile) Mode() fs.FileMode {
return 0644
}
func (v *VolumeFile) ModTime() time.Time {
return v.SStreamReadAtSeeker.GetRawStream().ModTime()
}
func (v *VolumeFile) IsDir() bool {
return false
}
func (v *VolumeFile) Sys() any {
return nil
}
func (v *VolumeFile) Stat() (fs.FileInfo, error) {
return v, nil
}
func (v *VolumeFile) Close() error {
return nil
}
type VolumeFs struct {
parts map[string]*VolumeFile
}
func (v *VolumeFs) Open(name string) (fs.File, error) {
file, ok := v.parts[name]
if !ok {
return nil, fs.ErrNotExist
}
return file, nil
}
func makeOpts(ss []*stream.SeekableStream) (string, rardecode.Option, error) {
if len(ss) == 1 {
reader, err := stream.NewReadAtSeeker(ss[0], 0)
if err != nil {
return "", nil, err
}
fileName := "file.rar"
fsys := &VolumeFs{parts: map[string]*VolumeFile{
fileName: {SStreamReadAtSeeker: reader, name: fileName},
}}
return fileName, rardecode.FileSystem(fsys), nil
} else {
parts := make(map[string]*VolumeFile, len(ss))
for i, s := range ss {
reader, err := stream.NewReadAtSeeker(s, 0)
if err != nil {
return "", nil, err
}
fileName := fmt.Sprintf("file.part%d.rar", i+1)
parts[fileName] = &VolumeFile{SStreamReadAtSeeker: reader, name: fileName}
}
return "file.part1.rar", rardecode.FileSystem(&VolumeFs{parts: parts}), nil
}
}
type WrapReader struct {
files []*rardecode.File
}
func (r *WrapReader) Files() []tool.SubFile {
ret := make([]tool.SubFile, 0, len(r.files))
for _, f := range r.files {
ret = append(ret, &WrapFile{File: f})
}
return ret
}
type WrapFile struct {
*rardecode.File
}
func (f *WrapFile) Name() string {
if f.File.IsDir {
return f.File.Name + "/"
}
return f.File.Name
}
func (f *WrapFile) FileInfo() fs.FileInfo {
return &WrapFileInfo{File: f.File}
}
type WrapFileInfo struct {
*rardecode.File
}
func (f *WrapFileInfo) Name() string {
return stdpath.Base(f.File.Name)
}
func (f *WrapFileInfo) Size() int64 {
return f.File.UnPackedSize
}
func (f *WrapFileInfo) ModTime() time.Time {
return f.File.ModificationTime
}
func (f *WrapFileInfo) IsDir() bool {
return f.File.IsDir
}
func (f *WrapFileInfo) Sys() any {
return nil
}
func list(ss []*stream.SeekableStream, password string) (*WrapReader, error) {
fileName, fsOpt, err := makeOpts(ss)
if err != nil {
return nil, err
}
opts := []rardecode.Option{fsOpt}
if password != "" {
opts = append(opts, rardecode.Password(password))
}
files, err := rardecode.List(fileName, opts...)
if err != nil {
return nil, filterPassword(err)
}
// rardecode does not guarantee that a parent directory precedes its
// children in the listing. A parent path is always shorter than any of
// its child paths, so sorting by name length puts parents first.
sort.Slice(files, func(i, j int) bool {
return len(files[i].Name) < len(files[j].Name)
})
return &WrapReader{files: files}, nil
}
func getReader(ss []*stream.SeekableStream, password string) (*rardecode.Reader, error) {
fileName, fsOpt, err := makeOpts(ss)
if err != nil {
return nil, err
}
opts := []rardecode.Option{fsOpt}
if password != "" {
opts = append(opts, rardecode.Password(password))
}
rc, err := rardecode.OpenReader(fileName, opts...)
if err != nil {
return nil, filterPassword(err)
}
ss[0].Closers.Add(rc)
return &rc.Reader, nil
}
func decompress(reader *rardecode.Reader, header *rardecode.FileHeader, filePath, outputPath string) error {
targetPath := outputPath
dir, base := stdpath.Split(filePath)
if dir != "" {
targetPath = stdpath.Join(targetPath, dir)
err := os.MkdirAll(targetPath, 0700)
if err != nil {
return err
}
}
if base != "" {
err := _decompress(reader, header, targetPath, func(_ float64) {})
if err != nil {
return err
}
}
return nil
}
func _decompress(reader *rardecode.Reader, header *rardecode.FileHeader, targetPath string, up model.UpdateProgress) error {
f, err := os.OpenFile(stdpath.Join(targetPath, stdpath.Base(header.Name)), os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0600)
if err != nil {
return err
}
defer func() { _ = f.Close() }()
_, err = io.Copy(f, &stream.ReaderUpdatingProgress{
Reader: &stream.SimpleReaderWithSize{
Reader: reader,
Size: header.UnPackedSize,
},
UpdateProgress: up,
})
if err != nil {
return err
}
return nil
}
func filterPassword(err error) error {
if err != nil && strings.Contains(err.Error(), "password") {
return errs.WrongArchivePassword
}
return err
}
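rardecode/v2 opens archives by file name rather than from a reader, so the code above never hands it the streams directly: each SeekableStream is wrapped in a VolumeFile and served from the in-memory VolumeFs under a synthetic name chosen so that multi-volume detection works. A usage sketch; newStreamFile is a hypothetical stand-in for the wrapping makeOpts performs:

fsys := &VolumeFs{parts: map[string]*VolumeFile{
	"file.part1.rar": newStreamFile(part1), // hypothetical helper
	"file.part2.rar": newStreamFile(part2),
}}
rc, err := rardecode.OpenReader("file.part1.rar", rardecode.FileSystem(fsys))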

View File

@ -0,0 +1,72 @@
package sevenzip
import (
"io"
"strings"
"github.com/alist-org/alist/v3/internal/archive/tool"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
)
type SevenZip struct{}
func (SevenZip) AcceptedExtensions() []string {
return []string{".7z"}
}
func (SevenZip) AcceptedMultipartExtensions() map[string]tool.MultipartExtension {
return map[string]tool.MultipartExtension{
".7z.001": {".7z.%.3d", 2},
}
}
func (SevenZip) GetMeta(ss []*stream.SeekableStream, args model.ArchiveArgs) (model.ArchiveMeta, error) {
reader, err := getReader(ss, args.Password)
if err != nil {
return nil, err
}
_, tree := tool.GenerateMetaTreeFromFolderTraversal(&WrapReader{Reader: reader})
return &model.ArchiveMetaInfo{
Comment: "",
Encrypted: args.Password != "",
Tree: tree,
}, nil
}
func (SevenZip) List(ss []*stream.SeekableStream, args model.ArchiveInnerArgs) ([]model.Obj, error) {
return nil, errs.NotSupport
}
func (SevenZip) Extract(ss []*stream.SeekableStream, args model.ArchiveInnerArgs) (io.ReadCloser, int64, error) {
reader, err := getReader(ss, args.Password)
if err != nil {
return nil, 0, err
}
innerPath := strings.TrimPrefix(args.InnerPath, "/")
for _, file := range reader.File {
if file.Name == innerPath {
r, e := file.Open()
if e != nil {
return nil, 0, e
}
return r, file.FileInfo().Size(), nil
}
}
return nil, 0, errs.ObjectNotFound
}
func (SevenZip) Decompress(ss []*stream.SeekableStream, outputPath string, args model.ArchiveInnerArgs, up model.UpdateProgress) error {
reader, err := getReader(ss, args.Password)
if err != nil {
return err
}
return tool.DecompressFromFolderTraversal(&WrapReader{Reader: reader}, outputPath, args, up)
}
var _ tool.Tool = (*SevenZip)(nil)
func init() {
tool.RegisterTool(SevenZip{})
}

View File

@ -0,0 +1,61 @@
package sevenzip
import (
"errors"
"github.com/alist-org/alist/v3/internal/archive/tool"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/bodgit/sevenzip"
"io"
"io/fs"
)
type WrapReader struct {
Reader *sevenzip.Reader
}
func (r *WrapReader) Files() []tool.SubFile {
ret := make([]tool.SubFile, 0, len(r.Reader.File))
for _, f := range r.Reader.File {
ret = append(ret, &WrapFile{f: f})
}
return ret
}
type WrapFile struct {
f *sevenzip.File
}
func (f *WrapFile) Name() string {
return f.f.Name
}
func (f *WrapFile) FileInfo() fs.FileInfo {
return f.f.FileInfo()
}
func (f *WrapFile) Open() (io.ReadCloser, error) {
return f.f.Open()
}
func getReader(ss []*stream.SeekableStream, password string) (*sevenzip.Reader, error) {
readerAt, err := stream.NewMultiReaderAt(ss)
if err != nil {
return nil, err
}
sr, err := sevenzip.NewReaderWithPassword(readerAt, readerAt.Size(), password)
if err != nil {
return nil, filterPassword(err)
}
return sr, nil
}
func filterPassword(err error) error {
if err != nil {
var e *sevenzip.ReadError
if errors.As(err, &e) && e.Encrypted {
return errs.WrongArchivePassword
}
}
return err
}

View File

@ -6,10 +6,16 @@ import (
"io"
)
type MultipartExtension struct {
PartFileFormat string
SecondPartIndex int
}
type Tool interface {
AcceptedExtensions() []string
GetMeta(ss *stream.SeekableStream, args model.ArchiveArgs) (model.ArchiveMeta, error)
List(ss *stream.SeekableStream, args model.ArchiveInnerArgs) ([]model.Obj, error)
Extract(ss *stream.SeekableStream, args model.ArchiveInnerArgs) (io.ReadCloser, int64, error)
Decompress(ss *stream.SeekableStream, outputPath string, args model.ArchiveInnerArgs, up model.UpdateProgress) error
AcceptedMultipartExtensions() map[string]MultipartExtension
GetMeta(ss []*stream.SeekableStream, args model.ArchiveArgs) (model.ArchiveMeta, error)
List(ss []*stream.SeekableStream, args model.ArchiveInnerArgs) ([]model.Obj, error)
Extract(ss []*stream.SeekableStream, args model.ArchiveInnerArgs) (io.ReadCloser, int64, error)
Decompress(ss []*stream.SeekableStream, outputPath string, args model.ArchiveInnerArgs, up model.UpdateProgress) error
}
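A MultipartExtension pairs a printf-style pattern for follow-on part names with the index of the second part: ".part1.rar" maps to {".part%d.rar", 2}, yielding ".part2.rar", ".part3.rar", and so on, while ".7z.001" maps to {".7z.%.3d", 2}, where %.3d zero-pads to ".7z.002". A hedged sketch of how a caller could enumerate sibling names; stem is the object name with the matched suffix removed, and exists is a hypothetical probe:

// partNames lists the follow-on parts that are actually present.
func partNames(stem string, ext MultipartExtension, exists func(string) bool) []string {
	var names []string
	for i := ext.SecondPartIndex; ; i++ {
		name := stem + fmt.Sprintf(ext.PartFileFormat, i)
		if !exists(name) {
			break
		}
		names = append(names, name)
	}
	return names
}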

View File

@ -0,0 +1,204 @@
package tool
import (
"io"
"io/fs"
"os"
stdpath "path"
"strings"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
)
type SubFile interface {
Name() string
FileInfo() fs.FileInfo
Open() (io.ReadCloser, error)
}
type CanEncryptSubFile interface {
IsEncrypted() bool
SetPassword(password string)
}
type ArchiveReader interface {
Files() []SubFile
}
func GenerateMetaTreeFromFolderTraversal(r ArchiveReader) (bool, []model.ObjTree) {
encrypted := false
dirMap := make(map[string]*model.ObjectTree)
for _, file := range r.Files() {
if encrypt, ok := file.(CanEncryptSubFile); ok && encrypt.IsEncrypted() {
encrypted = true
}
name := strings.TrimPrefix(file.Name(), "/")
var dir string
var dirObj *model.ObjectTree
isNewFolder := false
if !file.FileInfo().IsDir() {
// First, attach the file to the folder that contains it
dir = stdpath.Dir(name)
dirObj = dirMap[dir]
if dirObj == nil {
isNewFolder = dir != "."
dirObj = &model.ObjectTree{}
dirObj.IsFolder = true
dirObj.Name = stdpath.Base(dir)
dirObj.Modified = file.FileInfo().ModTime()
dirMap[dir] = dirObj
}
dirObj.Children = append(
dirObj.Children, &model.ObjectTree{
Object: *MakeModelObj(file.FileInfo()),
},
)
} else {
dir = strings.TrimSuffix(name, "/")
dirObj = dirMap[dir]
if dirObj == nil {
isNewFolder = dir != "."
dirObj = &model.ObjectTree{}
dirMap[dir] = dirObj
}
dirObj.IsFolder = true
dirObj.Name = stdpath.Base(dir)
dirObj.Modified = file.FileInfo().ModTime()
}
if isNewFolder {
// Attach the folder to its parent folder.
// Some archives record only file paths and omit folder entries,
// so loop upward, creating every missing parent folder.
parentDir := stdpath.Dir(dir)
for {
parentDirObj := dirMap[parentDir]
if parentDirObj == nil {
parentDirObj = &model.ObjectTree{}
if parentDir != "." {
parentDirObj.IsFolder = true
parentDirObj.Name = stdpath.Base(parentDir)
parentDirObj.Modified = file.FileInfo().ModTime()
}
dirMap[parentDir] = parentDirObj
}
parentDirObj.Children = append(parentDirObj.Children, dirObj)
parentDir = stdpath.Dir(parentDir)
if dirMap[parentDir] != nil {
break
}
dirObj = parentDirObj
}
}
}
if len(dirMap) > 0 {
return encrypted, dirMap["."].GetChildren()
} else {
return encrypted, nil
}
}
func MakeModelObj(file os.FileInfo) *model.Object {
return &model.Object{
Name: file.Name(),
Size: file.Size(),
Modified: file.ModTime(),
IsFolder: file.IsDir(),
}
}
type WrapFileInfo struct {
model.Obj
}
func DecompressFromFolderTraversal(r ArchiveReader, outputPath string, args model.ArchiveInnerArgs, up model.UpdateProgress) error {
var err error
files := r.Files()
if args.InnerPath == "/" {
for i, file := range files {
name := file.Name()
err = decompress(file, name, outputPath, args.Password)
if err != nil {
return err
}
up(float64(i+1) * 100.0 / float64(len(files)))
}
} else {
innerPath := strings.TrimPrefix(args.InnerPath, "/")
innerBase := stdpath.Base(innerPath)
createdBaseDir := false
for _, file := range files {
name := file.Name()
if name == innerPath {
err = _decompress(file, outputPath, args.Password, up)
if err != nil {
return err
}
break
} else if strings.HasPrefix(name, innerPath+"/") {
targetPath := stdpath.Join(outputPath, innerBase)
if !createdBaseDir {
err = os.Mkdir(targetPath, 0700)
if err != nil {
return err
}
createdBaseDir = true
}
restPath := strings.TrimPrefix(name, innerPath+"/")
err = decompress(file, restPath, targetPath, args.Password)
if err != nil {
return err
}
}
}
}
return nil
}
func decompress(file SubFile, filePath, outputPath, password string) error {
targetPath := outputPath
dir, base := stdpath.Split(filePath)
if dir != "" {
targetPath = stdpath.Join(targetPath, dir)
err := os.MkdirAll(targetPath, 0700)
if err != nil {
return err
}
}
if base != "" {
err := _decompress(file, targetPath, password, func(_ float64) {})
if err != nil {
return err
}
}
return nil
}
func _decompress(file SubFile, targetPath, password string, up model.UpdateProgress) error {
if encrypt, ok := file.(CanEncryptSubFile); ok && encrypt.IsEncrypted() {
encrypt.SetPassword(password)
}
rc, err := file.Open()
if err != nil {
return err
}
defer func() { _ = rc.Close() }()
f, err := os.OpenFile(stdpath.Join(targetPath, file.FileInfo().Name()), os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0600)
if err != nil {
return err
}
defer func() { _ = f.Close() }()
_, err = io.Copy(f, &stream.ReaderUpdatingProgress{
Reader: &stream.SimpleReaderWithSize{
Reader: rc,
Size: file.FileInfo().Size(),
},
UpdateProgress: up,
})
if err != nil {
return err
}
return nil
}
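A worked example of the traversal: given the flat entries "a/b/c.txt" and "a/d.txt", the first entry creates folder node "a/b" for c.txt, and the parent walk then creates "a" and attaches it to "."; the second entry finds "a" already in dirMap and simply appends d.txt, so the result is the single tree a → {b → {c.txt}, d.txt}.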

View File

@ -5,19 +5,28 @@ import (
)
var (
Tools = make(map[string]Tool)
Tools = make(map[string]Tool)
MultipartExtensions = make(map[string]MultipartExtension)
)
func RegisterTool(tool Tool) {
for _, ext := range tool.AcceptedExtensions() {
Tools[ext] = tool
}
for mainFile, ext := range tool.AcceptedMultipartExtensions() {
MultipartExtensions[mainFile] = ext
Tools[mainFile] = tool
}
}
func GetArchiveTool(ext string) (Tool, error) {
func GetArchiveTool(ext string) (*MultipartExtension, Tool, error) {
t, ok := Tools[ext]
if !ok {
return nil, errs.UnknownArchiveFormat
return nil, nil, errs.UnknownArchiveFormat
}
return t, nil
partExt, ok := MultipartExtensions[ext]
if !ok {
return nil, t, nil
}
return &partExt, t, nil
}
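GetArchiveTool now returns the multipart descriptor alongside the tool; the descriptor is nil for single-file formats. A usage sketch against the new signature:

partExt, t, err := tool.GetArchiveTool(".7z.001")
if err != nil {
	return err // unknown archive format
}
if partExt != nil {
	// collect ".7z.002", ".7z.003", ... using partExt.PartFileFormat
}
meta, err := t.GetMeta(ss, args) // ss: the ordered []*stream.SeekableStream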

View File

@ -2,8 +2,13 @@ package zip
import (
"bytes"
"io"
"io/fs"
stdpath "path"
"strings"
"github.com/alist-org/alist/v3/internal/archive/tool"
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/saintfish/chardet"
"github.com/yeka/zip"
@ -16,65 +21,62 @@ import (
"golang.org/x/text/encoding/unicode"
"golang.org/x/text/encoding/unicode/utf32"
"golang.org/x/text/transform"
"io"
"os"
stdpath "path"
"strings"
)
func toModelObj(file os.FileInfo) *model.Object {
return &model.Object{
Name: decodeName(file.Name()),
Size: file.Size(),
Modified: file.ModTime(),
IsFolder: file.IsDir(),
}
type WrapReader struct {
Reader *zip.Reader
}
func decompress(file *zip.File, filePath, outputPath, password string) error {
targetPath := outputPath
dir, base := stdpath.Split(filePath)
if dir != "" {
targetPath = stdpath.Join(targetPath, dir)
err := os.MkdirAll(targetPath, 0700)
if err != nil {
return err
}
func (r *WrapReader) Files() []tool.SubFile {
ret := make([]tool.SubFile, 0, len(r.Reader.File))
for _, f := range r.Reader.File {
ret = append(ret, &WrapFile{f: f})
}
if base != "" {
err := _decompress(file, targetPath, password, func(_ float64) {})
if err != nil {
return err
}
}
return nil
return ret
}
func _decompress(file *zip.File, targetPath, password string, up model.UpdateProgress) error {
if file.IsEncrypted() {
file.SetPassword(password)
type WrapFileInfo struct {
fs.FileInfo
}
func (f *WrapFileInfo) Name() string {
return decodeName(f.FileInfo.Name())
}
type WrapFile struct {
f *zip.File
}
func (f *WrapFile) Name() string {
return decodeName(f.f.Name)
}
func (f *WrapFile) FileInfo() fs.FileInfo {
return &WrapFileInfo{FileInfo: f.f.FileInfo()}
}
func (f *WrapFile) Open() (io.ReadCloser, error) {
return f.f.Open()
}
func (f *WrapFile) IsEncrypted() bool {
return f.f.IsEncrypted()
}
func (f *WrapFile) SetPassword(password string) {
f.f.SetPassword(password)
}
func getReader(ss []*stream.SeekableStream) (*zip.Reader, error) {
if len(ss) > 1 && stdpath.Ext(ss[1].GetName()) == ".z01" {
// FIXME: Incorrect parsing method for standard multipart zip format
ss = append(ss[1:], ss[0])
}
rc, err := file.Open()
reader, err := stream.NewMultiReaderAt(ss)
if err != nil {
return err
return nil, err
}
defer rc.Close()
f, err := os.OpenFile(stdpath.Join(targetPath, decodeName(file.FileInfo().Name())), os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0600)
if err != nil {
return err
}
defer f.Close()
_, err = io.Copy(f, &stream.ReaderUpdatingProgress{
Reader: &stream.SimpleReaderWithSize{
Reader: rc,
Size: file.FileInfo().Size(),
},
UpdateProgress: up,
})
if err != nil {
return err
}
return nil
return zip.NewReader(reader, reader.Size())
}
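In a standard split zip the central directory sits in the final ".zip" file while ".z01", ".z02", and so on carry the leading data, so getReader rotates the ".zip" stream to the back before NewMultiReaderAt concatenates everything; offsets in the joined stream then line up with the on-disk layout (the FIXME above acknowledges this is still an approximation of the real format). The rotation in isolation:

parts := []string{"a.zip", "a.z01", "a.z02"} // streams arrive with .zip first
parts = append(parts[1:], parts[0])          // now: a.z01, a.z02, a.zip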
func filterPassword(err error) error {

View File

@ -2,7 +2,6 @@ package zip
import (
"io"
"os"
stdpath "path"
"strings"
@ -10,106 +9,37 @@ import (
"github.com/alist-org/alist/v3/internal/errs"
"github.com/alist-org/alist/v3/internal/model"
"github.com/alist-org/alist/v3/internal/stream"
"github.com/yeka/zip"
)
type Zip struct {
}
func (*Zip) AcceptedExtensions() []string {
return []string{".zip"}
func (Zip) AcceptedExtensions() []string {
return []string{}
}
func (*Zip) GetMeta(ss *stream.SeekableStream, args model.ArchiveArgs) (model.ArchiveMeta, error) {
reader, err := stream.NewReadAtSeeker(ss, 0)
func (Zip) AcceptedMultipartExtensions() map[string]tool.MultipartExtension {
return map[string]tool.MultipartExtension{
".zip": {".z%.2d", 1},
".zip.001": {".zip.%.3d", 2},
}
}
func (Zip) GetMeta(ss []*stream.SeekableStream, args model.ArchiveArgs) (model.ArchiveMeta, error) {
zipReader, err := getReader(ss)
if err != nil {
return nil, err
}
zipReader, err := zip.NewReader(reader, ss.GetSize())
if err != nil {
return nil, err
}
encrypted := false
dirMap := make(map[string]*model.ObjectTree)
dirMap["."] = &model.ObjectTree{}
for _, file := range zipReader.File {
if file.IsEncrypted() {
encrypted = true
}
name := strings.TrimPrefix(decodeName(file.Name), "/")
var dir string
var dirObj *model.ObjectTree
isNewFolder := false
if !file.FileInfo().IsDir() {
// First, attach the file to the folder that contains it
dir = stdpath.Dir(name)
dirObj = dirMap[dir]
if dirObj == nil {
isNewFolder = true
dirObj = &model.ObjectTree{}
dirObj.IsFolder = true
dirObj.Name = stdpath.Base(dir)
dirObj.Modified = file.ModTime()
dirMap[dir] = dirObj
}
dirObj.Children = append(
dirObj.Children, &model.ObjectTree{
Object: *toModelObj(file.FileInfo()),
},
)
} else {
dir = strings.TrimSuffix(name, "/")
dirObj = dirMap[dir]
if dirObj == nil {
isNewFolder = true
dirObj = &model.ObjectTree{}
dirMap[dir] = dirObj
}
dirObj.IsFolder = true
dirObj.Name = stdpath.Base(dir)
dirObj.Modified = file.ModTime()
dirObj.Children = make([]model.ObjTree, 0)
}
if isNewFolder {
// Attach the folder to its parent folder
dir = stdpath.Dir(dir)
pDirObj := dirMap[dir]
if pDirObj != nil {
pDirObj.Children = append(pDirObj.Children, dirObj)
continue
}
for {
// Some archives record only file paths and omit folder entries
pDirObj = &model.ObjectTree{}
pDirObj.IsFolder = true
pDirObj.Name = stdpath.Base(dir)
pDirObj.Modified = file.ModTime()
dirMap[dir] = pDirObj
pDirObj.Children = append(pDirObj.Children, dirObj)
dir = stdpath.Dir(dir)
if dirMap[dir] != nil {
break
}
dirObj = pDirObj
}
}
}
encrypted, tree := tool.GenerateMetaTreeFromFolderTraversal(&WrapReader{Reader: zipReader})
return &model.ArchiveMetaInfo{
Comment: zipReader.Comment,
Encrypted: encrypted,
Tree: dirMap["."].GetChildren(),
Tree: tree,
}, nil
}
func (*Zip) List(ss *stream.SeekableStream, args model.ArchiveInnerArgs) ([]model.Obj, error) {
reader, err := stream.NewReadAtSeeker(ss, 0)
if err != nil {
return nil, err
}
zipReader, err := zip.NewReader(reader, ss.GetSize())
func (Zip) List(ss []*stream.SeekableStream, args model.ArchiveInnerArgs) ([]model.Obj, error) {
zipReader, err := getReader(ss)
if err != nil {
return nil, err
}
@ -134,13 +64,13 @@ func (*Zip) List(ss *stream.SeekableStream, args model.ArchiveInnerArgs) ([]mode
if dir == nil && len(strs) == 2 {
dir = &model.Object{
Name: strs[0],
Modified: ss.ModTime(),
Modified: ss[0].ModTime(),
IsFolder: true,
}
}
continue
}
ret = append(ret, toModelObj(file.FileInfo()))
ret = append(ret, tool.MakeModelObj(&WrapFileInfo{FileInfo: file.FileInfo()}))
}
if len(ret) == 0 && dir != nil {
ret = append(ret, dir)
@ -157,7 +87,7 @@ func (*Zip) List(ss *stream.SeekableStream, args model.ArchiveInnerArgs) ([]mode
continue
}
exist = true
ret = append(ret, toModelObj(file.FileInfo()))
ret = append(ret, tool.MakeModelObj(&WrapFileInfo{file.FileInfo()}))
}
if !exist {
return nil, errs.ObjectNotFound
@ -166,12 +96,8 @@ func (*Zip) List(ss *stream.SeekableStream, args model.ArchiveInnerArgs) ([]mode
}
}
func (*Zip) Extract(ss *stream.SeekableStream, args model.ArchiveInnerArgs) (io.ReadCloser, int64, error) {
reader, err := stream.NewReadAtSeeker(ss, 0)
if err != nil {
return nil, 0, err
}
zipReader, err := zip.NewReader(reader, ss.GetSize())
func (Zip) Extract(ss []*stream.SeekableStream, args model.ArchiveInnerArgs) (io.ReadCloser, int64, error) {
zipReader, err := getReader(ss)
if err != nil {
return nil, 0, err
}
@ -191,58 +117,16 @@ func (*Zip) Extract(ss *stream.SeekableStream, args model.ArchiveInnerArgs) (io.
return nil, 0, errs.ObjectNotFound
}
func (*Zip) Decompress(ss *stream.SeekableStream, outputPath string, args model.ArchiveInnerArgs, up model.UpdateProgress) error {
reader, err := stream.NewReadAtSeeker(ss, 0)
func (Zip) Decompress(ss []*stream.SeekableStream, outputPath string, args model.ArchiveInnerArgs, up model.UpdateProgress) error {
zipReader, err := getReader(ss)
if err != nil {
return err
}
zipReader, err := zip.NewReader(reader, ss.GetSize())
if err != nil {
return err
}
if args.InnerPath == "/" {
for i, file := range zipReader.File {
name := decodeName(file.Name)
err = decompress(file, name, outputPath, args.Password)
if err != nil {
return err
}
up(float64(i+1) * 100.0 / float64(len(zipReader.File)))
}
} else {
innerPath := strings.TrimPrefix(args.InnerPath, "/")
innerBase := stdpath.Base(innerPath)
createdBaseDir := false
for _, file := range zipReader.File {
name := decodeName(file.Name)
if name == innerPath {
err = _decompress(file, outputPath, args.Password, up)
if err != nil {
return err
}
break
} else if strings.HasPrefix(name, innerPath+"/") {
targetPath := stdpath.Join(outputPath, innerBase)
if !createdBaseDir {
err = os.Mkdir(targetPath, 0700)
if err != nil {
return err
}
createdBaseDir = true
}
restPath := strings.TrimPrefix(name, innerPath+"/")
err = decompress(file, restPath, targetPath, args.Password)
if err != nil {
return err
}
}
}
}
return nil
return tool.DecompressFromFolderTraversal(&WrapReader{Reader: zipReader}, outputPath, args, up)
}
var _ tool.Tool = (*Zip)(nil)
func init() {
tool.RegisterTool(&Zip{})
tool.RegisterTool(Zip{})
}

Some files were not shown because too many files have changed in this diff.