{
  "base_url": "https://talk.nervos.org",
  "generated_at": "2026-05-06T18:15:15.665309+00:00",
  "since": "2026-05-05T18:15:08.666956+00:00",
  "until": "2026-05-06T18:15:08.666956+00:00",
  "window_hours": 24,
  "topics": [
    {
      "topic_id": 9995,
      "title": "Spark Program | Nervos Brain - A Global Developer Onboarding Engine and Cross-Language Hub Powered by Agentic RAG",
      "slug": "spark-program-nervos-brain-a-global-developer-onboarding-engine-and-cross-language-hub-powered-by-agentic-rag",
      "url": "https://talk.nervos.org/t/spark-program-nervos-brain-a-global-developer-onboarding-engine-and-cross-language-hub-powered-by-agentic-rag/9995",
      "created_at": "2026-02-25T09:58:43.726000+00:00",
      "last_posted_at": "2026-05-06T18:04:43.795000+00:00",
      "category_id": 49,
      "tags": [
        "In-Progress",
        "Spark-Program"
      ],
      "posters": [
        "Original Poster, Most Recent Poster",
        "Frequent Poster",
        "Frequent Poster",
        "Frequent Poster"
      ],
      "recent_posts": [
        {
          "post_id": 24151,
          "post_number": 34,
          "topic_id": 9995,
          "topic_title": "Spark Program | Nervos Brain - A Global Developer Onboarding Engine and Cross-Language Hub Powered by Agentic RAG",
          "topic_slug": "spark-program-nervos-brain-a-global-developer-onboarding-engine-and-cross-language-hub-powered-by-agentic-rag",
          "author": "IrisNeko",
          "created_at": "2026-05-06T18:04:43.795000+00:00",
          "updated_at": "2026-05-06T18:04:43.795000+00:00",
          "reply_to_post_number": null,
          "url": "https://talk.nervos.org/t/spark-program-nervos-brain-a-global-developer-onboarding-engine-and-cross-language-hub-powered-by-agentic-rag/9995/34",
          "content_text": "第八周周报\n一、本周目标\n本周重点从离线评测和工具闭环，转向 Telegram 群内测、线上部署和真实使用反馈。目标不只是让 Bot 能在测试群里回答问题，而是观察它在真实群聊环境中的稳定性：是否会误触发、是否能隔离不同用户上下文、是否能正确检索官方文档和论坛资料、是否会过度追问，以及线上异常能否通过日志快速定位并修复。\n本周关注的核心问题包括：\n群聊中不同用户的上下文是否会互相串扰。\nBot 是否会错误插入普通群聊。\n用户连续追问时，系统是否能正确理解上文。\n检索结果、引用和 source 选择是否稳定。\nAgent 是否存在过度追问、答非所问、格式异常或无限重试等影响体验的问题。\n多用户同时提问时，Telegram runtime 是否能并发处理。\n用户上传文本文件或图片时，系统是否能把附件纳入回答上下文。\n线上问题是否能通过日志、debug event 和回归测试快速定位。\n二、本周完成\n2.1 Telegram 群内测与真实反馈闭环\n本周开始在 Telegram 群中进行真实内测。测试不再只依赖人工构造的离线样例，而是让用户围绕 CKB、Nervos、Fiber、CCC、Agent 钱包、链上应用、游戏项目、新手学习路线和官方资料等主题进行自然提问和连续追问。\n真实群测暴露出的行为比离线样例复杂得多：用户会用“它”“这个”“继续说”“有没有靠谱资料”“官方没有教程吗”这类短句承接上文；不同用户会在同一个群里交叉提问；用户也会直接纠正 Bot“你是不是回复错问题了”。这些反馈帮助我们把问题从“可能发生”变成“有日志、有复现、有测试”的工程闭环。\n2.2 群内用户上下文隔离与触发策略\n本周完成了 Telegram 群内短期记忆隔离改造，将上下文粒度调整为“群/频道 + 用户”。系统默认读取同一用户在同一群内最近 20 条与 Bot 相关的消息，并注入 full graph 作为短期上下文。\n这样可以避免同一个群里 A 用户和 B 用户的对话互相污染，同时支持“我刚才问了什么”“继续说”“看看上文”这类短期追问。AskUser 的 checkpoint 恢复逻辑也同步改成按用户隔离，避免 B 用户误恢复 A 用户的补参流程。\n群聊触发策略也进行了修复：\n私聊直接响应。\n群聊中只有被 @、被回复，或收到 Bot 命令时才响应。\n普通群聊直接忽略，不发送 typing，不写入 memory，也不进入 graph。\n这让 Bot 在群内的行为更自然：用户需要它时可以明确唤起，不需要时它保持安静。\n2.3 多用户并发处理\n内测中确认了一个重要运行时问题：如果 Telegram polling 串行处理所有 update，不同用户同时提问时会互相排队，复杂问题会明显拖慢其他用户。\n本周将 Telegram polling gateway 改成多线程并发处理：\n不同用户/会话可以并发进入 graph。\n同一个 chat + user 的消息仍按顺序处理，避免同一用户连续追问乱序。\nTelegram API 发送、debug event 写入和 feedback 写入增加锁保护。\nSQLite memory 连接配置调整为允许多线程访问。\n这部分没有采用多进程，避免引入额外的进程间状态同步和本地 Qdrant/SQLite 资源竞争。\n2.4 线上无限调用与 Telegram 发送失败修复\n服务器重启后曾出现一次严重线上问题：Agent 不断调用模型 API，但用户一直收不到回复。定位后发现根因不是模型，也不是网络，而是 Telegram sendMessage 返回 400 后异常向外抛出，导致当前 update 的 offset 没有推进。polling 下一轮又拿到同一个 update，于是同一条消息被反复处理，造成模型 API 被重复调用。\n本周修复为：\nsendMessage 失败后先从 MarkdownV2 降级为纯文本重试。\n如果 reply 发送失败，再去掉 reply_to_message_id 作为普通消息重试。\nprocess_update 失败也不阻塞 offset 推进，避免同一 update 无限重放。\n补充发送失败、offset 推进和 fallback 发送的回归测试。\n这项修复显著降低了线上 Bot 因 Telegram 格式或 reply 状态异常导致循环消耗模型调用的风险。\n2.5 多库检索与检索性能修复\n本周将 GitHub 文档库和 Nervos Talk 论坛数据接入当前 runtime，形成多库检索能力。系统启动时会加载多个 retrieval 
backend，并在日志中打印已加载的后端，方便确认线上运行环境是否接入了论坛库和文档库。\n同时修复了论坛检索的超时问题。此前 discourse_query 会在大库上重建或扫描过多候选，导致线上工具调用超过 10 秒并被取消。修复后改为先通过 SQLite 做更小范围的候选预筛，再进行排序，从而让论坛检索能在服务器配置下稳定返回。\n工具调用默认超时时间也从 10 秒延长到 60 秒，避免慢但有效的检索被过早取消。\n2.6 检索路由、source registry 与 qdrant fallback\n真实对话中暴露出一个新的检索路由问题：用户问“有没有比较靠谱的资料可以看？”或“官方没有比较好的教程吗？”时，LLM 会自然生成 filters.source=official_docs，但当前数据库中官方文档实际 source 是 github_docs。这导致 qdrant_search 很快返回空结果，然后系统给出“知识库没有足够证据”的错误回复。\n本周对此做了更稳的架构调整：\n新增 retrieval source registry，统一定义当前合法 source。\nplanner prompt 注入合法 source 和每个 source 的内容范围。\n运行时对 LLM 生成的 filters 做 normalize 和 validation。\n将 official_docs、docs、documentation 等别名映射到 github_docs。\n对带 filter 的 qdrant_search 空结果，在预算允许时自动去掉 filter 重试一次。\n广泛资料类问题优先走快速 vector search，避免无过滤条件下进入 BM25/fuzzy/exact 慢路径。\n当前实际可用 source 包括：\ngithub_docs：官方文档、docs.nervos.org、RFC、CKB/CCC/Fiber 仓库文档、SDK 文档和代码示例。\nnervos_talk：Nervos Talk 论坛帖子、社区讨论、Spark/grant/proposal、生态项目介绍和真实案例。\n这让 source 选择不再依赖模型猜测，也避免了模型生成不存在的 source 后直接导致检索为空。\n2.7 Prompt 与 ask_user 路由修复\n本周对 full graph 中的 InfoGap、RetrieverPlanner、Reflection、DirectAnswer、AnswerComposer 等 prompt 进行了集中调整。\n调优重点不是增加大量关键词规则，而是让 LLM 更清楚地区分：\n哪些信息需要问用户；\n哪些信息属于公开资料，应由系统检索；\n什么时候应该基于合理默认假设继续回答；\n什么时候应该说明证据边界，而不是泛泛拒答；\n什么时候可以直接利用上文回答，而不是重新检索或重复追问；\n用户在纠正 Bot 答非所问时，应直接道歉并重新聚焦，而不是继续检索旧问题。\n同时，路由层增加了保护逻辑：\nask_user 不再输出硬编码兜底回复。\n如果没有真正的用户私有必填信息，graph 不应进入 ask_user。\npost-answer reflection 如果把公开资料缺口错误路由到 ask_user，会被改路由到继续检索、改写或回答。\n“你是不是回复错问题了”“答非所问”“不是这个问题”这类质量反馈会直接进入 direct answer，不触发资料检索。\n这部分修复了“靠谱资料”追问被错误回复成 Fiber/testnet 默认假设，以及用户纠错后反而收到“知识库证据不足”的问题。\n2.8 Telegram 文件与图片输入支持\n本周补充了 Telegram 附件处理能力。此前系统只能识别用户上传了文件或图片，但不会下载，也不会把内容交给模型。\n现在支持：\n文本类文件下载和内容注入，包括 .txt、.md、.json、.yaml、.csv、.log、常见代码文件和配置文件。\n图片下载后保存本地路径，并在回答生成阶段作为 image input 传给支持多模态的 LLM。\n对附件设置大小限制：文本文件 256KB，图片 20MB。\n明确不支持 PDF，避免引入复杂解析和不稳定依赖。\n这让用户可以在 Telegram 中上传日志、代码片段、配置文件或截图，让 Bot 直接基于附件内容回答。\n2.9 Debug 日志与部署支持\n为了支持群内测问题定位，本周补充了更完整的 debug 信息，包括节点耗时、LLM 调用摘要、检索证据数量、tool trace、reflection 决策、route 
decision、终止证据不足原因等。\n同时补充了服务器部署相关支持：\n增加 Linux 版 Telegram Bot 重启脚本。\n调整 environment.yml，去掉 Windows 专属依赖，改成跨平台最小环境。\n整理 .gitignore，避免提交真实配置、日志、群聊记录、周报、个人文档和缓存文件。\n清理 GitHub 仓库历史，确保公开仓库不包含敏感信息和本地资料。\n通过日志确认线上 bot 进程启动时间、加载后端和实际请求链路。\n三、Telegram 群内测反馈\n本周内测的主要价值，是让系统问题从“离线样例中可能发生”变成“真实用户已经遇到”。几个比较典型的反馈包括：\n用户希望 Bot 能理解连续追问，而不是每轮都像重新开始。\n用户不希望 Bot 插入普通群聊。\n当用户说“我是萌新，你自己决定”时，Bot 应该主动给方案，而不是继续追问。\n当用户问“有没有靠谱资料”“官方有没有教程”时，Bot 应该主动查官方文档，而不是给知识库证据不足。\n当用户说“你是不是回复错问题了”时，Bot 应该识别这是对回答质量的反馈，而不是继续沿着旧问题检索。\n技术回答需要稳定引用资料来源，否则用户难以判断可信度。\n对代码示例类问题，即使证据不完整，也应给出清晰的骨架和边界说明。\n响应时间需要继续优化，尤其是复杂问题经过多节点 graph 后等待时间较长。\n用户希望可以上传文件、日志或截图，让 Bot 直接分析。\n这些反馈说明，当前系统的核心挑战已经不只是“能不能回答”，而是“是否像一个可靠、不过度打扰、能持续理解上下文、能使用资料和附件的群内助手”。\n四、测试与验证\n本周围绕 Telegram 群内测暴露的问题补充和运行了多组回归测试，重点覆盖：\n群内同用户上下文读取。\n不同用户记忆隔离。\nAskUser checkpoint 按用户恢复。\n群聊 mention-only 触发策略。\n未触发消息不写入 memory。\n多线程并发处理与同一用户消息顺序保证。\nTelegram sendMessage Markdown/plain/detached fallback。\noffset 在异常场景下仍推进，避免同一 update 无限重放。\n多库检索、论坛检索性能和工具 60 秒超时。\nqdrant_search broad resource 查询优先 vector path。\nsource registry 注入、source alias 规范化和空结果 fallback。\nresponse-quality feedback 直接回复，不进入检索。\npost-answer ask_user 防护和 ask_user 硬编码兜底移除。\nTelegram 文本文件下载、内容注入和图片作为 LLM image input。\nCSAT、feedback、BadCase 逻辑回归。\n阶段性相关回归测试结果为：\n192 passed, 2 warnings\n后续需要把本周真实群测 bad case 继续沉淀为标准评测样例，避免类似问题再次回归。\n五、阶段性成果\n本周完成后，系统从“群内可测”进一步推进到“线上可诊断、可修复、可回归”。主要成果包括：\nTelegram 群内测开始形成真实反馈闭环。\n群内上下文实现按用户隔离。\nBot 在群聊中默认只响应明确触发，减少打扰。\n不同用户问题可以多线程并发处理。\nGitHub 文档库和 Nervos Talk 论坛库进入 runtime 检索路径。\n检索 source registry 和 filter 校验让资料检索更稳定。\n过度追问、回答滞后、答非所问、引用缺失、Markdown 异常等真实问题得到系统修复。\nTelegram 发送失败不再导致 update 重放和模型 API 无限调用。\nTelegram 文本文件和图片开始进入模型上下文。\nDebug 日志和部署脚本更适合线上排查。\n六、当前问题\n响应延迟仍需要优化。复杂问题经过多节点 graph、检索、反思和回答生成后，耗时仍偏长。\nPrompt 仍需要根据真实内测继续调优，特别是“主动给默认方案”和“避免编造”之间的平衡。\n引用稳定性还需要继续观察，尤其是连续追问、资料链接和代码示例场景。\nsource registry 目前是静态配置，后续最好从实际 archive metadata 自动生成或定期校验。\n附件支持目前覆盖文本文件和图片，暂不支持 PDF、音频转写和复杂 Office 文档。\n内测反馈还没有形成结构化问卷和统计指标，目前仍偏人工观察。\n部署流程还可以继续标准化，例如补充 
systemd、健康检查和日志轮转。\n七、下周计划（Week 9）\n下周重点不再是继续堆功能，而是围绕 Telegram 群内测反馈做系统调优、用户评估设计和线上运行稳定性提升。\n整理本周内测反馈和 bad case。\n将过度追问、回答滞后、引用缺失、上下文误用、检索为空、答非所问、发送失败等问题归类，形成可复盘的 bad case 列表。\n根据内测反馈继续调优 prompt 和 graph 行为。\n重点优化小白问题、连续追问、代码示例、公开资料检索、默认假设选择和证据边界说明。\n完善 source registry 和检索 metadata。\n让 planner 更稳定地理解 github_docs、nervos_talk 等 source 的内容范围，并考虑从 archive metadata 自动生成 source 提示，减少手工维护。\n制定 Telegram 内测问卷。\n设计一份简短问卷，用于收集用户对回答质量、上下文理解、响应速度、引用可信度、追问体验、附件分析能力和整体可用性的主观评价。\n建立内测反馈指标。\n初步统计满意度、常见失败类型、用户是否需要重复解释上下文、是否认为 Bot 打扰群聊、是否愿意继续使用等指标。\n将高价值内测样例加入 evaluation dataset。\n把真实群测中暴露出的典型问题转成可重复运行的评测样例，形成“内测反馈 → 测试集 → 回归验证”的闭环。\n继续优化响应延迟。\n基于 node timings 和 LLM trace 分析耗时瓶颈，优先优化简单问题、常见追问和资料类查询的快答路径。\n完善服务器部署与运行监控。\n在现有重启脚本基础上，补充更稳定的守护方式和日志观察流程，降低内测期间人工维护成本。",
          "content_html": "<h1><a name=\"p-24151-h-1\" class=\"anchor\" href=\"#p-24151-h-1\" aria-label=\"Heading link\"></a>第八周周报</h1>\n<h2><a name=\"p-24151-h-2\" class=\"anchor\" href=\"#p-24151-h-2\" aria-label=\"Heading link\"></a>一、本周目标</h2>\n<p>本周重点从离线评测和工具闭环，转向 Telegram 群内测、线上部署和真实使用反馈。目标不只是让 Bot 能在测试群里回答问题，而是观察它在真实群聊环境中的稳定性：是否会误触发、是否能隔离不同用户上下文、是否能正确检索官方文档和论坛资料、是否会过度追问，以及线上异常能否通过日志快速定位并修复。</p>\n<p>本周关注的核心问题包括：</p>\n<ol>\n<li>\n<p>群聊中不同用户的上下文是否会互相串扰。</p>\n</li>\n<li>\n<p>Bot 是否会错误插入普通群聊。</p>\n</li>\n<li>\n<p>用户连续追问时，系统是否能正确理解上文。</p>\n</li>\n<li>\n<p>检索结果、引用和 source 选择是否稳定。</p>\n</li>\n<li>\n<p>Agent 是否存在过度追问、答非所问、格式异常或无限重试等影响体验的问题。</p>\n</li>\n<li>\n<p>多用户同时提问时，Telegram runtime 是否能并发处理。</p>\n</li>\n<li>\n<p>用户上传文本文件或图片时，系统是否能把附件纳入回答上下文。</p>\n</li>\n<li>\n<p>线上问题是否能通过日志、debug event 和回归测试快速定位。</p>\n</li>\n</ol>\n<h2><a name=\"p-24151-h-3\" class=\"anchor\" href=\"#p-24151-h-3\" aria-label=\"Heading link\"></a>二、本周完成</h2>\n<h3><a name=\"p-24151-h-21-telegram-4\" class=\"anchor\" href=\"#p-24151-h-21-telegram-4\" aria-label=\"Heading link\"></a>2.1 Telegram 群内测与真实反馈闭环</h3>\n<p>本周开始在 Telegram 群中进行真实内测。测试不再只依赖人工构造的离线样例，而是让用户围绕 CKB、Nervos、Fiber、CCC、Agent 钱包、链上应用、游戏项目、新手学习路线和官方资料等主题进行自然提问和连续追问。</p>\n<p>真实群测暴露出的行为比离线样例复杂得多：用户会用“它”“这个”“继续说”“有没有靠谱资料”“官方没有教程吗”这类短句承接上文；不同用户会在同一个群里交叉提问；用户也会直接纠正 Bot“你是不是回复错问题了”。这些反馈帮助我们把问题从“可能发生”变成“有日志、有复现、有测试”的工程闭环。</p>\n<h3><a name=\"p-24151-h-22-5\" class=\"anchor\" href=\"#p-24151-h-22-5\" aria-label=\"Heading link\"></a>2.2 群内用户上下文隔离与触发策略</h3>\n<p>本周完成了 Telegram 群内短期记忆隔离改造，将上下文粒度调整为“群/频道 + 用户”。系统默认读取同一用户在同一群内最近 20 条与 Bot 相关的消息，并注入 full graph 作为短期上下文。</p>\n<p>这样可以避免同一个群里 A 用户和 B 用户的对话互相污染，同时支持“我刚才问了什么”“继续说”“看看上文”这类短期追问。AskUser 的 checkpoint 恢复逻辑也同步改成按用户隔离，避免 B 用户误恢复 A 用户的补参流程。</p>\n<p>群聊触发策略也进行了修复：</p>\n<ol>\n<li>\n<p>私聊直接响应。</p>\n</li>\n<li>\n<p>群聊中只有被 @、被回复，或收到 Bot 命令时才响应。</p>\n</li>\n<li>\n<p>普通群聊直接忽略，不发送 typing，不写入 memory，也不进入 graph。</p>\n</li>\n</ol>\n<p>这让 Bot 在群内的行为更自然：用户需要它时可以明确唤起，不需要时它保持安静。</p>\n<h3><a name=\"p-24151-h-23-6\" 
class=\"anchor\" href=\"#p-24151-h-23-6\" aria-label=\"Heading link\"></a>2.3 多用户并发处理</h3>\n<p>内测中确认了一个重要运行时问题：如果 Telegram polling 串行处理所有 update，不同用户同时提问时会互相排队，复杂问题会明显拖慢其他用户。</p>\n<p>本周将 Telegram polling gateway 改成多线程并发处理：</p>\n<ol>\n<li>\n<p>不同用户/会话可以并发进入 graph。</p>\n</li>\n<li>\n<p>同一个 chat + user 的消息仍按顺序处理，避免同一用户连续追问乱序。</p>\n</li>\n<li>\n<p>Telegram API 发送、debug event 写入和 feedback 写入增加锁保护。</p>\n</li>\n<li>\n<p>SQLite memory 连接配置调整为允许多线程访问。</p>\n</li>\n</ol>\n<p>这部分没有采用多进程，避免引入额外的进程间状态同步和本地 Qdrant/SQLite 资源竞争。</p>\n<h3><a name=\"p-24151-h-24-telegram-7\" class=\"anchor\" href=\"#p-24151-h-24-telegram-7\" aria-label=\"Heading link\"></a>2.4 线上无限调用与 Telegram 发送失败修复</h3>\n<p>服务器重启后曾出现一次严重线上问题：Agent 不断调用模型 API，但用户一直收不到回复。定位后发现根因不是模型，也不是网络，而是 Telegram <code>sendMessage</code> 返回 400 后异常向外抛出，导致当前 update 的 offset 没有推进。polling 下一轮又拿到同一个 update，于是同一条消息被反复处理，造成模型 API 被重复调用。</p>\n<p>本周修复为：</p>\n<ol>\n<li>\n<p><code>sendMessage</code> 失败后先从 MarkdownV2 降级为纯文本重试。</p>\n</li>\n<li>\n<p>如果 reply 发送失败，再去掉 <code>reply_to_message_id</code> 作为普通消息重试。</p>\n</li>\n<li>\n<p><code>process_update</code> 失败也不阻塞 offset 推进，避免同一 update 无限重放。</p>\n</li>\n<li>\n<p>补充发送失败、offset 推进和 fallback 发送的回归测试。</p>\n</li>\n</ol>\n<p>这项修复显著降低了线上 Bot 因 Telegram 格式或 reply 状态异常导致循环消耗模型调用的风险。</p>\n<h3><a name=\"p-24151-h-25-8\" class=\"anchor\" href=\"#p-24151-h-25-8\" aria-label=\"Heading link\"></a>2.5 多库检索与检索性能修复</h3>\n<p>本周将 GitHub 文档库和 Nervos Talk 论坛数据接入当前 runtime，形成多库检索能力。系统启动时会加载多个 retrieval backend，并在日志中打印已加载的后端，方便确认线上运行环境是否接入了论坛库和文档库。</p>\n<p>同时修复了论坛检索的超时问题。此前 <code>discourse_query</code> 会在大库上重建或扫描过多候选，导致线上工具调用超过 10 秒并被取消。修复后改为先通过 SQLite 做更小范围的候选预筛，再进行排序，从而让论坛检索能在服务器配置下稳定返回。</p>\n<p>工具调用默认超时时间也从 10 秒延长到 60 秒，避免慢但有效的检索被过早取消。</p>\n<h3><a name=\"p-24151-h-26-source-registry-qdrant-fallback-9\" class=\"anchor\" href=\"#p-24151-h-26-source-registry-qdrant-fallback-9\" aria-label=\"Heading link\"></a>2.6 检索路由、source registry 与 qdrant 
fallback</h3>\n<p>真实对话中暴露出一个新的检索路由问题：用户问“有没有比较靠谱的资料可以看？”或“官方没有比较好的教程吗？”时，LLM 会自然生成 <code>filters.source=official_docs</code>，但当前数据库中官方文档实际 source 是 <code>github_docs</code>。这导致 <code>qdrant_search</code> 很快返回空结果，然后系统给出“知识库没有足够证据”的错误回复。</p>\n<p>本周对此做了更稳的架构调整：</p>\n<ol>\n<li>\n<p>新增 retrieval source registry，统一定义当前合法 source。</p>\n</li>\n<li>\n<p>planner prompt 注入合法 source 和每个 source 的内容范围。</p>\n</li>\n<li>\n<p>运行时对 LLM 生成的 filters 做 normalize 和 validation。</p>\n</li>\n<li>\n<p>将 <code>official_docs</code>、<code>docs</code>、<code>documentation</code> 等别名映射到 <code>github_docs</code>。</p>\n</li>\n<li>\n<p>对带 filter 的 <code>qdrant_search</code> 空结果，在预算允许时自动去掉 filter 重试一次。</p>\n</li>\n<li>\n<p>广泛资料类问题优先走快速 vector search，避免无过滤条件下进入 BM25/fuzzy/exact 慢路径。</p>\n</li>\n</ol>\n<p>当前实际可用 source 包括：</p>\n<ol>\n<li>\n<p><code>github_docs</code>：官方文档、<a href=\"http://docs.nervos.org\" rel=\"noopener nofollow ugc\">docs.nervos.org</a>、RFC、CKB/CCC/Fiber 仓库文档、SDK 文档和代码示例。</p>\n</li>\n<li>\n<p><code>nervos_talk</code>：Nervos Talk 论坛帖子、社区讨论、Spark/grant/proposal、生态项目介绍和真实案例。</p>\n</li>\n</ol>\n<p>这让 source 选择不再依赖模型猜测，也避免了模型生成不存在的 source 后直接导致检索为空。</p>\n<h3><a name=\"p-24151-h-27-prompt-ask_user-10\" class=\"anchor\" href=\"#p-24151-h-27-prompt-ask_user-10\" aria-label=\"Heading link\"></a>2.7 Prompt 与 ask_user 路由修复</h3>\n<p>本周对 full graph 中的 InfoGap、RetrieverPlanner、Reflection、DirectAnswer、AnswerComposer 等 prompt 进行了集中调整。</p>\n<p>调优重点不是增加大量关键词规则，而是让 LLM 更清楚地区分：</p>\n<ol>\n<li>\n<p>哪些信息需要问用户；</p>\n</li>\n<li>\n<p>哪些信息属于公开资料，应由系统检索；</p>\n</li>\n<li>\n<p>什么时候应该基于合理默认假设继续回答；</p>\n</li>\n<li>\n<p>什么时候应该说明证据边界，而不是泛泛拒答；</p>\n</li>\n<li>\n<p>什么时候可以直接利用上文回答，而不是重新检索或重复追问；</p>\n</li>\n<li>\n<p>用户在纠正 Bot 答非所问时，应直接道歉并重新聚焦，而不是继续检索旧问题。</p>\n</li>\n</ol>\n<p>同时，路由层增加了保护逻辑：</p>\n<ol>\n<li>\n<p><code>ask_user</code> 不再输出硬编码兜底回复。</p>\n</li>\n<li>\n<p>如果没有真正的用户私有必填信息，graph 不应进入 <code>ask_user</code>。</p>\n</li>\n<li>\n<p>post-answer reflection 如果把公开资料缺口错误路由到 
<code>ask_user</code>，会被改路由到继续检索、改写或回答。</p>\n</li>\n<li>\n<p>“你是不是回复错问题了”“答非所问”“不是这个问题”这类质量反馈会直接进入 direct answer，不触发资料检索。</p>\n</li>\n</ol>\n<p>这部分修复了“靠谱资料”追问被错误回复成 Fiber/testnet 默认假设，以及用户纠错后反而收到“知识库证据不足”的问题。</p>\n<h3><a name=\"p-24151-h-28-telegram-11\" class=\"anchor\" href=\"#p-24151-h-28-telegram-11\" aria-label=\"Heading link\"></a>2.8 Telegram 文件与图片输入支持</h3>\n<p>本周补充了 Telegram 附件处理能力。此前系统只能识别用户上传了文件或图片，但不会下载，也不会把内容交给模型。</p>\n<p>现在支持：</p>\n<ol>\n<li>\n<p>文本类文件下载和内容注入，包括 <code>.txt</code>、<code>.md</code>、<code>.json</code>、<code>.yaml</code>、<code>.csv</code>、<code>.log</code>、常见代码文件和配置文件。</p>\n</li>\n<li>\n<p>图片下载后保存本地路径，并在回答生成阶段作为 image input 传给支持多模态的 LLM。</p>\n</li>\n<li>\n<p>对附件设置大小限制：文本文件 256KB，图片 20MB。</p>\n</li>\n<li>\n<p>明确不支持 PDF，避免引入复杂解析和不稳定依赖。</p>\n</li>\n</ol>\n<p>这让用户可以在 Telegram 中上传日志、代码片段、配置文件或截图，让 Bot 直接基于附件内容回答。</p>\n<h3><a name=\"p-24151-h-29-debug-12\" class=\"anchor\" href=\"#p-24151-h-29-debug-12\" aria-label=\"Heading link\"></a>2.9 Debug 日志与部署支持</h3>\n<p>为了支持群内测问题定位，本周补充了更完整的 debug 信息，包括节点耗时、LLM 调用摘要、检索证据数量、tool trace、reflection 决策、route decision、终止证据不足原因等。</p>\n<p>同时补充了服务器部署相关支持：</p>\n<ol>\n<li>\n<p>增加 Linux 版 Telegram Bot 重启脚本。</p>\n</li>\n<li>\n<p>调整 <code>environment.yml</code>，去掉 Windows 专属依赖，改成跨平台最小环境。</p>\n</li>\n<li>\n<p>整理 <code>.gitignore</code>，避免提交真实配置、日志、群聊记录、周报、个人文档和缓存文件。</p>\n</li>\n<li>\n<p>清理 GitHub 仓库历史，确保公开仓库不包含敏感信息和本地资料。</p>\n</li>\n<li>\n<p>通过日志确认线上 bot 进程启动时间、加载后端和实际请求链路。</p>\n</li>\n</ol>\n<h2><a name=\"p-24151-telegram-13\" class=\"anchor\" href=\"#p-24151-telegram-13\" aria-label=\"Heading link\"></a>三、Telegram 群内测反馈</h2>\n<p>本周内测的主要价值，是让系统问题从“离线样例中可能发生”变成“真实用户已经遇到”。几个比较典型的反馈包括：</p>\n<ol>\n<li>\n<p>用户希望 Bot 能理解连续追问，而不是每轮都像重新开始。</p>\n</li>\n<li>\n<p>用户不希望 Bot 插入普通群聊。</p>\n</li>\n<li>\n<p>当用户说“我是萌新，你自己决定”时，Bot 应该主动给方案，而不是继续追问。</p>\n</li>\n<li>\n<p>当用户问“有没有靠谱资料”“官方有没有教程”时，Bot 应该主动查官方文档，而不是给知识库证据不足。</p>\n</li>\n<li>\n<p>当用户说“你是不是回复错问题了”时，Bot 
应该识别这是对回答质量的反馈，而不是继续沿着旧问题检索。</p>\n</li>\n<li>\n<p>技术回答需要稳定引用资料来源，否则用户难以判断可信度。</p>\n</li>\n<li>\n<p>对代码示例类问题，即使证据不完整，也应给出清晰的骨架和边界说明。</p>\n</li>\n<li>\n<p>响应时间需要继续优化，尤其是复杂问题经过多节点 graph 后等待时间较长。</p>\n</li>\n<li>\n<p>用户希望可以上传文件、日志或截图，让 Bot 直接分析。</p>\n</li>\n</ol>\n<p>这些反馈说明，当前系统的核心挑战已经不只是“能不能回答”，而是“是否像一个可靠、不过度打扰、能持续理解上下文、能使用资料和附件的群内助手”。</p>\n<h2><a name=\"p-24151-h-14\" class=\"anchor\" href=\"#p-24151-h-14\" aria-label=\"Heading link\"></a>四、测试与验证</h2>\n<p>本周围绕 Telegram 群内测暴露的问题补充和运行了多组回归测试，重点覆盖：</p>\n<ol>\n<li>\n<p>群内同用户上下文读取。</p>\n</li>\n<li>\n<p>不同用户记忆隔离。</p>\n</li>\n<li>\n<p>AskUser checkpoint 按用户恢复。</p>\n</li>\n<li>\n<p>群聊 mention-only 触发策略。</p>\n</li>\n<li>\n<p>未触发消息不写入 memory。</p>\n</li>\n<li>\n<p>多线程并发处理与同一用户消息顺序保证。</p>\n</li>\n<li>\n<p>Telegram <code>sendMessage</code> Markdown/plain/detached fallback。</p>\n</li>\n<li>\n<p>offset 在异常场景下仍推进，避免同一 update 无限重放。</p>\n</li>\n<li>\n<p>多库检索、论坛检索性能和工具 60 秒超时。</p>\n</li>\n<li>\n<p><code>qdrant_search</code> broad resource 查询优先 vector path。</p>\n</li>\n<li>\n<p>source registry 注入、source alias 规范化和空结果 fallback。</p>\n</li>\n<li>\n<p>response-quality feedback 直接回复，不进入检索。</p>\n</li>\n<li>\n<p>post-answer <code>ask_user</code> 防护和 ask_user 硬编码兜底移除。</p>\n</li>\n<li>\n<p>Telegram 文本文件下载、内容注入和图片作为 LLM image input。</p>\n</li>\n<li>\n<p>CSAT、feedback、BadCase 逻辑回归。</p>\n</li>\n</ol>\n<p>阶段性相关回归测试结果为：</p>\n<pre><code class=\"lang-plaintext\">\n192 passed, 2 warnings\n\n</code></pre>\n<p>后续需要把本周真实群测 bad case 继续沉淀为标准评测样例，避免类似问题再次回归。</p>\n<h2><a name=\"p-24151-h-15\" class=\"anchor\" href=\"#p-24151-h-15\" aria-label=\"Heading link\"></a>五、阶段性成果</h2>\n<p>本周完成后，系统从“群内可测”进一步推进到“线上可诊断、可修复、可回归”。主要成果包括：</p>\n<ol>\n<li>\n<p>Telegram 群内测开始形成真实反馈闭环。</p>\n</li>\n<li>\n<p>群内上下文实现按用户隔离。</p>\n</li>\n<li>\n<p>Bot 在群聊中默认只响应明确触发，减少打扰。</p>\n</li>\n<li>\n<p>不同用户问题可以多线程并发处理。</p>\n</li>\n<li>\n<p>GitHub 文档库和 Nervos Talk 论坛库进入 runtime 检索路径。</p>\n</li>\n<li>\n<p>检索 source registry 和 filter 校验让资料检索更稳定。</p>\n</li>\n<li>\n<p>过度追问、回答滞后、答非所问、引用缺失、Markdown 
异常等真实问题得到系统修复。</p>\n</li>\n<li>\n<p>Telegram 发送失败不再导致 update 重放和模型 API 无限调用。</p>\n</li>\n<li>\n<p>Telegram 文本文件和图片开始进入模型上下文。</p>\n</li>\n<li>\n<p>Debug 日志和部署脚本更适合线上排查。</p>\n</li>\n</ol>\n<h2><a name=\"p-24151-h-16\" class=\"anchor\" href=\"#p-24151-h-16\" aria-label=\"Heading link\"></a>六、当前问题</h2>\n<ol>\n<li>\n<p>响应延迟仍需要优化。复杂问题经过多节点 graph、检索、反思和回答生成后，耗时仍偏长。</p>\n</li>\n<li>\n<p>Prompt 仍需要根据真实内测继续调优，特别是“主动给默认方案”和“避免编造”之间的平衡。</p>\n</li>\n<li>\n<p>引用稳定性还需要继续观察，尤其是连续追问、资料链接和代码示例场景。</p>\n</li>\n<li>\n<p>source registry 目前是静态配置，后续最好从实际 archive metadata 自动生成或定期校验。</p>\n</li>\n<li>\n<p>附件支持目前覆盖文本文件和图片，暂不支持 PDF、音频转写和复杂 Office 文档。</p>\n</li>\n<li>\n<p>内测反馈还没有形成结构化问卷和统计指标，目前仍偏人工观察。</p>\n</li>\n<li>\n<p>部署流程还可以继续标准化，例如补充 systemd、健康检查和日志轮转。</p>\n</li>\n</ol>\n<h2><a name=\"p-24151-week-9-17\" class=\"anchor\" href=\"#p-24151-week-9-17\" aria-label=\"Heading link\"></a>七、下周计划（Week 9）</h2>\n<p>下周重点不再是继续堆功能，而是围绕 Telegram 群内测反馈做系统调优、用户评估设计和线上运行稳定性提升。</p>\n<ol>\n<li>整理本周内测反馈和 bad case。</li>\n</ol>\n<p>将过度追问、回答滞后、引用缺失、上下文误用、检索为空、答非所问、发送失败等问题归类，形成可复盘的 bad case 列表。</p>\n<ol start=\"2\">\n<li>根据内测反馈继续调优 prompt 和 graph 行为。</li>\n</ol>\n<p>重点优化小白问题、连续追问、代码示例、公开资料检索、默认假设选择和证据边界说明。</p>\n<ol start=\"3\">\n<li>完善 source registry 和检索 metadata。</li>\n</ol>\n<p>让 planner 更稳定地理解 <code>github_docs</code>、<code>nervos_talk</code> 等 source 的内容范围，并考虑从 archive metadata 自动生成 source 提示，减少手工维护。</p>\n<ol start=\"4\">\n<li>制定 Telegram 内测问卷。</li>\n</ol>\n<p>设计一份简短问卷，用于收集用户对回答质量、上下文理解、响应速度、引用可信度、追问体验、附件分析能力和整体可用性的主观评价。</p>\n<ol start=\"5\">\n<li>建立内测反馈指标。</li>\n</ol>\n<p>初步统计满意度、常见失败类型、用户是否需要重复解释上下文、是否认为 Bot 打扰群聊、是否愿意继续使用等指标。</p>\n<ol start=\"6\">\n<li>将高价值内测样例加入 evaluation dataset。</li>\n</ol>\n<p>把真实群测中暴露出的典型问题转成可重复运行的评测样例，形成“内测反馈 → 测试集 → 回归验证”的闭环。</p>\n<ol start=\"7\">\n<li>继续优化响应延迟。</li>\n</ol>\n<p>基于 node timings 和 LLM trace 分析耗时瓶颈，优先优化简单问题、常见追问和资料类查询的快答路径。</p>\n<ol start=\"8\">\n<li>完善服务器部署与运行监控。</li>\n</ol>\n<p>在现有重启脚本基础上，补充更稳定的守护方式和日志观察流程，降低内测期间人工维护成本。</p>",
          "like_count": 0,
          "quote_count": 0
        }
      ]
    },
    {
      "topic_id": 10232,
      "title": "当 84% 的开发者都在用 AI Coding，CKB 开发者体验的下一步怎么走？（附完整调研与路线图）",
      "slug": "84-ai-coding-ckb",
      "url": "https://talk.nervos.org/t/84-ai-coding-ckb/10232",
      "created_at": "2026-05-06T13:07:08.610000+00:00",
      "last_posted_at": "2026-05-06T13:07:08.768000+00:00",
      "category_id": 32,
      "tags": [],
      "posters": [
        "Original Poster, Most Recent Poster"
      ],
      "recent_posts": [
        {
          "post_id": 24150,
          "post_number": 1,
          "topic_id": 10232,
          "topic_title": "当 84% 的开发者都在用 AI Coding，CKB 开发者体验的下一步怎么走？（附完整调研与路线图）",
          "topic_slug": "84-ai-coding-ckb",
          "author": "yixiu.ckbfans.bit",
          "created_at": "2026-05-06T13:07:08.768000+00:00",
          "updated_at": "2026-05-06T15:29:10.710000+00:00",
          "reply_to_post_number": null,
          "url": "https://talk.nervos.org/t/84-ai-coding-ckb/10232/1",
          "content_text": "摘要： AI 辅助编程现已成为软件开发的主流，84% 的开发者已将 AI 工具融入工作流。本报告系统性地审视了这一趋势下 CKB 生态在 AI 覆盖上的现状与差距——从开发者工具选择、AI 检索机制、到 Solana/Ethereum 的 AI First 最佳实践对标分析。核心发现包括：CKB 在 AI 可消费性上存在严重缺口（无 llms.txt、无官方 MCP、无 Skills等），Context7 代码片段量仅为竞品链 SDK 的 1/22–1/41；同时 Bolt.new 上的实测也表明，AI 工具已具备基础 CKB 编程能力，但在高级场景中仍会出现幻觉或功能性错误 。这不是一个可行性问题，而是一个准确性和丰富度的问题。结合以上信息最终给出我们认为最有价值的改进建议。\n以下是完整报告。\n目录\n1. 开发者 AI 编程习惯现状\n2. AI 处理开发者问题的检索习惯与流程\n3. Solana 与 Ethereum 的 AI First 实践与成效\n4. CKB 在 AI 平台上的现有覆盖评估\n5. 改进建议与行动计划\n6. 实施路线图\n附录\n1. 开发者 AI 编程习惯现状\n1.1 宏观数据\n84% 的开发者 使用或计划使用 AI 编程工具（2025 Stack Overflow 开发者调研）。其中，51% 的专业开发者每天使用 AI 工具，另有 17.7% 每周使用。\n41% 的代码 由 AI 生成或辅助完成。\nGitHub Copilot 活跃用户中，46% 的代码 由 AI 贡献（2022 年发布时仅为 27%）\nimage2880×1508 435 KB\n1.2 主流工具分类与市场格局\n开发者使用的 AI 编程工具可归为四大类：\n类型\n代表工具\n特点\n市场地位\nAI 原生 IDE\nCursor, Windsurf, Google Antigravity 等\n从零构建，AI 作为核心交互层；全项目上下文感知；深度 Agent 能力\n增长最快，Cursor 估值约 $50B\nIDE 插件/扩展\nGitHub Copilot, Cline, Augment Code, Amazon Q\n集成到现有编辑器（VS Code / JetBrains）；尽可能延续开发者现有工作流\n用户量最大，Copilot 用户超 2000 万\n终端/CLI Agent\nClaude Code, Aider\n命令行操作，无 GUI；自主性最强；适合复杂架构任务\n技术深度型开发者偏好\nWeb 平台\nBolt.new, Replit\n浏览器内运行，无需本地环境；适合快速原型和非开发者\n入门级 / 原型场景\n1.3 开发者工具选择的关键趋势\nAgent 模式成为标配 ：到 2026 年初，所有主流工具都已具备某种程度的 Agent 能力（自主推理、拆分子任务、执行代码、运行测试、自我纠错）。\n全项目上下文是核心竞争力 ：开发者最看重的技术标准是，工具能否理解整个项目，而非仅当前打开的文件。Cursor 与 Windsurf 的全代码库索引能力正是其核心卖点。\n多工具并用是常态 ：开发者可能用 Claude Code 处理复杂架构，用 Cursor 进行日常编码，再用 Copilot 完成快速补全。这意味着需要适配多种工具。\nMCP（Model Context Protocol）快速普及 ：MCP 是连接 AI 助手与外部数据源的开放标准，由 Anthropic 于 2024 年 11 月发布，已被 OpenAI、Google、Microsoft 等主流厂商采纳。截至 2025 年底，MCP 服务器累计下载量已超过 800 万，生态内服务器数量超过 5800 个。\n1.4 开发者评估 AI 工具的六大标准\n上下文窗口与代码库理解能力 ——能否理解整个项目。\nAgent 深度 ——是只能建议代码片段，还是能自主编写、测试、修复并提交。\n模型灵活性 ——是否绑定单一 AI 服务商。\n定价模型 ——订阅制 / 按量付费 / 自有密钥付费。\n编辑器集成 ——是否需要迁移现有工作流。\n企业合规 ——SOC 2、数据驻留等。\n1.5 对 CKB 目标开发者的启示\nCKB 的目标开发者群体（Web3 原生 + Web2 转型）在使用 AI 工具时呈现出不同的特点：\nWeb3 原生开发者 ：倾向使用 Cursor / Windsurf 等 AI 原生 IDE，重度依赖 Agent 模式处理复杂的链上逻辑；也会使用 Claude Code 等终端工具来应对底层架构。\nWeb2 转型开发者 ：更多使用 
GitHub Copilot（学习成本低），依赖 AI 来弥补区块链知识差距；对文档检索和代码示例的依赖度极高。\n2. AI 处理开发者问题的检索习惯与流程\n2.1 AI 编程助手的分层检索机制\n当开发者向 AI 提问时（如\"如何在 CKB 上创建一个转账交易？\"），AI 按以下优先级检索信息：\n第 1 层：项目本地上下文（最高优先级）\n├── 项目配置文件（CLAUDE.md / .cursorrules / AGENTS.md 等）\n├── 当前打开的文件 & 最近编辑的文件\n├── 项目依赖（package.json / Cargo.toml 等）\n└── 全代码库向量索引（语义搜索）\n↓\n第 2 层：MCP 服务器（外部结构化数据源）\n├── Context7 MCP — 9000+ 库的实时文档检索\n├── DeepWiki MCP — GitHub 仓库的 AI 文档\n└── 自定义 MCP 服务器\n↓\n第 3 层：AI 模型训练数据（内置知识）\n├── 公开文档、教程、Stack Overflow 等\n└── GitHub 公开仓库代码\n↓\n第 4 层：Web 搜索（实时补充）\n├── 搜索引擎结果\n└── llms.txt / llms-full.txt 文件\n2.2 各检索层的详细工作原理\n第 1 层：项目本地上下文\nAI 编程助手会首先索引开发者的整个项目：\n向量化索引 ：Cursor 将整个项目编码为向量存储，Windsurf 的远程索引引擎可处理超大代码库。\n项目规则文件 ：这是开发者向 AI 注入项目特定知识的关键入口。\n配置文件\n适用工具\n格式\nCLAUDE.md\nClaude Code\nMarkdown，支持 5 层继承\n.cursorrules\nCursor\n纯文本/Markdown\ncopilot-instructions.md\nGitHub Copilot\nMarkdown\n.windsurfrules\nWindsurf\n纯文本/Markdown\nAGENTS.md\nOpenAI Codex/ChatGPT\nMarkdown\n第 2 层：MCP 服务器\nMCP 的工作流程如下：\n请求检测 ：AI 分析开发者的 prompt，识别涉及的库或框架。\n文档检索 ：MCP 服务器查询其索引的文档数据库。\n智能排序 ：通过排名算法筛选最相关的文档片段。\n上下文注入 ：将检索到的文档注入 prompt 的上下文窗口中。\n生成响应 ：LLM 基于最新、准确的文档生成代码。\nContext7 的工作方式 ：\n为超过 9000 个库的文档和代码示例建立索引。\n通过 resolve-library-id 和 get-library-docs 两个工具向 AI 客户端暴露数据。\nCKB CCC 库已收录于 Context7，包含 141 个代码片段，基准分 85.1。\nDeepWiki 的工作方式 ：\n为 GitHub 仓库自动生成 AI 驱动的交互式文档。\n通过 MCP 提供 read_wiki_structure 、read_wiki_contents 、ask_question 等工具。\nCKB CCC 已有完整的 Wiki 结构（六大章节，覆盖核心层、集成层、钱包集成、协议扩展等）。\n第 3 层：训练数据\nAI 模型的训练数据有时效性问题。对于 CKB 这样的小众链，训练数据中的 CKB 信息量远少于 Ethereum/Solana。这使得第 1、2 层的信息供给尤为关键——如果这两层没有足够准确的信息，AI 就会用训练数据里有限且可能过时的知识来\"推断\"答案。\n第 4 层：llms.txt 标准\nllms.txt 是一个新兴标准（类似 robots.txt），以 Markdown 格式提供网站内容的结构化摘要。\nllms-full.txt 则包含完整内容，供 AI 模型直接消费。\n已被 Vercel、Anthropic、Windsurf（Mintlify）等采用。\n可将 AI 获取文档时的 token 消耗降至十分之一。\n2.3 AI 处理开发者问题的典型流程\n开发者提问：\"如何用 CCC 在 CKB 上创建一个 UDT 转账？\"\n↓\n[1] 分析意图：识别关键词 CCC / CKB / UDT / 转账\n↓\n[2] 搜索本地上下文：检查项目依赖与现有代码\n↓\n[3] 查询 MCP 服务器：Context7 获取 CCC 文档 / DeepWiki 获取仓库架构\n↓\n[4] 补充训练数据中的 CKB 知识\n↓\n[5] 信息仍不足 → 
触发 Web 搜索\n↓\n[6] 综合上下文，生成代码答案\n↓\n[7] 验证与迭代：自动运行 / 检查错误 / 修复\n关键痛点 ：如果在第 2–4 层都找不到足够准确的 CKB 信息，AI 极易产生“幻觉”——生成看似合理但实际错误的代码，这将严重损害开发者对工具的信任和开发效率。\n3. Solana 与 Ethereum 的 AI First 实践与成效\nSolana 和 Ethereum 是目前在 AI First 开发者体验方面投入最大、成效最显著的两条链。\n3.1 Solana：AI First 的标杆\nSolana 基金会已将 AI 辅助开发确立为一等战略 ，构建了业内最为完整的 AI 开发者工具链。\n3.1.1 官方 Solana Developer MCP（mcp.solana.com）\nSolana 基金会维护的官方 MCP 服务器，可直接集成到 Cursor、Windsurf、Claude Code 等 AI IDE 中：\n实时文档检索 ：自动查询 Solana 和 Anchor Framework 最新文档。\n账户查询 ：直接查询链上账户信息。\n交易分析 ：解析交易详情。\nCPI 语句生成 ：自动生成跨程序调用代码。\n部署方式 ：公共服务，地址为 mcp.solana.com/mcp，开发者仅需一行配置即可使用。\n3.1.2 AI Coding Skills 生态\nSolana 提出了 “Skills”模式 ——一套精心编写的指令集，旨在教会 AI 如何成为 Solana 专家开发者。这是极具创新性的做法：\nSkill 名称\n维护方\n覆盖领域\nsolana-dev-skill\nSolana Foundation\n端到端 Solana 开发：钱包连接、Anchor 程序、测试、安全\nhelius-phantom-skill\nHelius Labs\n前端 dApp 开发：React/RN + Phantom Connect SDK\nmetaplex-skill\nMetaplex Foundation\nNFT 开发：Core NFTs、Candy Machine、Umi/Kit SDK\nsolana-anchor-claude-skill\nQuickNode Labs\nAnchor 合约开发 + 测试（LiteSVM）\nsolana-game-skill\n社区\n游戏开发：C#、React Native、Unity SDK\nsolana-skills-plugin\n社区\n安全审计 + ZK Compression + 漏洞检测\npinocchio-skill\nSendAI\n高性能 Solana 程序（可减少 88-95% 计算单元）\nvulnhunter-skill\nSendAI\n安全漏洞检测与变体分析\nSkills 的关键特点 ：\n模型无关 ：兼容 OpenAI API、Claude API、Cursor Rules、ChatGPT 自定义指令。\n可组合 ：Skills 可以叠加使用。\n社区贡献机制 ：通过 TypeForm 提交新 Skill，形成开放生态。\nSKILL.md 标准 ：便携式技能文件，可被任何 Agent 框架加载。\n3.1.3 Helius 生态工具链（基础设施商的典范）\nHelius 作为 Solana 的基础设施商，构建了完整的 AI 开发者工具套件：\n工具\n功能\n规模\nHelius MCP Server\n结构化 API 调用（getBalance、parseTransaction 等）\n60+ 工具\nHelius Skills\n专家指令集（Build / DFlow / Phantom / SVM）\n4 大领域\nHelius CLI\n命令行工具，支持 --json 输出\n95+ 命令\nClaude Code Plugin\n一键安装，集成 MCP + Skills + 参考文件\n全家桶\n安装仅需一行命令：\nclaude mcp add helius npx helius-mcp@latest\n3.1.4 awesome-solana-ai 策展仓库\nSolana Foundation 维护的 awesome-solana-ai 仓库，系统性收录了：\nAI Coding Skills ：10+ 官方/社区 Skills（通用、DeFi、基础设施）。\nAI Agents ：12+ 链上 AI Agent 框架（Solana Agent Kit、Eliza、GOAT 等）。\nDeveloper Tools ：15+ 款 AI 增强开发工具（MCP 
服务器、审计工具、IDE 等）。\nLearning Resources ：学习资源汇总。\n3.1.5 官方 AI 开发指南\nSolana 官方文档站（solana.com）设有专门 AI 开发入门指南，涵盖：\n如何在 IDE 中配置 Solana MCP。\n如何利用 AI 工具构建 Solana 应用。\nAI Agent 开发框架介绍。\n最佳实践。\n3.2 Ethereum：生态驱动的 AI 友好化\nEthereum 虽未像 Solana 那样由基金会统一推动 AI First 战略，但凭借庞大的生态和社区，其 AI 可消费性已相当成熟。\n3.2.1 llms.txt 标准的先行者\nEthereum 核心库已全面采用 llms.txt 标准：\n项目\nllms.txt\nllms-full.txt\nwagmi.sh\n/llms.txt\n/llms-full.txt\nviem.sh\n/llms.txt\n/llms-full.txt\nwagmi 在首页醒目位置展示了 AI 友好标识：\"Are you an LLM? View /llms.txt for optimized Markdown documentation, or /llms-full.txt for full documentation bundle \"。\nimage1920×940 232 KB\n3.2.2 Context7 上的压倒性覆盖\n以 viem（Ethereum TypeScript 接口库，在 CKB 生态中可与 CCC 类比）为例：\n指标\nviem (Ethereum)\nCCC (CKB)\n差距倍数\nContext7 代码片段数\n3,090 - 5,827\n141\n22x - 41x\n基准分\n90.35\n85.1\n1.06x\n来源信誉\nHigh\nMedium\n—\nllms.txt 独立索引\n（单独条目）\n—\nviem 在 Context7 上拥有多个索引条目（GitHub 仓库 + llms-full.txt + 文档站），总代码片段数超过 10,000 ，而 CCC 仅有 141 个。这意味着 AI 在生成 Ethereum 代码时拥有 70 倍以上 的参考素材。\n3.2.3 专属 MCP 服务器\nEthereum 生态已有多个专属 MCP 服务器：\nViemCP ：专为 viem & wagmi 构建的 MCP 服务器\n内嵌代码模式（零延迟，离线可用）。\n链上数据查询（余额、交易、合约状态）。\nwagmi React hooks 代码生成。\n实时文档访问。\nOnChains Dev MCP ：聚焦 viem 与 wagmi 文档的轻量级 MCP。\nEVM MCP Tools ：通用 EVM 链交互工具。\nRemix IDE RemixAI ：Solidity 智能合约的内置 AI 助手与 Copilot。\n3.2.4 丰富的 AI 配置模板\nEthereum 社区在 GitHub 上已有大量 Solidity/.cursorrules 模板流传，覆盖：\nSolidity + Hardhat 开发。\nSolidity + Foundry 开发。\nReact + wagmi + viem 前端开发。\nOpenZeppelin 合约模式。\n3.2.5 AI Agent 生态\nEthereum 在生态构建方面更进一步，其开发者门户（ethereum.org）已设有两个专门的策展页面：\nimage2290×1422 161 KB\n“Build onchain with agents” 专区，为 AI 代理堆栈提供结构化的以太坊知识。\nAI Agents 专题策展页面 ，系统性地介绍链上 AI Agent 生态。\n同时，社区还维护着 ETH Skills，这是一个专门为 AI Agent 提供实时以太坊开发文档的知识库，内容涵盖 Gas 成本、Solidity 模式、Layer 2、DeFi 组合性、安全、测试与生产部署等关键领域。\n3.3 其他链的 AI First 速览\n除了 Solana 和 Ethereum 的深度布局，越来越多的公链已直接将“Ask AI”或类似的文档查询助手内建至开发者门户。纵观它们的实践，将文档对内对外全面“打开”，已成为一种无需言明的新标配 。以下列举了其中一些链的主要动作：\n链\n核心工具/功能\n亮点 / 当前状态\nBNB Chain\nMCP、Ask AI MCP （IDE 内文档查询）\n推出了以 MCP（模型上下文协议）为基础的专门服务。\nPolkadot\nDocs MCP 
、llms.txt 、SKILL.md、 Polkadot AI Chatbot\n一套AI resource，全面启用 MCP 集成，遵循新兴的 llms.txt 行业标准并提供分卷文档，并包含 Substrate MCP 及适用于智能合约开发的 Skill\nSui Network\nMCP 、** Ask Sui AI**\nNEAR Protocol\nllms.txt 、Docs MCP 、Agent Skills/Kit\n提供指向全文档的快链，专供 AI 编程助手使用，并为 Agent 开发提供了全栈组件（Near API JS、合约开发等技能）\nCosmos\nAsk AI\n由mintlify提供的AI机器人\nAptos\nAskAptos（聊天机器人） 、Agent Skills 、llms.txt\n在官方文档站显著位置内嵌 GPT 聊天机器人，并配合 Azure OpenAI 服务为开发者提供引导；同时全面发布 llms.txt\n4. CKB 在 AI 平台上的现有覆盖评估\n4.1 已有覆盖\ndeepwiki 平台\nimage2104×1730 236 KB\ncontext7 平台\nimage1916×2450 351 KB\n共同问题： 未及时同步代码到这两个平台，会导致 AI 拿到过时的信息。\n4.2 实测结果\n测试过程 ：分别在 bolt.new 和 replit.com 上使用相同的指令生成应用：\n编写一个 CKB 转账的 dApp 应用。\n二者的表现 ：\nbolt.new 一次性成功，交易可上链，并支持切换 testnet 与 mainnet，功能较完备\nReplit 未实现 testnet/mainnet 切换，首次发起交易时报 signer.client.addCellDepsOfKnownScripts is not a function 错误，将错误信息反馈给它后，它重新阅读 CCC 信息并修改代码，交易发送成功。\nimage1512×2852 358 KB\nDemo 链接（仅用于演示，请勿用它进行主网资产的交易） ：\nbolt.new: https://ckb-transfer-dapp-de-cffo.bolt.host/\nreplit: ckb-transfer-dapp–ckbfansdao.replit.app\n共同短板：两个平台对 MAX（最大可用金额）和 MIN（最小转账金额）的计算均出错——这类边界场景在现有 AI 训练数据和文档示例中覆盖不足。\n初步判断：AI 工具已经可以完成基础的 CKB 开发任务；但在涉及 CKB 特有概念（capacity 计算、cell 占用逻辑等）的高级场景下，AI 的知识明显不足，且难以自我纠正。\n4.3 缺口分析\n4.3.1 严重缺口（直接影响开发者体验）\n无 llms.txt / llms-full.txt ：docs.nervos.org 未提供 llms.txt，AI 工具无法高效获取官方文档。\n无项目规则文件模板 ：没有为 CKB 项目提供 .cursorrules / CLAUDE.md 等模板，开发者的 AI 助手缺少 CKB 特定上下文。\n无自定义 MCP 服务器 ：没有专门的 CKB MCP 服务器来提供 RPC 文档、Script 模板、错误码解释等结构化信息。\nScript 开发文档在 AI 平台上极度匮乏 ：ckb-std 仅 2 个代码片段，Rust/C Script 开发几乎无可供 AI 消费的结构化知识。\n参考示例不足 ：不仅需要提供正常情况下的代码示例，边界场景、高级用法的示例代码会显得尤为重要。\n4.3.2 中等缺口\n缺少 AI 友好的错误信息映射 ：CKB 的错误码与常见错误尚无结构化的解释文档可供 AI 检索。\n缺少端到端教程的 AI 友好版本 ：现有教程以 HTML 页面为主，而非 Markdown 或结构化格式。\n多语言 SDK 覆盖不均 ：Java SDK、Go SDK 等在 AI 平台上的覆盖较弱。\n4.3.3 轻度缺口\n社群问答未结构化 ：Discord/Telegram 中的高价值问答未被收集与结构化。\n缺少 CKB 特定的 AI 提示词工程指南 ：开发者不清楚如何更好地向 AI 描述 CKB 相关需求。\n4.4 对比总结：CKB vs Solana vs Ethereum\nAI First 维度\nSolana\nEthereum\nCKB\n官方 MCP 服务器\nmcp.solana.com（基金会维护）\n多个社区 MCP（ViemCP 等）\n无\nllms.txt\n文档站未确认\nwagmi/viem 
全面支持\n无\nAI Coding Skills\n10+ 官方/社区 Skills\n多个 Skills，分门别类：ethskills.com 、github.com/austintgriffith/ethskills\n无\nAI 配置文件模板\n官方提供系统提示词示例\n社区分享 .cursorrules\n无\nContext7 代码片段\n丰富（多个 SDK）\n极丰富（viem 5800+）\n较少（CCC 141）\nAI 开发者指南\n官方文档站专页\n开发者页面设有 “Build onchain with agents” 专区：ethereum.org/developers/\n无\nAI Agent 生态\nSolana Agent Kit，12+ Agent\n专题页面策展：ethereum.org/ai-agents/\n无\nawesome-*-ai 策展\nawesome-solana-ai\n首页专区策展：ethereum.org/ai-agents/\n无\n基础设施商 AI 工具\nHelius 全家桶\nQuickNode MCP 等\n无\nClaude Code Plugin\nHelius 官方插件\n社区提供多个，例如 github.com/0xGval/evm-mcp-tools\n无\n4.5 从 Solana/Ethereum 等链的经验中 CKB 可借鉴的关键启示\n官方 MCP 服务器是基础中的基础 ：Solana 基金会亲自维护 mcp.solana.com，这是 AI First 战略的核心锚点。CKB 应将类似官方 MCP 服务器作为优先构建项。\nSkills/配置文件模板是成本效益比最高的投入 ：Solana 的 Skills 生态证明，精心编写的指令集能显著提升 AI 生成代码的准确性。CKB 应当立即着手创建等效的 .cursorrules / CLAUDE.md / SKILL.md 。\n通用标准达成共识，文档生产力彻底解放 ：越来越多的链开始兼容 MCP 协议，并发布自己的 llms.txt 。wagmi 在首页提示 “Are you an LLM?” 的做法值得借鉴，这让所有 AI 工具都能自动发现和消费文档。CKB 必须加速追赶这种基建潮流，否则我们精心撰写的文档在这些链面前就如同未被索引的孤岛，完全无法进入 AI 的“检索视野”。\n策展仓库建立生态认知 ：awesome-solana-ai 不仅是资源列表，更向开发者传递了“Solana 认真对待 AI 开发”的信号。CKB 可考虑在补齐相关AI toolkits 后建立类似的 awesome-ckb-ai 仓库。\n基础设施商协同是力量放大器 ：Helius 为 Solana 构建的 AI 工具链展示了基础设施商如何参与 AI First 生态。CKB 可鼓励生态伙伴构建类似工具。\n代码片段数量决定 AI 代码质量 ：viem 的 5800+ 代码片段 vs CCC 的 141 个，直接决定了 AI 生成代码的准确性与多样性。CKB 需要大幅增加文档中可运行的代码示例。\n“Ask AI”成为顶级生态的通用语言 ：BNB Chain、Polkadot、Aptos 和 Near 都在官网或 IDE 中内建了“Ask AI”或“Docs”助手。这意味着开发者遇到问题时，无需在不同的 Google 标签页间大海捞针，直接在开发者体验闭环内就能找到答案。对 CKB 而言，这极大地降低了新开发者的入门门槛。\n这些发现进一步印证：CKB 必须快速补齐我们在文档 AI 友好度和“开发者即时问答”等体验上的短板。接下来将在下一章节列出详细的行动建议。\n5. 
改进建议与行动计划\n5.1 【P0】确保所有关键仓库已提交至 DeepWiki 与 Context7\n目标 ：让所有 AI 工具能高效、准确地获取到 CKB 相关的知识。\n具体行动 ：\n提交：比对目前 DeepWiki、Context7 上已有的 CKB 仓库，查缺补漏。\n更新：有的仓库虽然已上传到这两个平台，但信息已过时，需做更新处理。\nDeepWiki 目前不支持 GitHub Action 方式更新，只能人工手动处理；\nContext7 上的仓库可以添加 GitHub Action 来实时更新。\n预期效果 ：开发者在 AI 编程工具里配置上 DeepWiki 与 Context7 两个 MCP 之后，在开发过程中询问到 AI 关于 CKB 的问题，AI 可以通过查询 MCP 服务器拿到更准确的答案。\n5.2 【P0】为 docs.nervos.org 添加 llms.txt\n目标 ：让所有 AI 工具能高效、准确地获取 CKB 官方文档。\n具体行动 ：\n在 docs.nervos.org 根目录添加 llms.txt （结构化目录 + 关键链接）。\n添加 llms-full.txt （完整文档的 Markdown 版本）。\n内容应覆盖：\nCKB 核心概念（Cell 模型、Script、Transaction）。\n开发入门指南。\nSDK 使用指南（CCC / Rust SDK / Go SDK / Java SDK…）。\nRPC 参考。\n常见错误与解决方案。\n部署指南。\n参考实现 ：\nVercel docs: https://vercel.com/docs/llms-full.txt\nAnthropic docs 使用 Mintlify 生成 llms.txt\nGitHub 仓库提交到 Context7 之后，Context7 会自动生成 llms.txt 文档，可将其复制放入 docs.nervos.org 站点中。\n预期效果 ：所有 AI 编程工具在搜索 CKB 相关信息时，均可通过 llms.txt 快速获取最新、准确的官方文档，大幅减少幻觉。\n5.3 【P0】创建 AI 配置文件模板（.cursorrules / CLAUDE.md / SKILL.md）\n目标 ： 让使用任何 AI 工具的开发者在创建 CKB 项目时即获得 CKB 特定上下文。\n具体行动 ：创建一套标准的 AI 配置文件模板，覆盖所有主流工具：\nckb-project-template/\n├── .cursorrules # Cursor\n├── .windsurfrules # Windsurf\n├── .github/\n│ └── copilot-instructions.md # GitHub Copilot\n├── CLAUDE.md # Claude Code\n├── AGENTS.md # OpenAI Codex\n└── .cursor/\n    └── rules/ # Cursor Rules (新版)\n配置文件核心内容应包括：CKB Cell 模型说明（强调与 EVM 的区别）、CCC SDK 推荐用法、Transaction 构建模式、常见错误及解决方案、重要参考链接等。\n分发渠道：内置到 create-ccc-app 模板和 offckb create 命令，在官方文档站提供下载，在 GitHub 仓库 README 中醒目引导。\n5.4 【P0】构建 CKB 专属 MCP 服务器\n目标 ：提供 CKB 生态的结构化、实时、可查询的文档服务，并使 AI Agent 能够查询 CKB 链上数据。\n现状 ：\n目前由 @jm9k 牵头实施的项目：github.com/sonami-tech/ckb-mcp。\n使用 Rust 编写，master 分支（主分支）最近一次更新在 8 个月前，develop 分支最近一次更新在两个月前。\nREADME 推荐搭配 Claude 使用，但使用 Windsurf 实测时发现兼容性不佳，会报授权错误。\n具体行动 ：\n大而全路线 ：基于 @modelcontextprotocol/sdk （TypeScript）构建 CKB MCP 服务器，内容方面参考 Jordan 的 ckb-mcp 项目实现，使用 TypeScript 语言可以降低贡献者的门槛。优点：开发者仅需安装一个 MCP；难点：MCP 开发人员需具备多面手能力，或需多团队协同。\n小而专路线 ：针对特定领域创建专门的 MCP，如 docs、rpc、ccc、script 等。优点：MCP server 
职责明确，可由特定团队/人员负责编写与维护；缺点：对开发者不够友好，需安装多个 MCP 服务。\n技术实现参考 ：\n基于 @modelcontextprotocol/sdk 构建 TypeScript MCP 服务器。\n后端使用 RAG 管道，向量化所有 CKB 文档。\n部署为公共服务，开发者只需在 IDE 配置中添加 MCP 服务器地址即可。\n参考案例 ：\nsolana-mcp-official\nbnbchain-mcp\n5.5 【P1】增强 Script 开发在 AI 平台上的覆盖\n目标 ：让 AI 能准确指导 CKB Script（智能合约）开发。\n具体行动 ：\n充实 ckb-std 文档 ：当前在 Context7 上仅有 2 个代码片段，需大幅扩充：\n常用 syscall 使用示例（load_cell_data、load_script 等）。\n完整的 Script 开发从零到部署教程。\n常见 Script 模式（lock script / type script / 多签等）。\n创建 Script 示例仓库 ：专门存放各类 Script 示例，提交至 DeepWiki 与 Context7。\n提升 ckb-std 的 README 与文档质量 ：AI 平台会从 GitHub 仓库的 README、docs 目录、代码注释中提取信息，需确保这些内容丰富且结构化。\n5.6 【P1】优化现有仓库的 AI 可消费性\n目标 ：让 DeepWiki 和 Context7 能提取更多、更高质量的信息。\n具体行动 ：\n增强 README 文档 ：\n每个仓库的 README 应包含“Quick Start”章节，提供可复制粘贴的代码示例。\n添加常见用例的代码片段。\n确保所有公共 API 均配有文档注释。\n增加代码示例文件 ：\n在仓库中创建 /examples 或 /docs/examples 目录。\n每个示例都应独立可运行。\n覆盖最常见的开发场景。\n规范化代码注释 ：\n使用 JSDoc/TSDoc（TypeScript）或 rustdoc（Rust）格式。\n确保所有公共方法均包含参数说明与返回值说明。\nAI 平台会从代码注释中提取 API 用法信息。\n确保重要仓库均已提交 DeepWiki 和 Context7\n提升 Context7 上的每个Library的benchmark分数\nbenchmark 分数越高，AI获取到的内容越准确\n5.7 【P1】创建 awesome-ckb-ai 策展仓库（参考 awesome-solana-ai）\n目标 ：建立 CKB AI 开发者生态的统一入口，向开发者传达 CKB 认真对待 AI First 战略的信号。\n具体行动 ：\n在 GitHub 上创建 ckb-devrel/awesome-ckb-ai 仓库。\n分类收录：\nAI Coding Skills ：CKB 开发的 .cursorrules / CLAUDE.md / SKILL.md。\nMCP Servers ：CKB 官方 MCP + 社区 MCP。\nDeveloper Tools ：AI 增强的 CKB 开发工具。\nLearning Resources ：AI 友好的教程与文档。\nAI Agents ：基于 CKB 的 AI Agent 项目。\n设立社区贡献机制，鼓励生态开发者提交 PR。\n参考 ：awesome-solana-ai\n注 ：此项虽重要，但考虑到目前 CKB 上 AI 相关资源尚少，建议待资源丰富一些后再行实施。\n5.8 【P2】建立 AI 友好的错误信息知识库\n目标 ：当开发者遇到 CKB 错误并向 AI 求助时，AI 能给出准确的诊断与解决方案。\n具体行动 ：\n收集 CKB 开发中最常见的 50+ 条错误信息。\n为每条错误创建结构化文档（错误码 → 原因 → 解决方案 → 代码示例）。\n格式化为 Markdown，便于 AI 消费。\n集成至 MCP 服务器的 explain-error-code 工具中。\n同步至 docs.nervos.org 的 llms.txt 中。\n示例格式 ：\n## Error: TransactionFailedToVerify - ValidationFailure(-31)\n**含义**: Script 验证失败，group index 指向的 Script 执行返回非零退出码。\n**常见原因**:\n1. Lock Script 签名验证失败 — 使用了错误的私钥或地址。\n2. 
Type Script 逻辑错误 — 自定义 type script 的业务逻辑未满足。\n3. 容量不足 — 输出 Cell 的 capacity 不足以覆盖其占用空间。\n**调试步骤**:\n1. 使用 `ckb-debugger` 本地运行 Script 进行调试。\n2. 检查 `tx.witnesses` 是否已正确设置。\n3. 确认所有输入 Cell 的 lock script 对应的签名均已提供。\n**代码修复示例**:\n...\n5.9 【P2】创建 AI 友好的端到端教程\n目标 ：提供完整的、AI 可消费的教程内容。\n具体行动 ：\n将现有教程转换为 Markdown 格式，存放于专门的 GitHub 仓库中。\n教程应覆盖：\nCKB 基础：创建钱包、CKB 转账。\n进阶：部署 Script、创建 UDT、使用 Spore Protocol。\n集成：连接各类钱包（MetaMask/JoyID/UniSat 等）。\n高级：DAO 操作、跨链桥接。\n每个教程均应提供可直接运行的完整代码。\n提交至 DeepWiki 与 Context7。\n参考示例：dob-cookbook\n5.10 【P2】结构化社群 FAQ\n目标 ：将 Discord/Telegram 中反复出现的问题转化为 AI 可检索的知识。\n具体行动 ：\n定期从社区中收集高频问题。\n创建结构化 FAQ 文档（GitHub 仓库）。\n格式化为 Q&A 对，便于 AI RAG 检索。\n可考虑借助 AI 自动从社群消息中提取与归类问题。\n5.11 【P2】创建 CKB AI 开发者指南（参考 Solana 官方 AI 指南）\n目标 ：教会开发者如何更好地利用 AI 进行 CKB 开发。\n具体行动 ：\n撰写“如何用 AI 高效开发 CKB 应用”指南。\n内容包括：\n推荐的 AI 工具配置（如何配置 Context7 MCP、DeepWiki MCP）。\n如何写好 CKB 相关的 AI 提示词。\nCKB 项目的 AI 配置文件模板使用指南。\n常见 AI 幻觉陷阱及如何避免。\n6. 实施路线图\n6.1 Phase 1 — 低成本高收益（追平基线）\n#\n行动项\n参考对标\n预期影响\n1\n确保所有关键仓库已提交至 DeepWiki 与 Context7\n—\n2\n为 docs.nervos.org 添加 llms.txt / llms-full.txt\nwagmi.sh / viem.sh\n3\n创建 AI 配置文件模板（.cursorrules / CLAUDE.md / SKILL.md）\nSolana Skills 生态\n4\n创建 awesome-ckb-ai 策展仓库\nawesome-solana-ai\n目前 AI 相关资源有限（可先收集素材后期进行）\n6.2 Phase 2 — 内容增强（缩小差距）\n#\n行动项\n参考对标\n预期影响\n5\n充实重要仓库文档与示例\nviem 5800+ 片段\n6\n优化各仓库 README 与代码注释\nviem/wagmi 文档质量\n7\n创建错误信息知识库\nPhase 3（1–2 月）— 基础设施建设（建立竞争力）\n#\n行动项\n参考对标\n预期影响\n8\n构建 CKB 官方 MCP 服务器（mcp.ckb.dev）\nmcp.solana.com\n9\n创建 AI 友好的端到端教程\nSolana AI 开发指南\n10\n编写 CKB AI 开发者指南\nsolana.com/developers/guides/getstarted/intro-to-ai\n11\n结构化开发者社区 FAQ\nPhase 4（持续）— 生态维护与扩展\n#\n行动项\n参考对标\n预期影响\n12\n定期更新所有 AI 平台上的内容\n—\n13\n监控 AI 工具对 CKB 信息的回答质量\n—\n14\n鼓励生态伙伴构建 AI 工具（类似 Helius 之于 Solana）\nHelius AI 全家桶\n15\n探索 CKB Agent Kit（连接 AI Agent 到 CKB 协议）\nSolana Agent Kit\n写在最后\n这份报告的逻辑起点其实很朴素： AI 已悄然成为开发者与陌生技术之间最重要的那层中介。如果 AI 不理解 CKB，开发者也就很难顺畅地上手 CKB。\n好在，这是一个可以被系统性解决的问题。Solana 在一年多前面临着几乎一样的处境，他们通过官方 MCP、Skills 生态和 llms.txt 等一系列投入，明显改善了 AI 对 Solana 的理解质量。CKB 
完全可以走同样的路径，而且起点并不低——DeepWiki 和 Context7 的基础覆盖已经到位，bolt.new 上的实测也验证了基础场景已能跑通。\nPhase 1 的三件事（补全仓库索引、AI 配置文件模板、llms.txt）的工作量预计不到两周，但能让每一个用 Cursor 或 Claude Code 开发 CKB 的人立即受益。这里是一个很不错的开始。\n值得一提的是，这份调研报告本身就是与 AI 协同完成的：我们让 Gemini Pro 和 Claude Sonnet 两个模型交叉验证了 CKB 在 AI 编程方面的缺口与行动计划，因为 AI 天然更清楚自己还“缺什么”。在整个过程中，@Hanssen 和 @RetricSu 以及几位来自 DevRel 团队的小伙伴为报告提供了不可或缺的校准与反馈，在此深表感谢。\n最后，想邀请你一起参与两件事：\n如果你正在用 AI 开发 CKB 应用 ，欢迎在评论区分享你遇到的 AI 幻觉案例——那些 AI 一本正经地胡说八道的真实瞬间。每一个例子都会直接帮助我们把 CKB 的 AI 开发体验打磨得更准确。\n如果你发现报告中还有遗漏的 AI 相关实践或方向 ，也欢迎在评论区留言斧正，我们会持续跟踪迭代。\n感谢每一位读到这里的你。\n附录\nA. Solana AI First 生态关键链接\n资源\n链接\nSolana Developer MCP（官方）\nhttps://mcp.solana.com/\nsolana-mcp-official (GitHub)\nGitHub - solana-foundation/solana-mcp-official · GitHub\nawesome-solana-ai\nGitHub - solana-foundation/awesome-solana-ai: Public repo of AI tooling to help build on Solana · GitHub\nsolana-dev-skill\nGitHub - solana-foundation/solana-dev-skill: Skills for agentic development on Solana (March 2026 best practices) · GitHub\nSolana AI 开发入门指南\nhttps://solana.com/developers/guides/getstarted/intro-to-ai\nHelius AI 工具\nHow to Use AI to Build Solana Apps (2026)\nHelius MCP Server\nHelius MCP Server - Helius Docs\nSolana Agent Kit (SendAI)\nGitHub - sendaifun/solana-agent-kit: connect any ai agents to solana protocols · GitHub\nB. Ethereum AI First 生态部分链接\n资源\n链接\nwagmi llms.txt\nhttps://wagmi.sh/llms.txt\nwagmi llms-full.txt\nhttps://wagmi.sh/llms-full.txt\nviem 文档\nhttps://viem.sh/\nevm-mcp-server\nGitHub - mcpdotdirect/evm-mcp-server: MCP server that provides LLMs with tools for interacting with EVM networks · GitHub\nRemix IDE AI Tools\nAI Tools — Remix - Ethereum IDE 1 documentation\nETH Skills\nhttps://ethskills.com/\nEthereum “Build onchain with agents”\nEthereum Developer Resources\nEthereum AI Agents 策展\nAI agents | AI agents on Ethereum | ethereum.org\nC. 
参考资料\n2025 Stack Overflow Developer Survey\nBest AI Coding Agents: 9 Tools Compared\nCursor vs GitHub Copilot Deep Comparison\nAI Coding Config Files Compared\nWhy MCP Servers Are Essential for Documentation Sites\nWhat is llms.txt? Breaking Down the Skepticism\nContext7 MCP Server Guide\nHow to Use AI to Build Solana Apps (Helius)\nawesome-solana-ai (Solana Foundation)\nSolana Developer MCP",
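补充示例：5.1 节预期效果提到"开发者在 AI 编程工具里配置上 DeepWiki 与 Context7 两个 MCP"。其客户端侧的配置大致如下面的草图（采用常见的 mcpServers JSON 配置形式；其中 @upstash/context7-mcp 包名与 DeepWiki 的远程端点均为笔者根据公开资料的假设，具体请以各平台官方文档为准）：

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    },
    "deepwiki": {
      "url": "https://mcp.deepwiki.com/sse"
    }
  }
}
```

配置完成后，AI 助手即可通过 resolve-library-id / get-library-docs 等工具检索 CCC 等 CKB 仓库的最新文档。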
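补充示例：5.2 节所述 llms.txt 的形态可以参考下面的最小草图，结构遵循 llms.txt 提案（H1 标题 + 引用摘要 + 分节链接列表）；其中的链接路径仅为示意，并非 docs.nervos.org 的真实路径：

```markdown
# Nervos CKB

> CKB 是 Nervos Network 的 Layer 1 区块链，采用 Cell 模型与基于 RISC-V 的 CKB-VM。
> 本文件为 AI 工具提供官方文档的结构化入口。

## 核心概念

- [Cell 模型](https://docs.nervos.org/docs/cell-model.md): Cell、capacity 与状态存储
- [Script](https://docs.nervos.org/docs/script.md): Lock Script 与 Type Script 的验证逻辑

## SDK

- [CCC](https://docs.nervos.org/docs/sdk-ccc.md): TypeScript SDK 使用指南

## 参考

- [RPC 参考](https://docs.nervos.org/docs/rpc.md): 节点 JSON-RPC 接口
- [常见错误](https://docs.nervos.org/docs/errors.md): 错误码与解决方案
```

对应的 llms-full.txt 则直接内联各页面的完整 Markdown 正文，供上下文窗口充足的模型一次性消费。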
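补充示例：5.8 节提出的"AI 友好的错误信息知识库"及其 explain-error-code 工具，其数据结构与查找逻辑可以粗略勾勒如下。这只是一个示意草图而非官方实现：接口与函数命名均为本文假设，知识库中仅 ValidationFailure(-31) 一条取自正文示例。

```typescript
// 示意草图（非官方实现）：错误知识库的一种可能数据结构，
// 以及 explain-error-code 工具的核心查找与 Markdown 格式化逻辑。
interface CkbErrorEntry {
  meaning: string;        // 含义
  commonCauses: string[]; // 常见原因
  debugSteps: string[];   // 调试步骤
}

const errorKb: Record<string, CkbErrorEntry> = {
  // 条目内容取自正文 5.8 节的示例格式
  "ValidationFailure(-31)": {
    meaning: "Script 验证失败，group index 指向的 Script 执行返回非零退出码。",
    commonCauses: [
      "Lock Script 签名验证失败（使用了错误的私钥或地址）。",
      "自定义 Type Script 的业务逻辑未满足。",
      "输出 Cell 的 capacity 不足以覆盖其占用空间。",
    ],
    debugSteps: [
      "使用 ckb-debugger 本地运行 Script 进行调试。",
      "检查 tx.witnesses 是否已正确设置。",
      "确认所有输入 Cell 的 lock script 对应的签名均已提供。",
    ],
  },
};

// 从原始错误消息中匹配已知错误码，并格式化为 AI 易消费的 Markdown。
function explainErrorCode(rawMessage: string): string {
  const hit = Object.keys(errorKb).find((code) => rawMessage.includes(code));
  if (!hit) return "未收录的错误码：欢迎提交 PR 补充到知识库。";
  const e = errorKb[hit];
  const lines = [
    `## Error: ${hit}`,
    `**含义**: ${e.meaning}`,
    "**常见原因**:",
    ...e.commonCauses.map((c, i) => `${i + 1}. ${c}`),
    "**调试步骤**:",
    ...e.debugSteps.map((s, i) => `${i + 1}. ${s}`),
  ];
  return lines.join("\n");
}

console.log(explainErrorCode("TransactionFailedToVerify: ValidationFailure(-31)"));
```

按 5.8 节的设想，这类结构既可直接渲染成 Markdown 同步到 docs.nervos.org，也可包装为 MCP 工具供 AI 助手在线调用。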
          "content_html": "<p><strong>摘要</strong>： AI 辅助编程现已成为软件开发的主流，84% 的开发者已将 AI 工具融入工作流。本报告系统性地审视了这一趋势下 CKB 生态在 AI 覆盖上的现状与差距——从开发者工具选择、AI 检索机制、到 Solana/Ethereum 的 AI First 最佳实践对标分析。核心发现包括：CKB 在 AI 可消费性上存在严重缺口（无 llms.txt、无官方 MCP、无 Skills等），Context7 代码片段量仅为竞品链 SDK 的 1/22–1/41；同时 <a href=\"https://bolt.new/\">Bolt.new</a> 上的实测也表明，AI 工具<strong>已具备基础 CKB 编程能力，但在高级场景中仍会出现幻觉或功能性错误</strong> 。这不是一个可行性问题，而是一个准确性和丰富度的问题。结合以上信息最终给出我们认为最有价值的改进建议。</p>\n<p>以下是完整报告。</p>\n<hr>\n<h2><a name=\"p-24150-h-1\" class=\"anchor\" href=\"#p-24150-h-1\" aria-label=\"Heading link\"></a>目录</h2>\n<ul>\n<li><a href=\"#p-24150-h-1-ai-2\">1. 开发者 AI 编程习惯现状</a></li>\n<li><a href=\"#p-24150-h-2-ai-8\">2. AI 处理开发者问题的检索习惯与流程</a></li>\n<li><a href=\"#p-24150-h-3-solana-ethereum-ai-first-13\">3. Solana 与 Ethereum 的 AI First 实践与成效</a></li>\n<li><a href=\"#p-24150-h-4-ckb-ai-27\">4. CKB 在 AI 平台上的现有覆盖评估</a></li>\n<li><a href=\"#p-24150-h-5-36\">5. 改进建议与行动计划</a></li>\n<li><a href=\"#p-24150-h-6-48\">6. 实施路线图</a></li>\n<li><a href=\"#p-24150-h-54\">附录</a></li>\n</ul>\n<h2><a name=\"p-24150-h-1-ai-2\" class=\"anchor\" href=\"#p-24150-h-1-ai-2\" aria-label=\"Heading link\"></a>1. 
开发者 AI 编程习惯现状</h2>\n<h3><a name=\"p-24150-h-11-3\" class=\"anchor\" href=\"#p-24150-h-11-3\" aria-label=\"Heading link\"></a>1.1 宏观数据</h3>\n<ul>\n<li><strong>84% 的开发者</strong> 使用或计划使用 AI 编程工具（2025 Stack Overflow 开发者调研）。其中，51% 的专业开发者每天使用 AI 工具，另有 17.7% 每周使用。</li>\n<li><strong>41% 的代码</strong> 由 AI 生成或辅助完成。</li>\n<li>GitHub Copilot 活跃用户中，<strong>46% 的代码</strong> 由 AI 贡献（2022 年发布时仅为 27%）</li>\n</ul>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://talk.nervos.org/uploads/default/original/2X/8/8923e0ee67e006b40241c61f10b8c9a5440b1aa1.png\" data-download-href=\"https://talk.nervos.org/uploads/default/8923e0ee67e006b40241c61f10b8c9a5440b1aa1\" title=\"image\"><img src=\"https://talk.nervos.org/uploads/default/optimized/2X/8/8923e0ee67e006b40241c61f10b8c9a5440b1aa1_2_690x361.png\" alt=\"image\" data-base62-sha1=\"jzchy9wjPbqyrP7V2FG9HJY1dgR\" width=\"690\" height=\"361\" srcset=\"https://talk.nervos.org/uploads/default/optimized/2X/8/8923e0ee67e006b40241c61f10b8c9a5440b1aa1_2_690x361.png, https://talk.nervos.org/uploads/default/optimized/2X/8/8923e0ee67e006b40241c61f10b8c9a5440b1aa1_2_1035x541.png 1.5x, https://talk.nervos.org/uploads/default/optimized/2X/8/8923e0ee67e006b40241c61f10b8c9a5440b1aa1_2_1380x722.png 2x\" data-dominant-color=\"7D8389\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">2880×1508 435 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<h3><a name=\"p-24150-h-12-4\" class=\"anchor\" href=\"#p-24150-h-12-4\" aria-label=\"Heading link\"></a>1.2 主流工具分类与市场格局</h3>\n<p>开发者使用的 AI 编程工具可归为四大类：</p>\n<div class=\"md-table\">\n<table>\n<thead>\n<tr>\n<th>类型</th>\n<th>代表工具</th>\n<th>特点</th>\n<th>市场地位</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td><strong>AI 原生 IDE</strong></td>\n<td>Cursor, 
Windsurf, <a href=\"https://antigravity.google/\">Google Antigravity</a> 等</td>\n<td>从零构建，AI 作为核心交互层；全项目上下文感知；深度 Agent 能力</td>\n<td>增长最快，Cursor 估值约 $50B</td>\n</tr>\n<tr>\n<td><strong>IDE 插件/扩展</strong></td>\n<td>GitHub Copilot, Cline, Augment Code, Amazon Q</td>\n<td>集成到现有编辑器（VS Code / JetBrains）；尽可能延续开发者现有工作流</td>\n<td>用户量最大，Copilot 用户超 2000 万</td>\n</tr>\n<tr>\n<td><strong>终端/CLI Agent</strong></td>\n<td>Claude Code, Aider</td>\n<td>命令行操作，无 GUI；自主性最强；适合复杂架构任务</td>\n<td>技术深度型开发者偏好</td>\n</tr>\n<tr>\n<td><strong>Web 平台</strong></td>\n<td><a href=\"http://bolt.new/\">Bolt.new</a>, <a href=\"http://replit.com/\">Replit</a></td>\n<td>浏览器内运行，无需本地环境；适合快速原型和非开发者</td>\n<td>入门级 / 原型场景</td>\n</tr>\n</tbody>\n</table>\n</div><h3><a name=\"p-24150-h-13-5\" class=\"anchor\" href=\"#p-24150-h-13-5\" aria-label=\"Heading link\"></a>1.3 开发者工具选择的关键趋势</h3>\n<ol>\n<li><strong>Agent 模式成为标配</strong> ：到 2026 年初，所有主流工具都已具备某种程度的 Agent 能力（自主推理、拆分子任务、执行代码、运行测试、自我纠错）。</li>\n<li><strong>全项目上下文是核心竞争力</strong> ：开发者最看重的技术标准是，工具能否理解整个项目，而非仅当前打开的文件。Cursor 与 Windsurf 的全代码库索引能力正是其核心卖点。</li>\n<li><strong>多工具并用是常态</strong> ：开发者可能用 Claude Code 处理复杂架构，用 Cursor 进行日常编码，再用 Copilot 完成快速补全。这意味着需要适配多种工具。</li>\n<li><strong>MCP（Model Context Protocol）快速普及</strong> ：MCP 是连接 AI 助手与外部数据源的开放标准，由 Anthropic 于 2024 年 11 月发布，已被 OpenAI、Google、Microsoft 等主流厂商采纳。截至 2025 年底，MCP 服务器累计下载量已超过 800 万，生态内服务器数量超过 5800 个。</li>\n</ol>\n<h3><a name=\"p-24150-h-14-ai-6\" class=\"anchor\" href=\"#p-24150-h-14-ai-6\" aria-label=\"Heading link\"></a>1.4 开发者评估 AI 工具的六大标准</h3>\n<ol>\n<li><strong>上下文窗口与代码库理解能力</strong> ——能否理解整个项目。</li>\n<li><strong>Agent 深度</strong> ——是只能建议代码片段，还是能自主编写、测试、修复并提交。</li>\n<li><strong>模型灵活性</strong> ——是否绑定单一 AI 服务商。</li>\n<li><strong>定价模型</strong> ——订阅制 / 按量付费 / 自有密钥付费。</li>\n<li><strong>编辑器集成</strong> ——是否需要迁移现有工作流。</li>\n<li><strong>企业合规</strong> ——SOC 2、数据驻留等。</li>\n</ol>\n<h3><a name=\"p-24150-h-15-ckb-7\" class=\"anchor\" href=\"#p-24150-h-15-ckb-7\" aria-label=\"Heading link\"></a>1.5 对 CKB 
目标开发者的启示</h3>\n<p>CKB 的目标开发者群体（Web3 原生 + Web2 转型）在使用 AI 工具时呈现出不同的特点：</p>\n<ul>\n<li><strong>Web3 原生开发者</strong> ：倾向使用 Cursor / Windsurf 等 AI 原生 IDE，重度依赖 Agent 模式处理复杂的链上逻辑；也会使用 Claude Code 等终端工具来应对底层架构。</li>\n<li><strong>Web2 转型开发者</strong> ：更多使用 GitHub Copilot（学习成本低），依赖 AI 来弥补区块链知识差距；对文档检索和代码示例的依赖度极高。</li>\n</ul>\n<hr>\n<h2><a name=\"p-24150-h-2-ai-8\" class=\"anchor\" href=\"#p-24150-h-2-ai-8\" aria-label=\"Heading link\"></a>2. AI 处理开发者问题的检索习惯与流程</h2>\n<h3><a name=\"p-24150-h-21-ai-9\" class=\"anchor\" href=\"#p-24150-h-21-ai-9\" aria-label=\"Heading link\"></a>2.1 AI 编程助手的分层检索机制</h3>\n<p>当开发者向 AI 提问时（如\"如何在 CKB 上创建一个转账交易？\"），AI 按以下优先级检索信息：</p>\n<pre><code class=\"lang-auto\">第 1 层：项目本地上下文（最高优先级）\n  ├── 项目配置文件（CLAUDE.md / .cursorrules / AGENTS.md 等）\n  ├── 当前打开的文件 &amp; 最近编辑的文件\n  ├── 项目依赖（package.json / Cargo.toml 等）\n  └── 全代码库向量索引（语义搜索）\n        ↓\n第 2 层：MCP 服务器（外部结构化数据源）\n  ├── Context7 MCP — 9000+ 库的实时文档检索\n  ├── DeepWiki MCP — GitHub 仓库的 AI 文档\n  └── 自定义 MCP 服务器\n        ↓\n第 3 层：AI 模型训练数据（内置知识）\n  ├── 公开文档、教程、Stack Overflow 等\n  └── GitHub 公开仓库代码\n        ↓\n第 4 层：Web 搜索（实时补充）\n  ├── 搜索引擎结果\n  └── llms.txt / llms-full.txt 文件\n</code></pre>\n<h3><a name=\"p-24150-h-22-10\" class=\"anchor\" href=\"#p-24150-h-22-10\" aria-label=\"Heading link\"></a>2.2 各检索层的详细工作原理</h3>\n<p><strong>第 1 层：项目本地上下文</strong><br>\nAI 编程助手会首先索引开发者的整个项目：</p>\n<ul>\n<li><strong>向量化索引</strong> ：Cursor 将整个项目编码为向量存储，Windsurf 的远程索引引擎可处理超大代码库。</li>\n<li><strong>项目规则文件</strong> ：这是开发者向 AI 注入项目特定知识的关键入口。</li>\n</ul>\n<div class=\"md-table\">\n<table>\n<thead>\n<tr>\n<th>配置文件</th>\n<th>适用工具</th>\n<th>格式</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td><code>CLAUDE.md</code></td>\n<td>Claude Code</td>\n<td>Markdown，支持 5 层继承</td>\n</tr>\n<tr>\n<td><code>.cursorrules</code></td>\n<td>Cursor</td>\n<td>纯文本/Markdown</td>\n</tr>\n<tr>\n<td><code>copilot-instructions.md</code></td>\n<td>GitHub 
Copilot</td>\n<td>Markdown</td>\n</tr>\n<tr>\n<td><code>.windsurfrules</code></td>\n<td>Windsurf</td>\n<td>纯文本/Markdown</td>\n</tr>\n<tr>\n<td><code>AGENTS.md</code></td>\n<td>OpenAI Codex/ChatGPT</td>\n<td>Markdown</td>\n</tr>\n</tbody>\n</table>\n</div><p><strong>第 2 层：MCP 服务器</strong></p>\n<p>MCP 的工作流程如下：</p>\n<ol>\n<li><strong>请求检测</strong> ：AI 分析开发者的 prompt，识别涉及的库或框架。</li>\n<li><strong>文档检索</strong> ：MCP 服务器查询其索引的文档数据库。</li>\n<li><strong>智能排序</strong> ：通过排名算法筛选最相关的文档片段。</li>\n<li><strong>上下文注入</strong> ：将检索到的文档注入 prompt 的上下文窗口中。</li>\n<li><strong>生成响应</strong> ：LLM 基于最新、准确的文档生成代码。</li>\n</ol>\n<p><strong>Context7 的工作方式</strong> ：</p>\n<ul>\n<li>为超过 9000 个库的文档和代码示例建立索引。</li>\n<li>通过 <code>resolve-library-id</code> 和 <code>get-library-docs</code> 两个工具向 AI 客户端暴露数据。</li>\n<li><a href=\"https://context7.com/ckb-devrel/ccc\">CKB CCC</a> 库已收录于 Context7，包含 141 个代码片段，基准分 85.1。</li>\n</ul>\n<p><strong>DeepWiki 的工作方式</strong> ：</p>\n<ul>\n<li>为 GitHub 仓库自动生成 AI 驱动的交互式文档。</li>\n<li>通过 MCP 提供 <code>read_wiki_structure</code> 、<code>read_wiki_contents</code> 、<code>ask_question</code> 等工具。</li>\n<li><a href=\"https://deepwiki.com/ckb-devrel/ccc\">CKB CCC</a> 已有完整的 Wiki 结构（六大章节，覆盖核心层、集成层、钱包集成、协议扩展等）。</li>\n</ul>\n<p><strong>第 3 层：训练数据</strong></p>\n<p>AI 模型的训练数据有时效性问题。对于 CKB 这样的小众链，训练数据中的 CKB 信息量远少于 Ethereum/Solana。这使得第 1、2 层的信息供给尤为关键——如果这两层没有足够准确的信息，AI 就会用训练数据里有限且可能过时的知识来\"推断\"答案。</p>\n<h4><a name=\"p-24150-h-4-llmstxt-11\" class=\"anchor\" href=\"#p-24150-h-4-llmstxt-11\" aria-label=\"Heading link\"></a>第 4 层：llms.txt 标准</h4>\n<ul>\n<li><code>llms.txt</code> 是一个新兴标准（类似 robots.txt），以 Markdown 格式提供网站内容的结构化摘要。</li>\n<li><code>llms-full.txt</code> 则包含完整内容，供 AI 模型直接消费。</li>\n<li>已被 Vercel、Anthropic、Windsurf（Mintlify）等采用。</li>\n<li>可将 AI 获取文档时的 token 消耗降至十分之一。</li>\n</ul>\n<h3><a name=\"p-24150-h-23-ai-12\" class=\"anchor\" href=\"#p-24150-h-23-ai-12\" aria-label=\"Heading link\"></a>2.3 AI 处理开发者问题的典型流程</h3>\n<pre><code class=\"lang-auto\">开发者提问：\"如何用 CCC 在 CKB 上创建一个 UDT 转账？\"\n 
 ↓\n[1] 分析意图：识别关键词 CCC / CKB / UDT / 转账\n  ↓\n[2] 搜索本地上下文：检查项目依赖与现有代码\n  ↓\n[3] 查询 MCP 服务器：Context7 获取 CCC 文档 / DeepWiki 获取仓库架构\n  ↓\n[4] 补充训练数据中的 CKB 知识\n  ↓\n[5] 信息仍不足 → 触发 Web 搜索\n  ↓\n[6] 综合上下文，生成代码答案\n  ↓\n[7] 验证与迭代：自动运行 / 检查错误 / 修复\n</code></pre>\n<p><strong>关键痛点</strong> ：如果在第 2–4 层都找不到足够准确的 CKB 信息，AI 极易产生“幻觉”——生成看似合理但实际错误的代码，这将严重损害开发者对工具的信任和开发效率。</p>\n<hr>\n<h2><a name=\"p-24150-h-3-solana-ethereum-ai-first-13\" class=\"anchor\" href=\"#p-24150-h-3-solana-ethereum-ai-first-13\" aria-label=\"Heading link\"></a>3. Solana 与 Ethereum 的 AI First 实践与成效</h2>\n<p>Solana 和 Ethereum 是目前在 AI First 开发者体验方面投入最大、成效最显著的两条链。</p>\n<h3><a name=\"p-24150-h-31-solanaai-first-14\" class=\"anchor\" href=\"#p-24150-h-31-solanaai-first-14\" aria-label=\"Heading link\"></a>3.1 Solana：AI First 的标杆</h3>\n<p>Solana 基金会已将 AI 辅助开发确立为<strong>一等战略</strong> ，构建了业内最为完整的 AI 开发者工具链。</p>\n<h4><a name=\"p-24150-h-311-solana-developer-mcpmcpsolanacomhttpmcpsolanacom-15\" class=\"anchor\" href=\"#p-24150-h-311-solana-developer-mcpmcpsolanacomhttpmcpsolanacom-15\" aria-label=\"Heading link\"></a>3.1.1 官方 Solana Developer MCP（<a href=\"http://mcp.solana.com/\">mcp.solana.com</a>）</h4>\n<p>Solana 基金会维护的官方 MCP 服务器，可直接集成到 Cursor、Windsurf、Claude Code 等 AI IDE 中：</p>\n<ul>\n<li><strong>实时文档检索</strong> ：自动查询 Solana 和 Anchor Framework 最新文档。</li>\n<li><strong>账户查询</strong> ：直接查询链上账户信息。</li>\n<li><strong>交易分析</strong> ：解析交易详情。</li>\n<li><strong>CPI 语句生成</strong> ：自动生成跨程序调用代码。</li>\n<li><strong>部署方式</strong> ：公共服务，地址为 <a href=\"http://mcp.solana.com/mcp\"><code>mcp.solana.com/mcp</code></a>，开发者仅需一行配置即可使用。</li>\n</ul>\n<h4><a name=\"p-24150-h-312-ai-coding-skills-16\" class=\"anchor\" href=\"#p-24150-h-312-ai-coding-skills-16\" aria-label=\"Heading link\"></a>3.1.2 AI Coding Skills 生态</h4>\n<p>Solana 提出了 <strong>“Skills”模式</strong> ——一套精心编写的指令集，旨在教会 AI 如何成为 Solana 专家开发者。这是极具创新性的做法：</p>\n<div class=\"md-table\">\n<table>\n<thead>\n<tr>\n<th>Skill 
名称</th>\n<th>维护方</th>\n<th>覆盖领域</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td><a href=\"https://github.com/solana-foundation/solana-dev-skill/\"><code>solana-dev-skill</code></a></td>\n<td>Solana Foundation</td>\n<td>端到端 Solana 开发：钱包连接、Anchor 程序、测试、安全</td>\n</tr>\n<tr>\n<td><code>helius-phantom-skill</code></td>\n<td>Helius Labs</td>\n<td>前端 dApp 开发：React/RN + Phantom Connect SDK</td>\n</tr>\n<tr>\n<td><code>metaplex-skill</code></td>\n<td>Metaplex Foundation</td>\n<td>NFT 开发：Core NFTs、Candy Machine、Umi/Kit SDK</td>\n</tr>\n<tr>\n<td><code>solana-anchor-claude-skill</code></td>\n<td>QuickNode Labs</td>\n<td>Anchor 合约开发 + 测试（LiteSVM）</td>\n</tr>\n<tr>\n<td><code>solana-game-skill</code></td>\n<td>社区</td>\n<td>游戏开发：C#、React Native、Unity SDK</td>\n</tr>\n<tr>\n<td><code>solana-skills-plugin</code></td>\n<td>社区</td>\n<td>安全审计 + ZK Compression + 漏洞检测</td>\n</tr>\n<tr>\n<td><code>pinocchio-skill</code></td>\n<td>SendAI</td>\n<td>高性能 Solana 程序（可减少 88-95% 计算单元）</td>\n</tr>\n<tr>\n<td><code>vulnhunter-skill</code></td>\n<td>SendAI</td>\n<td>安全漏洞检测与变体分析</td>\n</tr>\n</tbody>\n</table>\n</div><p><strong>Skills 的关键特点</strong> ：</p>\n<ul>\n<li><strong>模型无关</strong> ：兼容 OpenAI API、Claude API、Cursor Rules、ChatGPT 自定义指令。</li>\n<li><strong>可组合</strong> ：Skills 可以叠加使用。</li>\n<li><strong>社区贡献机制</strong> ：通过 TypeForm 提交新 Skill，形成开放生态。</li>\n<li><a href=\"http://skill.md/\"><strong>SKILL.md</strong></a> <strong>标准</strong> ：便携式技能文件，可被任何 Agent 框架加载。</li>\n</ul>\n<h4><a name=\"p-24150-h-313-helius-17\" class=\"anchor\" href=\"#p-24150-h-313-helius-17\" aria-label=\"Heading link\"></a>3.1.3 Helius 生态工具链（基础设施商的典范）</h4>\n<p>Helius 作为 Solana 的基础设施商，构建了完整的 AI 开发者工具套件：</p>\n<div class=\"md-table\">\n<table>\n<thead>\n<tr>\n<th>工具</th>\n<th>功能</th>\n<th>规模</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td><strong>Helius MCP Server</strong></td>\n<td>结构化 API 调用（getBalance、parseTransaction 等）</td>\n<td>60+ 工具</td>\n</tr>\n<tr>\n<td><strong>Helius Skills</strong></td>\n<td>专家指令集（Build / DFlow / Phantom / 
SVM）</td>\n<td>4 大领域</td>\n</tr>\n<tr>\n<td><strong>Helius CLI</strong></td>\n<td>命令行工具，支持 --json 输出</td>\n<td>95+ 命令</td>\n</tr>\n<tr>\n<td><strong>Claude Code Plugin</strong></td>\n<td>一键安装，集成 MCP + Skills + 参考文件</td>\n<td>全家桶</td>\n</tr>\n</tbody>\n</table>\n</div><p>安装仅需一行命令：</p>\n<pre><code class=\"lang-auto\">claude mcp add helius npx helius-mcp@latest\n</code></pre>\n<h4><a name=\"p-24150-h-314-awesome-solana-ai-18\" class=\"anchor\" href=\"#p-24150-h-314-awesome-solana-ai-18\" aria-label=\"Heading link\"></a>3.1.4 awesome-solana-ai 策展仓库</h4>\n<p>Solana Foundation 维护的 <a href=\"https://github.com/solana-foundation/awesome-solana-ai\">awesome-solana-ai</a> 仓库，系统性收录了：</p>\n<ul>\n<li><strong>AI Coding Skills</strong> ：10+ 官方/社区 Skills（通用、DeFi、基础设施）。</li>\n<li><strong>AI Agents</strong> ：12+ 链上 AI Agent 框架（Solana Agent Kit、Eliza、GOAT 等）。</li>\n<li><strong>Developer Tools</strong> ：15+ 款 AI 增强开发工具（MCP 服务器、审计工具、IDE 等）。</li>\n<li><strong>Learning Resources</strong> ：学习资源汇总。</li>\n</ul>\n<h4><a name=\"p-24150-h-315-ai-19\" class=\"anchor\" href=\"#p-24150-h-315-ai-19\" aria-label=\"Heading link\"></a>3.1.5 官方 AI 开发指南</h4>\n<p>Solana 官方文档站（<a href=\"http://solana.com/\">solana.com</a>）设有专门 <a href=\"https://solana.com/developers/guides/getstarted/intro-to-ai\">AI 开发入门指南</a>，涵盖：</p>\n<ul>\n<li>如何在 IDE 中配置 Solana MCP。</li>\n<li>如何利用 AI 工具构建 Solana 应用。</li>\n<li>AI Agent 开发框架介绍。</li>\n<li>最佳实践。</li>\n</ul>\n<h3><a name=\"p-24150-h-32-ethereum-ai-20\" class=\"anchor\" href=\"#p-24150-h-32-ethereum-ai-20\" aria-label=\"Heading link\"></a>3.2 Ethereum：生态驱动的 AI 友好化</h3>\n<p>Ethereum 虽未像 Solana 那样由基金会统一推动 AI First 战略，但凭借庞大的生态和社区，其 AI 可消费性已相当成熟。</p>\n<h4><a name=\"p-24150-h-321-llmstxt-21\" class=\"anchor\" href=\"#p-24150-h-321-llmstxt-21\" aria-label=\"Heading link\"></a>3.2.1 llms.txt 标准的先行者</h4>\n<p>Ethereum 核心库已全面采用 <code>llms.txt</code> 标准：</p>\n<div 
class=\"md-table\">\n<table>\n<thead>\n<tr>\n<th>项目</th>\n<th>llms.txt</th>\n<th>llms-full.txt</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td><a href=\"http://wagmi.sh/\"><strong>wagmi.sh</strong></a></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> <a href=\"https://wagmi.sh/llms.txt\"><code>/llms.txt</code></a></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> <a href=\"https://wagmi.sh/llms-full.txt\"><code>/llms-full.txt</code></a></td>\n</tr>\n<tr>\n<td><a href=\"http://viem.sh/\"><strong>viem.sh</strong></a></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> <a href=\"https://viem.sh/llms.txt\"><code>/llms.txt</code></a></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> <a href=\"https://viem.sh/llms-full.txt\"><code>/llms-full.txt</code></a></td>\n</tr>\n</tbody>\n</table>\n</div><p>wagmi 在首页醒目位置展示了 AI 友好标识：\"<strong>Are you an LLM? 
View /llms.txt for optimized Markdown documentation, or /llms-full.txt for full documentation bundle</strong> \"。<br>\n<div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://talk.nervos.org/uploads/default/original/2X/b/b7e1ad908c0266e7e087bbf094740d783cebf59b.jpeg\" data-download-href=\"https://talk.nervos.org/uploads/default/b7e1ad908c0266e7e087bbf094740d783cebf59b\" title=\"image\"><img src=\"https://talk.nervos.org/uploads/default/optimized/2X/b/b7e1ad908c0266e7e087bbf094740d783cebf59b_2_690x337.jpeg\" alt=\"image\" data-base62-sha1=\"qeGTaXMKUSSXFuB3zp3u5pDFbh9\" width=\"690\" height=\"337\" srcset=\"https://talk.nervos.org/uploads/default/optimized/2X/b/b7e1ad908c0266e7e087bbf094740d783cebf59b_2_690x337.jpeg, https://talk.nervos.org/uploads/default/optimized/2X/b/b7e1ad908c0266e7e087bbf094740d783cebf59b_2_1035x505.jpeg 1.5x, https://talk.nervos.org/uploads/default/optimized/2X/b/b7e1ad908c0266e7e087bbf094740d783cebf59b_2_1380x674.jpeg 2x\" data-dominant-color=\"EDEEF5\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">1920×940 232 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<h4><a name=\"p-24150-h-322-context7-22\" class=\"anchor\" href=\"#p-24150-h-322-context7-22\" aria-label=\"Heading link\"></a>3.2.2 Context7 上的压倒性覆盖</h4>\n<p>以 viem（Ethereum TypeScript 接口库，在 CKB 生态中可与 CCC 类比）为例：</p>\n<div class=\"md-table\">\n<table>\n<thead>\n<tr>\n<th>指标</th>\n<th>viem (Ethereum)</th>\n<th>CCC (CKB)</th>\n<th>差距倍数</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>Context7 代码片段数</td>\n<td><strong>3,090 - 5,827</strong></td>\n<td>141</td>\n<td><strong>22x - 
41x</strong></td>\n</tr>\n<tr>\n<td>基准分</td>\n<td>90.35</td>\n<td>85.1</td>\n<td>1.06x</td>\n</tr>\n<tr>\n<td>来源信誉</td>\n<td>High</td>\n<td>Medium</td>\n<td>—</td>\n</tr>\n<tr>\n<td>llms.txt 独立索引</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\">（单独条目）</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/cross_mark.png?v=15\" title=\":cross_mark:\" class=\"emoji only-emoji\" alt=\":cross_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"></td>\n<td>—</td>\n</tr>\n</tbody>\n</table>\n</div><p>viem 在 Context7 上拥有多个索引条目（GitHub 仓库 + llms-full.txt + 文档站），总代码片段数超过 <strong>10,000</strong> ，而 CCC 仅有 141 个。这意味着 AI 在生成 Ethereum 代码时拥有 <strong>70 倍以上</strong> 的参考素材。</p>\n<h4><a name=\"p-24150-h-323-mcp-23\" class=\"anchor\" href=\"#p-24150-h-323-mcp-23\" aria-label=\"Heading link\"></a>3.2.3 专属 MCP 服务器</h4>\n<p>Ethereum 生态已有多个专属 MCP 服务器：</p>\n<ul>\n<li><strong>ViemCP</strong> ：专为 viem &amp; wagmi 构建的 MCP 服务器\n<ul>\n<li>内嵌代码模式（零延迟，离线可用）。</li>\n<li>链上数据查询（余额、交易、合约状态）。</li>\n<li>wagmi React hooks 代码生成。</li>\n<li>实时文档访问。</li>\n</ul>\n</li>\n<li><strong>OnChains Dev MCP</strong> ：聚焦 viem 与 wagmi 文档的轻量级 MCP。</li>\n<li><strong>EVM MCP Tools</strong> ：通用 EVM 链交互工具。</li>\n<li><strong>Remix IDE RemixAI</strong> ：Solidity 智能合约的内置 AI 助手与 Copilot。</li>\n</ul>\n<h4><a name=\"p-24150-h-324-ai-24\" class=\"anchor\" href=\"#p-24150-h-324-ai-24\" aria-label=\"Heading link\"></a>3.2.4 丰富的 AI 配置模板</h4>\n<p>Ethereum 社区在 GitHub 上已有大量 Solidity/.cursorrules 模板流传，覆盖：</p>\n<ul>\n<li>Solidity + Hardhat 开发。</li>\n<li>Solidity + Foundry 开发。</li>\n<li>React + wagmi + viem 前端开发。</li>\n<li>OpenZeppelin 合约模式。</li>\n</ul>\n<h4><a name=\"p-24150-h-325-ai-agent-25\" class=\"anchor\" href=\"#p-24150-h-325-ai-agent-25\" aria-label=\"Heading link\"></a>3.2.5 AI Agent 生态</h4>\n<p>Ethereum 在生态构建方面更进一步，其开发者门户（<a 
href=\"https://ethereum.org/developers\">ethereum.org</a>）已设有两个专门的策展页面：<br>\n<div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://talk.nervos.org/uploads/default/original/2X/5/5458a1063d52be5c21d2c9fe5f3fb0d92b74322c.png\" data-download-href=\"https://talk.nervos.org/uploads/default/5458a1063d52be5c21d2c9fe5f3fb0d92b74322c\" title=\"image\"><img src=\"https://talk.nervos.org/uploads/default/optimized/2X/5/5458a1063d52be5c21d2c9fe5f3fb0d92b74322c_2_690x428.png\" alt=\"image\" data-base62-sha1=\"c29ZE0bPsTA2CAMxjW0Vs5bciE4\" width=\"690\" height=\"428\" srcset=\"https://talk.nervos.org/uploads/default/optimized/2X/5/5458a1063d52be5c21d2c9fe5f3fb0d92b74322c_2_690x428.png, https://talk.nervos.org/uploads/default/optimized/2X/5/5458a1063d52be5c21d2c9fe5f3fb0d92b74322c_2_1035x642.png 1.5x, https://talk.nervos.org/uploads/default/optimized/2X/5/5458a1063d52be5c21d2c9fe5f3fb0d92b74322c_2_1380x856.png 2x\" data-dominant-color=\"E8E4DF\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">2290×1422 161 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<ul>\n<li><strong>“Build onchain with agents”</strong> 专区，为 AI 代理堆栈提供结构化的以太坊知识。</li>\n<li><strong><a href=\"https://ethereum.org/ai-agents/\">AI Agents 专题策展页面</a></strong> ，系统性地介绍链上 AI Agent 生态。</li>\n</ul>\n<p>同时，社区还维护着 <a href=\"https://ethskills.com/\">ETH Skills</a>，这是一个专门为 AI Agent 提供实时以太坊开发文档的知识库，内容涵盖 Gas 成本、Solidity 模式、Layer 2、DeFi 组合性、安全、测试与生产部署等关键领域。</p>\n<h3><a name=\"p-24150-h-33-ai-first-26\" class=\"anchor\" href=\"#p-24150-h-33-ai-first-26\" aria-label=\"Heading link\"></a>3.3 其他链的 AI First 速览</h3>\n<p>除了 Solana 和 Ethereum 的深度布局，越来越多的公链已直接将“Ask AI”或类似的文档查询助手内建至开发者门户。纵观它们的实践，<strong>将文档对内对外全面“打开”，已成为一种无需言明的新标配</strong> 。以下列举了其中一些链的主要动作：</p>\n<div 
class=\"md-table\">\n<table>\n<thead>\n<tr>\n<th>链</th>\n<th>核心工具/功能</th>\n<th>亮点 / 当前状态</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td><strong><a href=\"https://docs.bnbchain.org/showcase/mcp/\">BNB Chain</a></strong></td>\n<td><strong>MCP</strong>、<strong>Ask AI MCP</strong> （IDE 内文档查询）</td>\n<td>推出了以 MCP（模型上下文协议）为基础的专门服务。</td>\n</tr>\n<tr>\n<td><strong><a href=\"https://docs.polkadot.com/ai-resources/\">Polkadot</a></strong></td>\n<td><strong>Docs MCP</strong> 、<strong>llms.txt</strong> 、<strong>SKILL.md</strong>、 <strong>Polkadot AI Chatbot</strong></td>\n<td>一套 AI 资源，全面启用 MCP 集成，遵循新兴的 <code>llms.txt</code> 行业标准并提供分卷文档，并包含 Substrate MCP 及适用于智能合约开发的 Skill</td>\n</tr>\n<tr>\n<td><strong><a href=\"https://docs.sui.io/\">Sui Network</a></strong></td>\n<td><strong>MCP</strong> 、<strong>Ask Sui AI</strong></td>\n<td></td>\n</tr>\n<tr>\n<td><strong><a href=\"https://docs.near.org/getting-started/tools-for-ai\">NEAR Protocol</a></strong></td>\n<td><strong>llms.txt</strong> 、<strong>Docs MCP</strong> 、<strong>Agent Skills/Kit</strong></td>\n<td>提供指向全文档的快链，专供 AI 编程助手使用，并为 Agent 开发提供了全栈组件（Near API JS、合约开发等技能）</td>\n</tr>\n<tr>\n<td><strong>Cosmos</strong></td>\n<td><strong>Ask AI</strong></td>\n<td>由 Mintlify 提供的 AI 机器人</td>\n</tr>\n<tr>\n<td><strong><a href=\"https://aptos.dev/build/ai\">Aptos</a></strong></td>\n<td><strong>AskAptos（聊天机器人）</strong> 、<strong>Agent Skills</strong> 、<strong>llms.txt</strong></td>\n<td>在官方文档站显著位置内嵌 GPT 聊天机器人，并配合 Azure OpenAI 服务为开发者提供引导；同时全面发布 llms.txt</td>\n</tr>\n</tbody>\n</table>\n</div><hr>\n<h2><a name=\"p-24150-h-4-ckb-ai-27\" class=\"anchor\" href=\"#p-24150-h-4-ckb-ai-27\" aria-label=\"Heading link\"></a>4. 
CKB 在 AI 平台上的现有覆盖评估</h2>\n<h3><a name=\"p-24150-h-41-28\" class=\"anchor\" href=\"#p-24150-h-41-28\" aria-label=\"Heading link\"></a>4.1 已有覆盖</h3>\n<ul>\n<li>\n<p><strong>deepwiki 平台</strong><br>\n<div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://talk.nervos.org/uploads/default/original/2X/6/626266067e2c1a58697aac59e74ea30d64f010b6.png\" data-download-href=\"https://talk.nervos.org/uploads/default/626266067e2c1a58697aac59e74ea30d64f010b6\" title=\"image\"><img src=\"https://talk.nervos.org/uploads/default/optimized/2X/6/626266067e2c1a58697aac59e74ea30d64f010b6_2_608x500.png\" alt=\"image\" data-base62-sha1=\"e2lBOPcMz3Dgxk56kcjocR7eErI\" width=\"608\" height=\"500\" srcset=\"https://talk.nervos.org/uploads/default/optimized/2X/6/626266067e2c1a58697aac59e74ea30d64f010b6_2_608x500.png, https://talk.nervos.org/uploads/default/optimized/2X/6/626266067e2c1a58697aac59e74ea30d64f010b6_2_912x750.png 1.5x, https://talk.nervos.org/uploads/default/optimized/2X/6/626266067e2c1a58697aac59e74ea30d64f010b6_2_1216x1000.png 2x\" data-dominant-color=\"F1F0EF\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">2104×1730 236 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n</li>\n<li>\n<p><strong>context7 平台</strong><br>\n<div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://talk.nervos.org/uploads/default/original/2X/b/b442b0b18d5472475cb042f0dff5ed2fbb6e55b6.jpeg\" data-download-href=\"https://talk.nervos.org/uploads/default/b442b0b18d5472475cb042f0dff5ed2fbb6e55b6\" title=\"image\"><img src=\"https://talk.nervos.org/uploads/default/optimized/2X/b/b442b0b18d5472475cb042f0dff5ed2fbb6e55b6_2_391x500.jpeg\" alt=\"image\" data-base62-sha1=\"pIEPk2L2sLN6MTl4vs36LZBpNlQ\" width=\"391\" height=\"500\" 
srcset=\"https://talk.nervos.org/uploads/default/optimized/2X/b/b442b0b18d5472475cb042f0dff5ed2fbb6e55b6_2_391x500.jpeg, https://talk.nervos.org/uploads/default/optimized/2X/b/b442b0b18d5472475cb042f0dff5ed2fbb6e55b6_2_586x750.jpeg 1.5x, https://talk.nervos.org/uploads/default/optimized/2X/b/b442b0b18d5472475cb042f0dff5ed2fbb6e55b6_2_782x1000.jpeg 2x\" data-dominant-color=\"F2F3F1\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">1916×2450 351 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n</li>\n</ul>\n<p><strong>共同问题：</strong> 未及时同步代码到这两个平台，会导致 AI 拿到过时的信息。</p>\n<h3><a name=\"p-24150-h-42-29\" class=\"anchor\" href=\"#p-24150-h-42-29\" aria-label=\"Heading link\"></a>4.2 实测结果</h3>\n<p><strong>测试过程</strong> ：分别在 <a href=\"http://bolt.new/\">bolt.new</a> 和 <a href=\"http://replit.com/\">replit.com</a> 上使用相同的指令生成应用：</p>\n<blockquote>\n<p>编写一个 CKB 转账的 dApp 应用。</p>\n</blockquote>\n<p><strong>二者的表现</strong> ：</p>\n<ul>\n<li><a href=\"http://bolt.new/\">bolt.new</a> 一次性成功，交易可上链，并支持切换 testnet 与 mainnet，功能较完备 <img src=\"https://talk.nervos.org/images/emoji/apple/+1.png?v=15\" title=\":+1:\" class=\"emoji\" alt=\":+1:\" loading=\"lazy\" width=\"20\" height=\"20\"></li>\n<li>Replit 未实现 testnet/mainnet 切换，首次发起交易时报 <code>signer.client.addCellDepsOfKnownScripts is not a function</code> 错误，将错误信息反馈给它后，它重新阅读 CCC 信息并修改代码，交易发送成功。<br>\n<div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://talk.nervos.org/uploads/default/original/2X/8/805dfab1a7fdb8e52a037886f837ba3fa4fac26d.jpeg\" data-download-href=\"https://talk.nervos.org/uploads/default/805dfab1a7fdb8e52a037886f837ba3fa4fac26d\" title=\"image\"><img src=\"https://talk.nervos.org/uploads/default/optimized/2X/8/805dfab1a7fdb8e52a037886f837ba3fa4fac26d_2_265x500.jpeg\" 
alt=\"image\" data-base62-sha1=\"ijAsSQIdHCy2Vvtq8zS3hkIs7db\" width=\"265\" height=\"500\" srcset=\"https://talk.nervos.org/uploads/default/optimized/2X/8/805dfab1a7fdb8e52a037886f837ba3fa4fac26d_2_265x500.jpeg, https://talk.nervos.org/uploads/default/optimized/2X/8/805dfab1a7fdb8e52a037886f837ba3fa4fac26d_2_397x750.jpeg 1.5x, https://talk.nervos.org/uploads/default/optimized/2X/8/805dfab1a7fdb8e52a037886f837ba3fa4fac26d_2_530x1000.jpeg 2x\" data-dominant-color=\"ECEDEB\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">1512×2852 358 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></li>\n</ul>\n<p><strong>Demo 链接</strong>（<img src=\"https://talk.nervos.org/images/emoji/apple/warning.png?v=15\" title=\":warning:\" class=\"emoji\" alt=\":warning:\" loading=\"lazy\" width=\"20\" height=\"20\"><strong>仅用于演示，请勿用它进行主网资产的交易</strong>） ：</p>\n<ul>\n<li><a href=\"https://bolt.new/\">bolt.new</a>: <a href=\"https://ckb-transfer-dapp-de-cffo.bolt.host/\">https://ckb-transfer-dapp-de-cffo.bolt.host/</a></li>\n<li>replit: <a href=\"https://ckb-transfer-dapp--ckbfansdao.replit.app/\">ckb-transfer-dapp–ckbfansdao.replit.app </a></li>\n</ul>\n<p><strong>共同短板</strong>：两个平台对 MAX（最大可用金额）和 MIN（最小转账金额）的计算均出错——这类边界场景在现有 AI 训练数据和文档示例中覆盖不足。</p>\n<p><strong>初步判断</strong>：AI 工具已经可以完成基础的 CKB 开发任务；但在涉及 CKB 特有概念（capacity 计算、cell 占用逻辑等）的高级场景下，AI 的知识明显不足，且难以自我纠正。</p>\n<h3><a name=\"p-24150-h-43-30\" class=\"anchor\" href=\"#p-24150-h-43-30\" aria-label=\"Heading link\"></a>4.3 缺口分析</h3>\n<h4><a name=\"p-24150-h-431-31\" class=\"anchor\" href=\"#p-24150-h-431-31\" aria-label=\"Heading link\"></a>4.3.1 严重缺口（直接影响开发者体验）</h4>\n<ol>\n<li><strong>无 llms.txt / llms-full.txt</strong> ：<a href=\"http://docs.nervos.org/\">docs.nervos.org</a> 未提供 llms.txt，AI 
工具无法高效获取官方文档。</li>\n<li><strong>无项目规则文件模板</strong> ：没有为 CKB 项目提供 <code>.cursorrules</code> / <code>CLAUDE.md</code> 等模板，开发者的 AI 助手缺少 CKB 特定上下文。</li>\n<li><strong>无自定义 MCP 服务器</strong> ：没有专门的 CKB MCP 服务器来提供 RPC 文档、Script 模板、错误码解释等结构化信息。</li>\n<li><strong>Script 开发文档在 AI 平台上极度匮乏</strong> ：<code>ckb-std</code> 仅 2 个代码片段，Rust/C Script 开发几乎无可供 AI 消费的结构化知识。</li>\n<li><strong>参考示例不足</strong> ：不仅需要提供正常情况下的代码示例，边界场景、高级用法的示例代码会显得尤为重要。</li>\n</ol>\n<h4><a name=\"p-24150-h-432-32\" class=\"anchor\" href=\"#p-24150-h-432-32\" aria-label=\"Heading link\"></a>4.3.2 中等缺口</h4>\n<ol>\n<li><strong>缺少 AI 友好的错误信息映射</strong> ：CKB 的错误码与常见错误尚无结构化的解释文档可供 AI 检索。</li>\n<li><strong>缺少端到端教程的 AI 友好版本</strong> ：现有教程以 HTML 页面为主，而非 Markdown 或结构化格式。</li>\n<li><strong>多语言 SDK 覆盖不均</strong> ：Java SDK、Go SDK 等在 AI 平台上的覆盖较弱。</li>\n</ol>\n<h4><a name=\"p-24150-h-433-33\" class=\"anchor\" href=\"#p-24150-h-433-33\" aria-label=\"Heading link\"></a>4.3.3 轻度缺口</h4>\n<ol>\n<li><strong>社群问答未结构化</strong> ：Discord/Telegram 中的高价值问答未被收集与结构化。</li>\n<li><strong>缺少 CKB 特定的 AI 提示词工程指南</strong> ：开发者不清楚如何更好地向 AI 描述 CKB 相关需求。</li>\n</ol>\n<h3><a name=\"p-24150-h-44-ckb-vs-solana-vs-ethereum-34\" class=\"anchor\" href=\"#p-24150-h-44-ckb-vs-solana-vs-ethereum-34\" aria-label=\"Heading link\"></a>4.4 对比总结：CKB vs Solana vs Ethereum</h3>\n<div class=\"md-table\">\n<table>\n<thead>\n<tr>\n<th>AI First 维度</th>\n<th>Solana</th>\n<th>Ethereum</th>\n<th>CKB</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td><strong>官方 MCP 服务器</strong></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> <a href=\"http://mcp.solana.com/\">mcp.solana.com</a>（基金会维护）</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> 多个社区 MCP（ViemCP 等）</td>\n<td><img 
src=\"https://talk.nervos.org/images/emoji/apple/cross_mark.png?v=15\" title=\":cross_mark:\" class=\"emoji\" alt=\":cross_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> 无</td>\n</tr>\n<tr>\n<td><strong>llms.txt</strong></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/warning.png?v=15\" title=\":warning:\" class=\"emoji\" alt=\":warning:\" loading=\"lazy\" width=\"20\" height=\"20\"> 文档站未确认</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> wagmi/viem 全面支持</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/cross_mark.png?v=15\" title=\":cross_mark:\" class=\"emoji\" alt=\":cross_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> 无</td>\n</tr>\n<tr>\n<td><strong>AI Coding Skills</strong></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> 10+ 官方/社区 Skills</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> 多个 Skills，分门别类：<a href=\"https://ethskills.com/\">ethskills.com</a> 、<a href=\"https://github.com/austintgriffith/ethskills\">github.com/austintgriffith/ethskills</a></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/cross_mark.png?v=15\" title=\":cross_mark:\" class=\"emoji\" alt=\":cross_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> 无</td>\n</tr>\n<tr>\n<td><strong>AI 配置文件模板</strong></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> 官方提供系统提示词示例</td>\n<td><img 
src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> <a href=\"https://github.com/PatrickJS/awesome-cursorrules/blob/main/rules/solidity-hardhat-cursorrules-prompt-file/.cursorrules\">社区分享 .cursorrules</a></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/cross_mark.png?v=15\" title=\":cross_mark:\" class=\"emoji\" alt=\":cross_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> 无</td>\n</tr>\n<tr>\n<td><strong>Context7 代码片段</strong></td>\n<td>丰富（多个 SDK）</td>\n<td>极丰富（viem 5800+）</td>\n<td>较少（CCC 141）</td>\n</tr>\n<tr>\n<td><strong>AI 开发者指南</strong></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> 官方文档站专页</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> 开发者页面设有 <strong>“Build onchain with agents”</strong> 专区：<a href=\"https://ethereum.org/developers/\">ethereum.org/developers/</a></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/cross_mark.png?v=15\" title=\":cross_mark:\" class=\"emoji\" alt=\":cross_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> 无</td>\n</tr>\n<tr>\n<td><strong>AI Agent 生态</strong></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> Solana Agent Kit，12+ Agent</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> 专题页面策展：<a 
href=\"https://ethereum.org/ai-agents/\">ethereum.org/ai-agents/</a></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/cross_mark.png?v=15\" title=\":cross_mark:\" class=\"emoji\" alt=\":cross_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> 无</td>\n</tr>\n<tr>\n<td><strong>awesome-*-ai 策展</strong></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> awesome-solana-ai</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> 首页专区策展：<a href=\"https://ethereum.org/ai-agents/\">ethereum.org/ai-agents/</a></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/cross_mark.png?v=15\" title=\":cross_mark:\" class=\"emoji\" alt=\":cross_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> 无</td>\n</tr>\n<tr>\n<td><strong>基础设施商 AI 工具</strong></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> Helius 全家桶</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> QuickNode MCP 等</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/cross_mark.png?v=15\" title=\":cross_mark:\" class=\"emoji\" alt=\":cross_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> 无</td>\n</tr>\n<tr>\n<td><strong>Claude Code Plugin</strong></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> Helius 
官方插件</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/white_check_mark.png?v=15\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> 社区提供多个，例如 <a href=\"https://github.com/0xGval/evm-mcp-tools\">github.com/0xGval/evm-mcp-tools</a></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/cross_mark.png?v=15\" title=\":cross_mark:\" class=\"emoji\" alt=\":cross_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> 无</td>\n</tr>\n</tbody>\n</table>\n</div><h3><a name=\"p-24150-h-45-solanaethereum-ckb-35\" class=\"anchor\" href=\"#p-24150-h-45-solanaethereum-ckb-35\" aria-label=\"Heading link\"></a>4.5 CKB 可从 Solana/Ethereum 等链的经验中借鉴的关键启示</h3>\n<ol>\n<li><strong>官方 MCP 服务器是基础中的基础</strong> ：Solana 基金会亲自维护 <a href=\"http://mcp.solana.com/\">mcp.solana.com</a>，这是其 AI First 战略的核心锚点。CKB 应将类似的官方 MCP 服务器作为优先构建项。</li>\n<li><strong>Skills/配置文件模板是成本效益比最高的投入</strong> ：Solana 的 Skills 生态证明，精心编写的指令集能显著提升 AI 生成代码的准确性。CKB 应当立即着手创建等效的 <code>.cursorrules</code> / <code>CLAUDE.md</code> / <code>SKILL.md</code>。</li>\n<li><strong>通用标准已达成共识，文档生产力随之解放</strong> ：越来越多的链开始兼容 MCP 协议，并发布自己的 <code>llms.txt</code>。wagmi 在首页提示 “Are you an LLM?” 的做法值得借鉴，这让所有 AI 工具都能自动发现和消费文档。CKB 必须加速追赶这股基建潮流，否则我们精心撰写的文档在这些链面前就如同未被索引的孤岛，完全无法进入 AI 的“检索视野”。</li>\n<li><strong>策展仓库建立生态认知</strong> ：awesome-solana-ai 不仅是资源列表，更向开发者传递了“Solana 认真对待 AI 开发”的信号。CKB 可考虑在补齐相关 AI toolkits 后建立类似的 awesome-ckb-ai 仓库。</li>\n<li><strong>基础设施商协同是力量放大器</strong> ：Helius 为 Solana 构建的 AI 工具链展示了基础设施商如何参与 AI First 生态。CKB 可鼓励生态伙伴构建类似工具。</li>\n<li><strong>代码片段数量决定 AI 代码质量</strong> ：viem 的 5800+ 代码片段 vs CCC 的 141 个，直接决定了 AI 生成代码的准确性与多样性。CKB 需要大幅增加文档中可运行的代码示例。</li>\n<li><strong>“Ask AI”成为顶级生态的通用语言</strong> ：BNB Chain、Polkadot、Aptos 和 NEAR 都在官网或 IDE 中内建了“Ask AI”或“Docs”助手。这意味着开发者遇到问题时，无需在不同的 Google 标签页间大海捞针，直接在开发者体验闭环内就能找到答案。对 CKB 而言，这能极大地降低新开发者的入门门槛。</li>\n</ol>\n<p>这些发现进一步印证：<strong>CKB 必须快速补齐我们在文档 AI 友好度和“开发者即时问答”等体验上的短板</strong>。下一章将列出详细的行动建议。</p>\n<h2><a 
name=\"p-24150-h-5-36\" class=\"anchor\" href=\"#p-24150-h-5-36\" aria-label=\"Heading link\"></a>5. 改进建议与行动计划</h2>\n<h3><a name=\"p-24150-h-51-p0-deepwiki-context7-37\" class=\"anchor\" href=\"#p-24150-h-51-p0-deepwiki-context7-37\" aria-label=\"Heading link\"></a>5.1 【P0】确保所有关键仓库已提交至 DeepWiki 与 Context7</h3>\n<p><strong>目标</strong> ：让所有 AI 工具能高效、准确地获取到 CKB 相关的知识。</p>\n<p><strong>具体行动</strong> ：</p>\n<ul>\n<li><strong>提交</strong>：比对目前 DeepWiki、Context7 上已有的 CKB 仓库，查缺补漏。</li>\n<li><strong>更新</strong>：有的仓库虽已上传到这两个平台，但信息已过时，需做更新处理。\n<ul>\n<li>DeepWiki 目前不支持通过 GitHub Action 更新，只能人工手动处理；</li>\n<li>Context7 上的仓库可以添加 GitHub Action 来实时更新。</li>\n</ul>\n</li>\n</ul>\n<p><strong>预期效果</strong> ：开发者在 AI 编程工具里配置好 DeepWiki 与 Context7 两个 MCP 之后，在开发过程中向 AI 询问 CKB 相关问题时，AI 可以通过查询 MCP 服务器拿到更准确的答案。</p>\n<h3><a name=\"p-24150-h-52-p0-docsnervosorg-llmstxt-38\" class=\"anchor\" href=\"#p-24150-h-52-p0-docsnervosorg-llmstxt-38\" aria-label=\"Heading link\"></a>5.2 【P0】为 <a href=\"http://docs.nervos.org\">docs.nervos.org</a> 添加 llms.txt</h3>\n<p><strong>目标</strong> ：让所有 AI 工具能高效、准确地获取 CKB 官方文档。</p>\n<p><strong>具体行动</strong> ：</p>\n<ul>\n<li>在 <code>docs.nervos.org</code> 根目录添加 <code>llms.txt</code>（结构化目录 + 关键链接）。</li>\n<li>添加 <code>llms-full.txt</code>（完整文档的 Markdown 版本）。</li>\n<li>内容应覆盖：\n<ul>\n<li>CKB 核心概念（Cell 模型、Script、Transaction）。</li>\n<li>开发入门指南。</li>\n<li>SDK 使用指南（CCC / Rust SDK / Go SDK / Java SDK…）。</li>\n<li>RPC 参考。</li>\n<li>常见错误与解决方案。</li>\n<li>部署指南。</li>\n</ul>\n</li>\n</ul>\n<p><strong>参考实现</strong> ：</p>\n<ul>\n<li>Vercel docs: <a href=\"https://vercel.com/docs/llms-full.txt\">https://vercel.com/docs/llms-full.txt</a></li>\n<li>Anthropic docs 使用 Mintlify 生成 llms.txt。</li>\n<li>GitHub 仓库提交到 Context7 之后，Context7 会自动生成 llms.txt 文档，可将其复制放入 docs.nervos.org 站点中。</li>\n</ul>\n<p><strong>预期效果</strong> ：所有 AI 编程工具在搜索 CKB 相关信息时，均可通过 llms.txt 快速获取最新、准确的官方文档，大幅减少幻觉。</p>\n<h3><a name=\"p-24150-h-53-p0-ai-cursorrules-claudemd-skillmd-39\" class=\"anchor\" 
href=\"#p-24150-h-53-p0-ai-cursorrules-claudemd-skillmd-39\" aria-label=\"Heading link\"></a>5.3 【P0】创建 AI 配置文件模板（.cursorrules / CLAUDE.md / SKILL.md）</h3>\n<p><strong>目标</strong> ：让使用任何 AI 工具的开发者在创建 CKB 项目时即可获得 CKB 特定上下文。</p>\n<p><strong>具体行动</strong> ：创建一套标准的 AI 配置文件模板，覆盖所有主流工具：</p>\n<pre><code class=\"lang-auto\">ckb-project-template/\n├── .cursorrules              # Cursor\n├── .windsurfrules            # Windsurf\n├── .github/\n│   └── copilot-instructions.md  # GitHub Copilot\n├── CLAUDE.md                 # Claude Code\n├── AGENTS.md                 # OpenAI Codex\n└── .cursor/\n    └── rules/                # Cursor Rules (新版)\n</code></pre>\n<p>配置文件核心内容应包括：CKB Cell 模型说明（强调与 EVM 的区别）、CCC SDK 推荐用法、Transaction 构建模式、常见错误及解决方案、重要参考链接等。</p>\n<p>分发渠道：内置到 <code>create-ccc-app</code> 模板和 <code>offckb create</code> 命令，在官方文档站提供下载，在 GitHub 仓库 README 中醒目引导。</p>\n<h3><a name=\"p-24150-h-54-p0-ckb-mcp-40\" class=\"anchor\" href=\"#p-24150-h-54-p0-ckb-mcp-40\" aria-label=\"Heading link\"></a>5.4 【P0】构建 CKB 专属 MCP 服务器</h3>\n<p><strong>目标</strong> ：提供 CKB 生态的结构化、实时、可查询的文档服务，并使 AI Agent 能够查询 CKB 链上数据。</p>\n<p><strong>现状</strong> ：</p>\n<ul>\n<li>目前由 <a class=\"mention\" href=\"/u/jm9k\">@jm9k</a> 牵头实施的项目：<a href=\"https://github.com/sonami-tech/ckb-mcp\">github.com/sonami-tech/ckb-mcp</a>。</li>\n<li>使用 Rust 编写，master 主分支最近一次更新在 8 个月前，develop 分支最近一次更新在 2 个月前。</li>\n<li>README 推荐搭配 Claude 使用，但使用 Windsurf 实测时发现兼容性不佳，会报授权错误：<br>\n<div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://talk.nervos.org/uploads/default/original/2X/2/288bd815d83133f33dc292a86a9a6b35c9d68882.jpeg\" data-download-href=\"https://talk.nervos.org/uploads/default/288bd815d83133f33dc292a86a9a6b35c9d68882\" title=\"image\"><img src=\"https://talk.nervos.org/uploads/default/optimized/2X/2/288bd815d83133f33dc292a86a9a6b35c9d68882_2_690x368.jpeg\" alt=\"image\" data-base62-sha1=\"5MGIefxz7PcAWl9VNU2rz4DAdtU\" width=\"690\" height=\"368\" 
srcset=\"https://talk.nervos.org/uploads/default/optimized/2X/2/288bd815d83133f33dc292a86a9a6b35c9d68882_2_690x368.jpeg, https://talk.nervos.org/uploads/default/optimized/2X/2/288bd815d83133f33dc292a86a9a6b35c9d68882_2_1035x552.jpeg 1.5x, https://talk.nervos.org/uploads/default/optimized/2X/2/288bd815d83133f33dc292a86a9a6b35c9d68882_2_1380x736.jpeg 2x\" data-dominant-color=\"262726\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">1920×1026 317 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></li>\n</ul>\n<p><strong>具体行动</strong> ：</p>\n<ul>\n<li><strong>大而全路线</strong> ：基于 <code>@modelcontextprotocol/sdk</code> （TypeScript）构建 CKB MCP 服务器，内容方面参考 Jordan 的 ckb-mcp 项目实现，使用 TypeScript 语言可以降低贡献者的门槛。优点：开发者仅需安装一个 MCP；难点：MCP 开发人员需具备多面手能力，或需多团队协同。</li>\n<li><strong>小而专路线</strong> ：针对特定领域创建专门的 MCP，如 docs、rpc、ccc、script 等。优点：MCP server 职责明确，可由特定团队/人员负责编写与维护；缺点：对开发者不够友好，需安装多个 MCP 服务。</li>\n</ul>\n<p><strong>技术实现参考</strong> ：</p>\n<ul>\n<li>基于 <code>@modelcontextprotocol/sdk</code> 构建 TypeScript MCP 服务器。</li>\n<li>后端使用 RAG 管道，向量化所有 CKB 文档。</li>\n<li>部署为公共服务，开发者只需在 IDE 配置中添加 MCP 服务器地址即可。</li>\n</ul>\n<p><strong>参考案例</strong> ：</p>\n<ul>\n<li><a href=\"https://github.com/solana-foundation/solana-mcp-official\">solana-mcp-official</a></li>\n<li><a href=\"https://github.com/bnb-chain/bnbchain-mcp\">bnbchain-mcp</a></li>\n</ul>\n<h3><a name=\"p-24150-h-55-p1-script-ai-41\" class=\"anchor\" href=\"#p-24150-h-55-p1-script-ai-41\" aria-label=\"Heading link\"></a>5.5 【P1】增强 Script 开发在 AI 平台上的覆盖</h3>\n<p><strong>目标</strong> ：让 AI 能准确指导 CKB Script（智能合约）开发。</p>\n<p><strong>具体行动</strong> ：</p>\n<ul>\n<li><strong>充实 ckb-std 文档</strong> ：当前在 Context7 上仅有 2 个代码片段，需大幅扩充：\n<ul>\n<li>常用 syscall 使用示例（load_cell_data、load_script 等）。</li>\n<li>完整的 Script 
开发从零到部署教程。</li>\n<li>常见 Script 模式（lock script / type script / 多签等）。</li>\n</ul>\n</li>\n<li><strong>创建 Script 示例仓库</strong> ：专门存放各类 Script 示例，提交至 DeepWiki 与 Context7。</li>\n<li><strong>提升 ckb-std 的 README 与文档质量</strong> ：AI 平台会从 GitHub 仓库的 README、docs 目录、代码注释中提取信息，需确保这些内容丰富且结构化。</li>\n</ul>\n<h3><a name=\"p-24150-h-56-p1-ai-42\" class=\"anchor\" href=\"#p-24150-h-56-p1-ai-42\" aria-label=\"Heading link\"></a>5.6 【P1】优化现有仓库的 AI 可消费性</h3>\n<p><strong>目标</strong> ：让 DeepWiki 和 Context7 能提取更多、更高质量的信息。</p>\n<p><strong>具体行动</strong> ：</p>\n<ol>\n<li><strong>增强 README 文档</strong> ：</li>\n</ol>\n<ul>\n<li>每个仓库的 README 应包含“Quick Start”章节，提供可复制粘贴的代码示例。</li>\n<li>添加常见用例的代码片段。</li>\n<li>确保所有公共 API 均配有文档注释。</li>\n</ul>\n<ol start=\"2\">\n<li><strong>增加代码示例文件</strong> ：</li>\n</ol>\n<ul>\n<li>在仓库中创建 <code>/examples</code> 或 <code>/docs/examples</code> 目录。</li>\n<li>每个示例都应独立可运行。</li>\n<li>覆盖最常见的开发场景。</li>\n</ul>\n<ol start=\"3\">\n<li><strong>规范化代码注释</strong> ：</li>\n</ol>\n<ul>\n<li>使用 JSDoc/TSDoc（TypeScript）或 rustdoc（Rust）格式。</li>\n<li>确保所有公共方法均包含参数说明与返回值说明。</li>\n<li>AI 平台会从代码注释中提取 API 用法信息。</li>\n</ul>\n<ol start=\"4\">\n<li><strong>确保重要仓库均已提交 DeepWiki 和 Context7</strong></li>\n<li><strong>提升 Context7 上每个 Library 的 benchmark 分数</strong></li>\n</ol>\n<ul>\n<li>benchmark 分数越高，AI 获取到的内容越准确。</li>\n</ul>\n<h3><a name=\"p-24150-h-57-p1-awesome-ckb-ai-awesome-solana-ai-43\" class=\"anchor\" href=\"#p-24150-h-57-p1-awesome-ckb-ai-awesome-solana-ai-43\" aria-label=\"Heading link\"></a>5.7 【P1】创建 awesome-ckb-ai 策展仓库（参考 awesome-solana-ai）</h3>\n<p><strong>目标</strong> ：建立 CKB AI 开发者生态的统一入口，向开发者传达 CKB 认真对待 AI First 战略的信号。</p>\n<p><strong>具体行动</strong> ：</p>\n<ul>\n<li>在 GitHub 上创建 <code>ckb-devrel/awesome-ckb-ai</code> 仓库。</li>\n<li>分类收录：\n<ul>\n<li><strong>AI Coding Skills</strong> ：CKB 开发的 <code>.cursorrules</code> / <code>CLAUDE.md</code> / <code>SKILL.md</code>。</li>\n<li><strong>MCP Servers</strong> ：CKB 官方 MCP + 社区 MCP。</li>\n<li><strong>Developer 
Tools</strong> ：AI 增强的 CKB 开发工具。</li>\n<li><strong>Learning Resources</strong> ：AI 友好的教程与文档。</li>\n<li><strong>AI Agents</strong> ：基于 CKB 的 AI Agent 项目。</li>\n</ul>\n</li>\n<li>设立社区贡献机制，鼓励生态开发者提交 PR。</li>\n</ul>\n<p><strong>参考</strong> ：<a href=\"https://github.com/solana-foundation/awesome-solana-ai\">awesome-solana-ai</a></p>\n<p><strong>注</strong> ：此项虽重要，但考虑到目前 CKB 上 AI 相关资源尚少，建议待资源丰富一些后再行实施。</p>\n<h3><a name=\"p-24150-h-58-p2-ai-44\" class=\"anchor\" href=\"#p-24150-h-58-p2-ai-44\" aria-label=\"Heading link\"></a>5.8 【P2】建立 AI 友好的错误信息知识库</h3>\n<p><strong>目标</strong> ：当开发者遇到 CKB 错误并向 AI 求助时，AI 能给出准确的诊断与解决方案。</p>\n<p><strong>具体行动</strong> ：</p>\n<ul>\n<li>收集 CKB 开发中最常见的 50+ 条错误信息。</li>\n<li>为每条错误创建结构化文档（错误码 → 原因 → 解决方案 → 代码示例）。</li>\n<li>格式化为 Markdown，便于 AI 消费。</li>\n<li>集成至 MCP 服务器的 <code>explain-error-code</code> 工具中。</li>\n<li>同步至 <a href=\"http://docs.nervos.org/\">docs.nervos.org</a> 的 llms.txt 中。</li>\n</ul>\n<p><strong>示例格式</strong> ：</p>\n<pre data-code-wrap=\"markdown\"><code class=\"lang-markdown\">## Error: TransactionFailedToVerify - ValidationFailure(-31)\n\n**含义**: Script 验证失败，group index 指向的 Script 执行返回非零退出码。\n\n**常见原因**:\n1. Lock Script 签名验证失败 — 使用了错误的私钥或地址。\n2. Type Script 逻辑错误 — 自定义 type script 的业务逻辑未满足。\n3. 容量不足 — 输出 Cell 的 capacity 不足以覆盖其占用空间。\n\n**调试步骤**:\n1. 使用 `ckb-debugger` 本地运行 Script 进行调试。\n2. 检查 `tx.witnesses` 是否已正确设置。\n3. 
确认所有输入 Cell 的 lock script 对应的签名均已提供。\n\n**代码修复示例**:\n...\n</code></pre>\n<h3><a name=\"p-24150-h-59-p2-ai-45\" class=\"anchor\" href=\"#p-24150-h-59-p2-ai-45\" aria-label=\"Heading link\"></a>5.9 【P2】创建 AI 友好的端到端教程</h3>\n<p><strong>目标</strong> ：提供完整的、AI 可消费的教程内容。</p>\n<p><strong>具体行动</strong> ：</p>\n<ul>\n<li>将现有教程转换为 Markdown 格式，存放于专门的 GitHub 仓库中。</li>\n<li>教程应覆盖：\n<ul>\n<li>CKB 基础：创建钱包、CKB 转账。</li>\n<li>进阶：部署 Script、创建 UDT、使用 Spore Protocol。</li>\n<li>集成：连接各类钱包（MetaMask/JoyID/UniSat 等）。</li>\n<li>高级：DAO 操作、跨链桥接。</li>\n</ul>\n</li>\n<li>每个教程均应提供可直接运行的完整代码。</li>\n<li>提交至 DeepWiki 与 Context7。</li>\n</ul>\n<p><strong>参考示例</strong>：<a href=\"https://github.com/sporeprotocol/dob-cookbook/\">dob-cookbook</a></p>\n<h3><a name=\"p-24150-h-510-p2-faq-46\" class=\"anchor\" href=\"#p-24150-h-510-p2-faq-46\" aria-label=\"Heading link\"></a>5.10 【P2】结构化社群 FAQ</h3>\n<p><strong>目标</strong> ：将 Discord/Telegram 中反复出现的问题转化为 AI 可检索的知识。</p>\n<p><strong>具体行动</strong> ：</p>\n<ul>\n<li>定期从社区中收集高频问题。</li>\n<li>创建结构化 FAQ 文档（GitHub 仓库）。</li>\n<li>格式化为 Q&amp;A 对，便于 AI RAG 检索。</li>\n<li>可考虑借助 AI 自动从社群消息中提取与归类问题。</li>\n</ul>\n<h3><a name=\"p-24150-h-511-p2-ckb-ai-solana-ai-47\" class=\"anchor\" href=\"#p-24150-h-511-p2-ckb-ai-solana-ai-47\" aria-label=\"Heading link\"></a>5.11 【P2】创建 CKB AI 开发者指南（参考 Solana 官方 AI 指南）</h3>\n<p><strong>目标</strong> ：教会开发者如何更好地利用 AI 进行 CKB 开发。</p>\n<p><strong>具体行动</strong> ：</p>\n<ul>\n<li>撰写“如何用 AI 高效开发 CKB 应用”指南。</li>\n<li>内容包括：\n<ul>\n<li>推荐的 AI 工具配置（如何配置 Context7 MCP、DeepWiki MCP）。</li>\n<li>如何写好 CKB 相关的 AI 提示词。</li>\n<li>CKB 项目的 AI 配置文件模板使用指南。</li>\n<li>常见 AI 幻觉陷阱及如何避免。</li>\n</ul>\n</li>\n</ul>\n<h2><a name=\"p-24150-h-6-48\" class=\"anchor\" href=\"#p-24150-h-6-48\" aria-label=\"Heading link\"></a>6. 
实施路线图</h2>\n<h3><a name=\"p-24150-h-61-phase-1-49\" class=\"anchor\" href=\"#p-24150-h-61-phase-1-49\" aria-label=\"Heading link\"></a>6.1 Phase 1 — 低成本高收益（追平基线）</h3>\n<div class=\"md-table\">\n<table>\n<thead>\n<tr>\n<th>#</th>\n<th>行动项</th>\n<th>参考对标</th>\n<th>预期影响</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>1</td>\n<td>确保所有关键仓库已提交至 DeepWiki 与 Context7</td>\n<td>—</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"></td>\n</tr>\n<tr>\n<td>2</td>\n<td>为 <a href=\"http://docs.nervos.org/\">docs.nervos.org</a> 添加 llms.txt / llms-full.txt</td>\n<td><a href=\"http://wagmi.sh/\">wagmi.sh</a> / <a href=\"http://viem.sh/\">viem.sh</a></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" 
class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"></td>\n</tr>\n<tr>\n<td>3</td>\n<td>创建 AI 配置文件模板（.cursorrules / <a href=\"http://claude.md/\">CLAUDE.md</a> / <a href=\"http://skill.md/\">SKILL.md</a>）</td>\n<td>Solana Skills 生态</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"></td>\n</tr>\n<tr>\n<td>4</td>\n<td>创建 awesome-ckb-ai 策展仓库</td>\n<td>awesome-solana-ai</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"> 目前 AI 
相关资源有限（可先收集素材后期进行）</td>\n</tr>\n</tbody>\n</table>\n</div><h3><a name=\"p-24150-h-62-phase-2-50\" class=\"anchor\" href=\"#p-24150-h-62-phase-2-50\" aria-label=\"Heading link\"></a>6.2 Phase 2 — 内容增强（缩小差距）</h3>\n<div class=\"md-table\">\n<table>\n<thead>\n<tr>\n<th>#</th>\n<th>行动项</th>\n<th>参考对标</th>\n<th>预期影响</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>5</td>\n<td>充实重要仓库文档与示例</td>\n<td>viem 5800+ 片段</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"></td>\n</tr>\n<tr>\n<td>6</td>\n<td>优化各仓库 README 与代码注释</td>\n<td>viem/wagmi 文档质量</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"></td>\n</tr>\n<tr>\n<td>7</td>\n<td>创建错误信息知识库</td>\n<td></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" 
loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"></td>\n</tr>\n</tbody>\n</table>\n</div><h3><a name=\"p-24150-phase-312-51\" class=\"anchor\" href=\"#p-24150-phase-312-51\" aria-label=\"Heading link\"></a>Phase 3（1–2 月）— 基础设施建设（建立竞争力）</h3>\n<div class=\"md-table\">\n<table>\n<thead>\n<tr>\n<th>#</th>\n<th>行动项</th>\n<th>参考对标</th>\n<th>预期影响</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>8</td>\n<td>构建 CKB 官方 MCP 服务器（<a href=\"https://mcp.ckb.dev/\">mcp.ckb.dev</a>）</td>\n<td><a href=\"http://mcp.solana.com/\">mcp.solana.com</a></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"></td>\n</tr>\n<tr>\n<td>9</td>\n<td>创建 AI 友好的端到端教程</td>\n<td>Solana AI 开发指南</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" 
title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"></td>\n</tr>\n<tr>\n<td>10</td>\n<td>编写 CKB AI 开发者指南</td>\n<td><a href=\"http://solana.com/developers/guides/getstarted/intro-to-ai\">solana.com/developers/guides/getstarted/intro-to-ai</a></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"></td>\n</tr>\n<tr>\n<td>11</td>\n<td>结构化开发者社区 FAQ</td>\n<td></td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji only-emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji only-emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji only-emoji\" alt=\":star:\" loading=\"lazy\" 
width=\"20\" height=\"20\"></td>\n</tr>\n</tbody>\n</table>\n</div><h3><a name=\"p-24150-phase-4-52\" class=\"anchor\" href=\"#p-24150-phase-4-52\" aria-label=\"Heading link\"></a>Phase 4（持续）— 生态维护与扩展</h3>\n<div class=\"md-table\">\n<table>\n<thead>\n<tr>\n<th>#</th>\n<th>行动项</th>\n<th>参考对标</th>\n<th>预期影响</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>12</td>\n<td>定期更新所有 AI 平台上的内容</td>\n<td>—</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"></td>\n</tr>\n<tr>\n<td>13</td>\n<td>监控 AI 工具对 CKB 信息的回答质量</td>\n<td>—</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji only-emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji only-emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji only-emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"></td>\n</tr>\n<tr>\n<td>14</td>\n<td>鼓励生态伙伴构建 AI 工具（类似 Helius 之于 Solana）</td>\n<td>Helius AI 全家桶</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" 
title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"></td>\n</tr>\n<tr>\n<td>15</td>\n<td>探索 CKB Agent Kit（连接 AI Agent 到 CKB 协议）</td>\n<td>Solana Agent Kit</td>\n<td><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://talk.nervos.org/images/emoji/apple/star.png?v=15\" title=\":star:\" class=\"emoji\" alt=\":star:\" loading=\"lazy\" width=\"20\" height=\"20\"></td>\n</tr>\n</tbody>\n</table>\n</div><h2><a name=\"p-24150-h-53\" class=\"anchor\" href=\"#p-24150-h-53\" aria-label=\"Heading link\"></a>写在最后</h2>\n<p>这份报告的逻辑起点其实很朴素： AI 已悄然成为开发者与陌生技术之间最重要的那层中介。如果 AI 不理解 CKB，开发者也就很难顺畅地上手 CKB。</p>\n<p>好在，这是一个可以被系统性解决的问题。Solana 在一年多前面临着几乎一样的处境，他们通过官方 MCP、Skills 生态和 llms.txt 等一系列投入，明显改善了 AI 对 Solana 的理解质量。CKB 完全可以走同样的路径，而且起点并不低——DeepWiki 和 Context7 的基础覆盖已经到位，<a href=\"https://bolt.new/\">bolt.new</a> 上的实测也验证了基础场景已能跑通。</p>\n<p>Phase 1 的三件事（补全仓库索引、AI 配置文件模板、llms.txt）的工作量预计不到两周，但能让每一个用 Cursor 或 Claude Code 开发 CKB 的人立即受益。这里是一个很不错的开始。</p>\n<p>值得一提的是，这份调研报告本身就是与 AI 协同完成的：我们让 Gemini Pro 和 Claude Sonnet 两个模型交叉验证了 CKB 在 AI 
编程方面的缺口与行动计划，因为 AI 天然更清楚自己还“缺什么”。在整个过程中，<a class=\"mention\" href=\"/u/hanssen\">@Hanssen</a> 和 <a class=\"mention\" href=\"/u/retricsu\">@RetricSu</a> 以及几位来自 DevRel 团队的小伙伴为报告提供了不可或缺的校准与反馈，在此深表感谢。</p>\n<p><strong>最后，想邀请你一起参与两件事：</strong></p>\n<ul>\n<li><strong>如果你正在用 AI 开发 CKB 应用</strong> ，欢迎在评论区分享你遇到的 AI 幻觉案例——那些 AI 一本正经地胡说八道的真实瞬间。每一个例子都会直接帮助我们把 CKB 的 AI 开发体验打磨得更准确。</li>\n<li><strong>如果你发现报告中还有遗漏的 AI 相关实践或方向</strong> ，也欢迎在评论区留言斧正，我们会持续跟踪迭代。</li>\n</ul>\n<p>感谢每一位读到这里的你。</p>\n<hr>\n<h2><a name=\"p-24150-h-54\" class=\"anchor\" href=\"#p-24150-h-54\" aria-label=\"Heading link\"></a>附录</h2>\n<h3><a name=\"p-24150-a-solana-ai-first-55\" class=\"anchor\" href=\"#p-24150-a-solana-ai-first-55\" aria-label=\"Heading link\"></a>A. Solana AI First 生态关键链接</h3>\n<div class=\"md-table\">\n<table>\n<thead>\n<tr>\n<th>资源</th>\n<th>链接</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>Solana Developer MCP（官方）</td>\n<td><a href=\"https://mcp.solana.com/\">https://mcp.solana.com/</a></td>\n</tr>\n<tr>\n<td>solana-mcp-official (GitHub)</td>\n<td><a href=\"https://github.com/solana-foundation/solana-mcp-official\" class=\"inline-onebox\">GitHub - solana-foundation/solana-mcp-official · GitHub</a></td>\n</tr>\n<tr>\n<td>awesome-solana-ai</td>\n<td><a href=\"https://github.com/solana-foundation/awesome-solana-ai\" class=\"inline-onebox\">GitHub - solana-foundation/awesome-solana-ai: Public repo of AI tooling to help build on Solana · GitHub</a></td>\n</tr>\n<tr>\n<td>solana-dev-skill</td>\n<td><a href=\"https://github.com/solana-foundation/solana-dev-skill\" class=\"inline-onebox\">GitHub - solana-foundation/solana-dev-skill: Skills for agentic development on Solana (March 2026 best practices) · GitHub</a></td>\n</tr>\n<tr>\n<td>Solana AI 开发入门指南</td>\n<td><a href=\"https://solana.com/developers/guides/getstarted/intro-to-ai\">https://solana.com/developers/guides/getstarted/intro-to-ai</a></td>\n</tr>\n<tr>\n<td>Helius AI 工具</td>\n<td><a 
href=\"https://www.helius.dev/blog/how-to-use-ai-to-build-solana-apps\" class=\"inline-onebox\">How to Use AI to Build Solana Apps (2026)</a></td>\n</tr>\n<tr>\n<td>Helius MCP Server</td>\n<td><a href=\"https://www.helius.dev/docs/agents/mcp\" class=\"inline-onebox\">Helius MCP Server - Helius Docs</a></td>\n</tr>\n<tr>\n<td>Solana Agent Kit (SendAI)</td>\n<td><a href=\"https://github.com/sendaifun/solana-agent-kit\" class=\"inline-onebox\">GitHub - sendaifun/solana-agent-kit: connect any ai agents to solana protocols · GitHub</a></td>\n</tr>\n</tbody>\n</table>\n</div><h3><a name=\"p-24150-b-ethereum-ai-first-56\" class=\"anchor\" href=\"#p-24150-b-ethereum-ai-first-56\" aria-label=\"Heading link\"></a>B. Ethereum AI First 生态部分链接</h3>\n<div class=\"md-table\">\n<table>\n<thead>\n<tr>\n<th>资源</th>\n<th>链接</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>wagmi llms.txt</td>\n<td><a href=\"https://wagmi.sh/llms.txt\">https://wagmi.sh/llms.txt</a></td>\n</tr>\n<tr>\n<td>wagmi llms-full.txt</td>\n<td><a href=\"https://wagmi.sh/llms-full.txt\">https://wagmi.sh/llms-full.txt</a></td>\n</tr>\n<tr>\n<td>viem 文档</td>\n<td><a href=\"https://viem.sh/\">https://viem.sh/</a></td>\n</tr>\n<tr>\n<td>evm-mcp-server</td>\n<td><a href=\"https://github.com/mcpdotdirect/evm-mcp-server\" class=\"inline-onebox\">GitHub - mcpdotdirect/evm-mcp-server: MCP server that provides LLMs with tools for interacting with EVM networks · GitHub</a></td>\n</tr>\n<tr>\n<td>Remix IDE AI Tools</td>\n<td><a href=\"https://remix-ide.readthedocs.io/en/latest/ai.html\" class=\"inline-onebox\">AI Tools — Remix - Ethereum IDE 1 documentation</a></td>\n</tr>\n<tr>\n<td>ETH Skills</td>\n<td><a href=\"https://ethskills.com/\">https://ethskills.com/</a></td>\n</tr>\n<tr>\n<td>Ethereum “Build onchain with agents”</td>\n<td><a href=\"https://ethereum.org/developers/\" class=\"inline-onebox\">Ethereum Developer Resources</a></td>\n</tr>\n<tr>\n<td>Ethereum AI Agents 策展</td>\n<td><a href=\"https://ethereum.org/ai-agents/\" 
class=\"inline-onebox\">AI agents | AI agents on Ethereum | ethereum.org</a></td>\n</tr>\n</tbody>\n</table>\n</div><h3><a name=\"p-24150-c-57\" class=\"anchor\" href=\"#p-24150-c-57\" aria-label=\"Heading link\"></a>C. 参考资料</h3>\n<ol>\n<li><a href=\"https://survey.stackoverflow.co/2025/ai/\">2025 Stack Overflow Developer Survey</a></li>\n<li><a href=\"https://dev.to/agentsindex/best-ai-coding-agents-9-tools-compared-for-every-developer-type-58lm\">Best AI Coding Agents: 9 Tools Compared</a></li>\n<li><a href=\"https://digidai.github.io/2026/02/08/cursor-vs-github-copilot-ai-coding-tools-deep-comparison/\">Cursor vs GitHub Copilot Deep Comparison</a></li>\n<li><a href=\"https://www.tokencentric.app/blog/ai-coding-config-files-compared\">AI Coding Config Files Compared</a></li>\n<li><a href=\"https://buildwithfern.com/post/mcp-servers-documentation-sites\">Why MCP Servers Are Essential for Documentation Sites</a></li>\n<li><a href=\"https://www.mintlify.com/blog/what-is-llms-txt\">What is llms.txt? Breaking Down the Skepticism</a></li>\n<li><a href=\"https://context7mcp.com/\">Context7 MCP Server Guide</a></li>\n<li><a href=\"https://www.helius.dev/blog/how-to-use-ai-to-build-solana-apps\">How to Use AI to Build Solana Apps (Helius)</a></li>\n<li><a href=\"https://github.com/solana-foundation/awesome-solana-ai\">awesome-solana-ai (Solana Foundation)</a></li>\n<li><a href=\"https://mcp.solana.com/\">Solana Developer MCP</a></li>\n</ol>",
          "like_count": 0,
          "quote_count": 0
        }
      ]
    },
    {
      "topic_id": 10230,
      "title": "基于 fiber-js 的 Demo 开发",
      "slug": "fiber-js-demo",
      "url": "https://talk.nervos.org/t/fiber-js-demo/10230",
      "created_at": "2026-05-06T06:45:31.692000+00:00",
      "last_posted_at": "2026-05-06T11:25:47.729000+00:00",
      "category_id": 36,
      "tags": [],
      "posters": [
        "Original Poster",
        "Most Recent Poster"
      ],
      "recent_posts": [
        {
          "post_id": 24144,
          "post_number": 1,
          "topic_id": 10230,
          "topic_title": "基于 fiber-js 的 Demo 开发",
          "topic_slug": "fiber-js-demo",
          "author": "Ckroamer",
          "created_at": "2026-05-06T06:45:31.757000+00:00",
          "updated_at": "2026-05-06T14:41:15.514000+00:00",
          "reply_to_post_number": null,
          "url": "https://talk.nervos.org/t/fiber-js-demo/10230/1",
          "content_text": "在进入主题之前，我觉得有必要先用较为通俗的语言描述一下 Fiber 当前的一些大多数用户可能会关心的功能设定：\n每一个 Fiber 节点都是 “节点 + 钱包” 的组合体，节点模块有自己的身份 ID，钱包模块则提供 CKB 链上资产的操作权限。\n让 Fiber 节点关联 CKB 钱包有两种方式，一种是你直接提供私钥文件并设置一个解锁密码，以后都要提供正确的密码才能启动节点，这是内置钱包模式；另一种是节点发你 CKB 交易，你自己拿去签名，然后将签名信息发回节点即可，这是外置钱包模式。\n要自己在机器上运行 Fiber 节点，要么下载 Fiber 的源代码来编译可执行程序，要么直接下载已经编译好的可执行程序。\nFiber 团队还提供了 Fiber 的 Wasm 版本，也就是可以直接在浏览器里运行 Fiber 节点，这是目前能让非技术用户直接运行 Fiber 节点的最佳方式，但由于技术原因，这个版本砍掉了部分功能。\n在 Fiber 里你无法像 CKB 网络那样直接向某个地址转账，流程是需要目标节点先生成一个收款 “发票”，然后你通过向这个发票付款来完成转账，也就是对方不想收款，那你就没法向他付款，并且交易双方的节点都要求同时在线并互相可感知。\nFiber 团队提供了 fiber-cli，支持像 ckb-cli 那样通过终端或命令行来与 Fiber 节点交互。\nFiber 目前还在持续开发当中，上述提到的技术细节都有可能在未来的版本迭代中发生改变，但大部分都集中在技术细节优化和提供新功能上，对基础的技术框架一般不会做太大改动。\nFiber 的技术优势主要体现在 “高频交易” 场景中，尤其是小额且高频的，对于大额或低频的交互场景，其优势就很难发挥出来。\n什么是 fiber-js\nhttps://www.npmjs.com/package/@nervosnetwork/fiber-js\n一个封装了 Fiber Wasm 版本的 TypeScript 开发包，将 Rust 版本的 Fiber 删减一些功能后编译到 Wasm 二进制，再通过一些基本的封装工作将其封装起来并提供一个简单好用的编程接口。\n我们可以通过 fiber-js 直接在浏览器里运行一个阉割版的 Fiber 节点，由于 Fiber 现已支持外置钱包模式，所以可以不用传入你的钱包私钥，直接使用常用的 CKB 钱包，比如 JoyID、UTXOGlobal 等。\n当前版本（v0.8.1）的 fiber-js 受限于浏览器运行环境，有以下几个特点：\n没有 RPC 服务，毕竟浏览器里也没法运行服务器\n没有 RocksDB 数据库，直接使用浏览器中内置的 IndexedDB 服务\n没有 WatchTower 服务，无法实时侦测异常操作\n只能通过 443/WSS 协议连接 Rust 版本的 Fiber 节点，WASM 版本的 Fiber 节点无法被主动连接\nfiber-js 的应用场景\n受限于浏览器运行环境的限制，集成了 fiber-js 的 Web3 应用程序无法针对 P2P 场景进行业务构建，例如 P2P 对战、P2P 聊天、P2P 支付等，但 CS 场景是没问题的，即基于 Client 与 Server 的业务模式，比如按流量扣费的在线视频平台、按 Token 计费的 AI Agent 服务等。\nFiber 为了自身网络信息传输的稳定性与有效性，会尽可能的期望节点是常驻运行的，而这一点 fiber-js 所涉及的应用场景都是很难做到的，因为很难想象一个网页会长时间打开且不关闭的情况，所以目前可以这么理解：\nRust 版本的 Fiber 同时针对 Client + Server 应用场景，但缺点是需要显式的运行一个 Fiber 节点\nWasm 版本的 Fiber 只针对 Client 应用场景，但优点是支持直接在浏览器里隐式的运行一个 Fiber 节点，让普通用户对 Fiber 节点的存在无感\n基于 fiber-js 的 demo\n该 Demo 是直接集成在 CCC 里的，用于全面的展示 fiber-js 所提供的大部分功能，包括连接节点、创建通道、生成发票和发送支付等，直接使用该 demo 来测试 Fiber 的相关操作流程，不过受限于 fiber-js 的限制，无法体验完全版的 Fiber 功能。\nDemo 功能简介：\n节点配置：将 Fiber 配置文件内容整个复制到文本框中\nimage1920×797 156 KB\n生成 Fiber Key：通过签名一个自定义字符串来生成 Fiber 的节点身份信息\nimage2048×719 127 KB\n运行 Fiber 
节点：使用外置钱包模式启动 Fiber 服务，可查看操作日志\nimage1920×1051 160 KB\nimage1920×595 387 KB\n建立 Channel 连接：与直连的一个 Peer 建立 Channel\nimage2048×844 107 KB\n生成 Invoice 与支付：在通道内生成 Invoice 并完成支付\nimage2048×936 97.4 KB\nimage2780×510 46.1 KB\n注：此 Demo 会跟着 CCC 的版本更新来发布\n关于 Fiber 的一些个人建议\n我在开发 Demo 的过程中，积累了一些使用 Fiber 的经验和感受，Fiber 是一个很有潜力的项目，并且仍在紧锣密鼓的开发当中，我希望就我观察到的一些问题进行探讨，起到抛砖引玉的效果。\nFiber 的官网设计还是以面向开发者群体为主，忽略了非技术群体的感受\n官网上充斥着大量的技术性说明与介绍，生态项目板块指向也都是项目的 Github 地址，而不是项目的使用入口或者演示视频，非技术群体用户无法直接通过官网建立对 Fiber 生态的感性认知，我觉得这一点是有待斟酌的。\n当前的设计固然有它的道理：尽可能的服务开发者，期望降低开发者在 Fiber 生态中开发应用的门槛。但这样的设计忽略了一个重要的现实情况，那就是应用开发出来后并没有为它们设置向普通非技术用户展示的途径，开发出来的 Fiber 应用很可能就此石沉大海，无法触及普通用户，也就无法对开发者形成正向激励。\n降低开发门槛就好像售前服务，开发完成后帮助向用户展示就好像售后服务，两者缺一不可，我们目前的主要精力都放在了售前服务上，却忽略了售后服务的重要性。\n运行 Fiber 节点的门槛较高\n几乎所有的 Fiber 应用都会要求用户在本地启动一个 Fiber 节点，但让非技术群体用户自己从 Github 上下载 Fiber 二进制文件并在命令行终端里进行配置和启动节点是一件门槛极高的操作。\n其实如果能提供一个桌面端 Fiber 节点应用并集成钱包功能，就像 Neuron 那样，打开就能直接启动节点，在应用里调整节点配置和调用 RPC 接口等，将极大的降低非技术类用户启动 Fiber 节点的门槛，如果有这样的桌面应用存在，那就可以直接参考传统区块链钱包的做法，集成一个 Fiber 应用市场，打造成 Fiber 生态的入口。\nfiber-js 的潜力还有更多可挖的空间\nfiber-js 运行着一个阉割版的 Fiber 节点，但其在浏览器中运行的特点，即带来了方便但也受限于不少技术问题，如果能够尽可能解决大部分技术问题，我相信 Wasm 版本的 Fiber 对开发者将更有吸引力。\n官方文档中缺少对 fiber-js 的相关介绍和引导，可能会导致一些开发者都不知道其存在的问题。",
          "content_html": "<p>在进入主题之前，我觉得有必要先用较为通俗的语言描述一下 Fiber 当前的一些大多数用户可能会关心的功能设定：</p>\n<ol>\n<li>\n<p>每一个 Fiber 节点都是 “节点 + 钱包” 的组合体，节点模块有自己的身份 ID，钱包模块则提供 CKB 链上资产的操作权限。</p>\n</li>\n<li>\n<p>让 Fiber 节点关联 CKB 钱包有两种方式，一种是你直接提供私钥文件并设置一个解锁密码，以后都要提供正确的密码才能启动节点，这是内置钱包模式；另一种是节点发你 CKB 交易，你自己拿去签名，然后将签名信息发回节点即可，这是外置钱包模式。</p>\n</li>\n<li>\n<p>要自己在机器上运行 Fiber 节点，要么下载 Fiber 的源代码来编译可执行程序，要么直接下载已经编译好的可执行程序。</p>\n</li>\n<li>\n<p>Fiber 团队还提供了 Fiber 的 Wasm 版本，也就是可以直接在浏览器里运行 Fiber 节点，这是目前能让非技术用户直接运行 Fiber 节点的最佳方式，但由于技术原因，这个版本砍掉了部分功能。</p>\n</li>\n<li>\n<p>在 Fiber 里你无法像 CKB 网络那样直接向某个地址转账，流程是需要目标节点先生成一个收款 “发票”，然后你通过向这个发票付款来完成转账，也就是对方不想收款，那你就没法向他付款，并且交易双方的节点都要求同时在线并互相可感知。</p>\n</li>\n<li>\n<p>Fiber 团队提供了 <code>fiber-cli</code>，支持像 <code>ckb-cli</code> 那样通过终端或命令行来与 Fiber 节点交互。</p>\n</li>\n<li>\n<p>Fiber 目前还在持续开发当中，上述提到的技术细节都有可能在未来的版本迭代中发生改变，但大部分都集中在技术细节优化和提供新功能上，对基础的技术框架一般不会做太大改动。</p>\n</li>\n<li>\n<p>Fiber 的技术优势主要体现在 “高频交易” 场景中，尤其是小额且高频的，对于大额或低频的交互场景，其优势就很难发挥出来。</p>\n</li>\n</ol>\n<blockquote>\n<p><strong>什么是 fiber-js</strong></p>\n</blockquote>\n<p><a href=\"https://www.npmjs.com/package/@nervosnetwork/fiber-js\" rel=\"noopener nofollow ugc\">https://www.npmjs.com/package/@nervosnetwork/fiber-js</a></p>\n<p>一个封装了 Fiber Wasm 版本的 TypeScript 开发包，将 Rust 版本的 Fiber 删减一些功能后编译到 Wasm 二进制，再通过一些基本的封装工作将其封装起来并提供一个简单好用的编程接口。</p>\n<p>我们可以通过 <code>fiber-js</code> 直接在浏览器里运行一个阉割版的 Fiber 节点，由于 Fiber 现已支持外置钱包模式，所以可以不用传入你的钱包私钥，直接使用常用的 CKB 钱包，比如 JoyID、UTXOGlobal 等。</p>\n<p>当前版本（v0.8.1）的 <code>fiber-js</code> 受限于浏览器运行环境，有以下几个特点：</p>\n<ul>\n<li>\n<p>没有 RPC 服务，毕竟浏览器里也没法运行服务器</p>\n</li>\n<li>\n<p>没有 RocksDB 数据库，直接使用浏览器中内置的 IndexedDB 服务</p>\n</li>\n<li>\n<p>没有 WatchTower 服务，无法实时侦测异常操作</p>\n</li>\n<li>\n<p>只能通过 443/WSS 协议连接 Rust 版本的 Fiber 节点，WASM 版本的 Fiber 节点无法被主动连接</p>\n</li>\n</ul>\n<blockquote>\n<p><strong>fiber-js 的应用场景</strong></p>\n</blockquote>\n<p>受限于浏览器运行环境的限制，集成了 <code>fiber-js</code> 的 Web3 应用程序无法针对 P2P 场景进行业务构建，例如 P2P 对战、P2P 聊天、P2P 支付等，但 CS 场景是没问题的，即基于 Client 与 Server 的业务模式，比如按流量扣费的在线视频平台、按 Token 
计费的 AI Agent 服务等。</p>\n<p>Fiber 为了自身网络信息传输的稳定性与有效性，会尽可能的期望节点是常驻运行的，而这一点 <code>fiber-js</code> 所涉及的应用场景都是很难做到的，因为很难想象一个网页会长时间打开且不关闭的情况，所以目前可以这么理解：</p>\n<ul>\n<li>\n<p>Rust 版本的 Fiber 同时针对 Client + Server 应用场景，但缺点是需要显式的运行一个 Fiber 节点</p>\n</li>\n<li>\n<p>Wasm 版本的 Fiber 只针对 Client 应用场景，但优点是支持直接在浏览器里隐式的运行一个 Fiber 节点，让普通用户对 Fiber 节点的存在无感</p>\n</li>\n</ul>\n<blockquote>\n<p><strong>基于 fiber-js 的 demo</strong></p>\n</blockquote>\n<p>该 Demo 是直接集成在 CCC 里的，用于全面的展示 <code>fiber-js</code> 所提供的大部分功能，包括连接节点、创建通道、生成发票和发送支付等，直接使用该 demo 来测试 Fiber 的相关操作流程，不过受限于 <code>fiber-js</code> 的限制，无法体验完全版的 Fiber 功能。</p>\n<p>Demo 功能简介：</p>\n<ul>\n<li>\n<p><strong>节点配置</strong>：将 Fiber 配置文件内容整个复制到文本框中</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://talk.nervos.org/uploads/default/original/2X/8/8f77ff2beee3f0ab9c8868a707fd5046fe4917d5.jpeg\" data-download-href=\"https://talk.nervos.org/uploads/default/8f77ff2beee3f0ab9c8868a707fd5046fe4917d5\" title=\"image\"><img src=\"https://talk.nervos.org/uploads/default/optimized/2X/8/8f77ff2beee3f0ab9c8868a707fd5046fe4917d5_2_690x286.jpeg\" alt=\"image\" data-base62-sha1=\"ktbmWSs6KGac0unnRqvDIZxG2xL\" width=\"690\" height=\"286\" srcset=\"https://talk.nervos.org/uploads/default/optimized/2X/8/8f77ff2beee3f0ab9c8868a707fd5046fe4917d5_2_690x286.jpeg, https://talk.nervos.org/uploads/default/optimized/2X/8/8f77ff2beee3f0ab9c8868a707fd5046fe4917d5_2_1035x429.jpeg 1.5x, https://talk.nervos.org/uploads/default/optimized/2X/8/8f77ff2beee3f0ab9c8868a707fd5046fe4917d5_2_1380x572.jpeg 2x\" data-dominant-color=\"F4F6F8\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">1920×797 156 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n</li>\n<li>\n<p><strong>生成 Fiber Key</strong>：通过签名一个自定义字符串来生成 
Fiber 的节点身份信息</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://talk.nervos.org/uploads/default/original/2X/8/8cd76cc05f72af95e6c728d1fae327329c669895.jpeg\" data-download-href=\"https://talk.nervos.org/uploads/default/8cd76cc05f72af95e6c728d1fae327329c669895\" title=\"image\"><img src=\"https://talk.nervos.org/uploads/default/optimized/2X/8/8cd76cc05f72af95e6c728d1fae327329c669895_2_690x242.jpeg\" alt=\"image\" data-base62-sha1=\"k5Wogvj9zB5LGJ2IipppQXYrO7j\" width=\"690\" height=\"242\" srcset=\"https://talk.nervos.org/uploads/default/optimized/2X/8/8cd76cc05f72af95e6c728d1fae327329c669895_2_690x242.jpeg, https://talk.nervos.org/uploads/default/optimized/2X/8/8cd76cc05f72af95e6c728d1fae327329c669895_2_1035x363.jpeg 1.5x, https://talk.nervos.org/uploads/default/optimized/2X/8/8cd76cc05f72af95e6c728d1fae327329c669895_2_1380x484.jpeg 2x\" data-dominant-color=\"E3E6E9\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">2048×719 127 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n</li>\n<li>\n<p><strong>运行 Fiber 节点</strong>：使用外置钱包模式启动 Fiber 服务，可查看操作日志</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://talk.nervos.org/uploads/default/original/2X/4/420b398868a1c5cc2711eae96c6f0302c6e010e3.jpeg\" data-download-href=\"https://talk.nervos.org/uploads/default/420b398868a1c5cc2711eae96c6f0302c6e010e3\" title=\"image\"><img src=\"https://talk.nervos.org/uploads/default/optimized/2X/4/420b398868a1c5cc2711eae96c6f0302c6e010e3_2_690x377.jpeg\" alt=\"image\" data-base62-sha1=\"9qfyQ5QyzGfC9DjfLUASJWbPTVx\" width=\"690\" height=\"377\" srcset=\"https://talk.nervos.org/uploads/default/optimized/2X/4/420b398868a1c5cc2711eae96c6f0302c6e010e3_2_690x377.jpeg, 
https://talk.nervos.org/uploads/default/optimized/2X/4/420b398868a1c5cc2711eae96c6f0302c6e010e3_2_1035x565.jpeg 1.5x, https://talk.nervos.org/uploads/default/optimized/2X/4/420b398868a1c5cc2711eae96c6f0302c6e010e3_2_1380x754.jpeg 2x\" data-dominant-color=\"F8F7F8\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">1920×1051 160 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://talk.nervos.org/uploads/default/original/2X/2/2cce0903c61bc26ffe3a2a254dd683dee0699a59.jpeg\" data-download-href=\"https://talk.nervos.org/uploads/default/2cce0903c61bc26ffe3a2a254dd683dee0699a59\" title=\"image\"><img src=\"https://talk.nervos.org/uploads/default/optimized/2X/2/2cce0903c61bc26ffe3a2a254dd683dee0699a59_2_690x213.jpeg\" alt=\"image\" data-base62-sha1=\"6omr6frCz8beYDYmPXjIZQ87SXL\" width=\"690\" height=\"213\" srcset=\"https://talk.nervos.org/uploads/default/optimized/2X/2/2cce0903c61bc26ffe3a2a254dd683dee0699a59_2_690x213.jpeg, https://talk.nervos.org/uploads/default/optimized/2X/2/2cce0903c61bc26ffe3a2a254dd683dee0699a59_2_1035x319.jpeg 1.5x, https://talk.nervos.org/uploads/default/optimized/2X/2/2cce0903c61bc26ffe3a2a254dd683dee0699a59_2_1380x426.jpeg 2x\" data-dominant-color=\"2D4240\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">1920×595 387 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n</li>\n<li>\n<p><strong>建立 Channel 连接</strong>：与直连的一个 Peer 建立 Channel</p>\n<p><div class=\"lightbox-wrapper\"><a 
class=\"lightbox\" href=\"https://talk.nervos.org/uploads/default/original/2X/b/b4379cf0801ac12d39fe012a232ef0b4546dba03.jpeg\" data-download-href=\"https://talk.nervos.org/uploads/default/b4379cf0801ac12d39fe012a232ef0b4546dba03\" title=\"image\"><img src=\"https://talk.nervos.org/uploads/default/optimized/2X/b/b4379cf0801ac12d39fe012a232ef0b4546dba03_2_690x284.jpeg\" alt=\"image\" data-base62-sha1=\"pIh5Tu9irq733QJIaRzUNCcMPyX\" width=\"690\" height=\"284\" srcset=\"https://talk.nervos.org/uploads/default/optimized/2X/b/b4379cf0801ac12d39fe012a232ef0b4546dba03_2_690x284.jpeg, https://talk.nervos.org/uploads/default/optimized/2X/b/b4379cf0801ac12d39fe012a232ef0b4546dba03_2_1035x426.jpeg 1.5x, https://talk.nervos.org/uploads/default/optimized/2X/b/b4379cf0801ac12d39fe012a232ef0b4546dba03_2_1380x568.jpeg 2x\" data-dominant-color=\"FAFAFB\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">2048×844 107 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n</li>\n<li>\n<p><strong>生成 Invoice 与支付</strong>：在通道内生成 Invoice 并完成支付</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://talk.nervos.org/uploads/default/original/2X/8/84116e4552648574aa29914a076b86bc663d991c.jpeg\" data-download-href=\"https://talk.nervos.org/uploads/default/84116e4552648574aa29914a076b86bc663d991c\" title=\"image\"><img src=\"https://talk.nervos.org/uploads/default/optimized/2X/8/84116e4552648574aa29914a076b86bc663d991c_2_690x315.jpeg\" alt=\"image\" data-base62-sha1=\"iQkn1gTtofAjivJzYMbpxZIZ3By\" width=\"690\" height=\"315\" srcset=\"https://talk.nervos.org/uploads/default/optimized/2X/8/84116e4552648574aa29914a076b86bc663d991c_2_690x315.jpeg, 
https://talk.nervos.org/uploads/default/optimized/2X/8/84116e4552648574aa29914a076b86bc663d991c_2_1035x472.jpeg 1.5x, https://talk.nervos.org/uploads/default/optimized/2X/8/84116e4552648574aa29914a076b86bc663d991c_2_1380x630.jpeg 2x\" data-dominant-color=\"F9F9FA\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">2048×936 97.4 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://talk.nervos.org/uploads/default/original/2X/2/29671d7795747f3b4bd29d869f30f723f279e297.png\" data-download-href=\"https://talk.nervos.org/uploads/default/29671d7795747f3b4bd29d869f30f723f279e297\" title=\"image\"><img src=\"https://talk.nervos.org/uploads/default/optimized/2X/2/29671d7795747f3b4bd29d869f30f723f279e297_2_690x126.png\" alt=\"image\" data-base62-sha1=\"5UguXDcf0G8e6i5EMTqDoOVzgYn\" width=\"690\" height=\"126\" srcset=\"https://talk.nervos.org/uploads/default/optimized/2X/2/29671d7795747f3b4bd29d869f30f723f279e297_2_690x126.png, https://talk.nervos.org/uploads/default/optimized/2X/2/29671d7795747f3b4bd29d869f30f723f279e297_2_1035x189.png 1.5x, https://talk.nervos.org/uploads/default/optimized/2X/2/29671d7795747f3b4bd29d869f30f723f279e297_2_1380x252.png 2x\" data-dominant-color=\"F7F9FB\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">2780×510 46.1 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n</li>\n</ul>\n<p><em>注：此 Demo 会跟着 CCC 的版本更新来发布</em></p>\n<h2><a name=\"p-24144-fiber-1\" class=\"anchor\" 
href=\"#p-24144-fiber-1\" aria-label=\"Heading link\"></a>关于 Fiber 的一些个人建议</h2>\n<p>我在开发 Demo 的过程中，积累了一些使用 Fiber 的经验和感受，Fiber 是一个很有潜力的项目，并且仍在紧锣密鼓的开发当中，我希望就我观察到的一些问题进行探讨，起到抛砖引玉的效果。</p>\n<blockquote>\n<p><strong>Fiber 的官网设计还是以面向开发者群体为主，忽略了非技术群体的感受</strong></p>\n</blockquote>\n<p>官网上充斥着大量的技术性说明与介绍，生态项目板块指向也都是项目的 Github 地址，而不是项目的使用入口或者演示视频，非技术群体用户无法直接通过官网建立对 Fiber 生态的感性认知，我觉得这一点是有待斟酌的。</p>\n<p>当前的设计固然有它的道理：尽可能的服务开发者，期望降低开发者在 Fiber 生态中开发应用的门槛。但这样的设计忽略了一个重要的现实情况，那就是应用开发出来后并没有为它们设置向普通非技术用户展示的途径，开发出来的 Fiber 应用很可能就此石沉大海，无法触及普通用户，也就无法对开发者形成正向激励。</p>\n<p>降低开发门槛就好像售前服务，开发完成后帮助向用户展示就好像售后服务，两者缺一不可，我们目前的主要精力都放在了售前服务上，却忽略了售后服务的重要性。</p>\n<blockquote>\n<p><strong>运行 Fiber 节点的门槛较高</strong></p>\n</blockquote>\n<p>几乎所有的 Fiber 应用都会要求用户在本地启动一个 Fiber 节点，但让非技术群体用户自己从 Github 上下载 Fiber 二进制文件并在命令行终端里进行配置和启动节点是一件门槛极高的操作。</p>\n<p>其实如果能提供一个桌面端 Fiber 节点应用并集成钱包功能，就像 Neuron 那样，打开就能直接启动节点，在应用里调整节点配置和调用 RPC 接口等，将极大的降低非技术类用户启动 Fiber 节点的门槛，如果有这样的桌面应用存在，那就可以直接参考传统区块链钱包的做法，集成一个 Fiber 应用市场，打造成 Fiber 生态的入口。</p>\n<blockquote>\n<p><strong><code>fiber-js</code> 的潜力还有更多可挖的空间</strong></p>\n</blockquote>\n<p><code>fiber-js</code> 运行着一个阉割版的 Fiber 节点，但其在浏览器中运行的特点，即带来了方便但也受限于不少技术问题，如果能够尽可能解决大部分技术问题，我相信 Wasm 版本的 Fiber 对开发者将更有吸引力。</p>\n<p>官方文档中缺少对 <code>fiber-js</code> 的相关介绍和引导，可能会导致一些开发者都不知道其存在的问题。</p>",
          "like_count": 0,
          "quote_count": 0
        },
        {
          "post_id": 24149,
          "post_number": 2,
          "topic_id": 10230,
          "topic_title": "基于 fiber-js 的 Demo 开发",
          "topic_slug": "fiber-js-demo",
          "author": "Thinker",
          "created_at": "2026-05-06T11:25:47.729000+00:00",
          "updated_at": "2026-05-06T11:25:47.729000+00:00",
          "reply_to_post_number": null,
          "url": "https://talk.nervos.org/t/fiber-js-demo/10230/2",
          "content_text": "Ckroamer:\n当前的设计固然有它的道理：尽可能的服务开发者，期望降低开发者在 Fiber 生态中开发应用的门槛。但这样的设计忽略了一个重要的现实情况，那就是应用开发出来后并没有为它们设置向普通非技术用户展示的途径，开发出来的 Fiber 应用很可能就此石沉大海，无法触及普通用户，也就无法对开发者形成正向激励。\n降低开发门槛就好像售前服务，开发完成后帮助向用户展示就好像售后服务，两者缺一不可，我们目前的主要精力都放在了售前服务上，却忽略了售后服务的重要性。",
          "content_html": "<aside class=\"quote no-group\" data-username=\"Ckroamer\" data-post=\"1\" data-topic=\"10230\">\n<div class=\"title\">\n<div class=\"quote-controls\"></div>\n<img alt=\"\" width=\"24\" height=\"24\" src=\"https://talk.nervos.org/letter_avatar_proxy/v4/letter/c/b77776/48.png\" class=\"avatar\"> Ckroamer:</div>\n<blockquote>\n<p>当前的设计固然有它的道理：尽可能的服务开发者，期望降低开发者在 Fiber 生态中开发应用的门槛。但这样的设计忽略了一个重要的现实情况，那就是应用开发出来后并没有为它们设置向普通非技术用户展示的途径，开发出来的 Fiber 应用很可能就此石沉大海，无法触及普通用户，也就无法对开发者形成正向激励。</p>\n<p>降低开发门槛就好像售前服务，开发完成后帮助向用户展示就好像售后服务，两者缺一不可，我们目前的主要精力都放在了售前服务上，却忽略了售后服务的重要性。</p>\n</blockquote>\n</aside>\n<p><img src=\"https://talk.nervos.org/images/emoji/apple/+1.png?v=15\" title=\":+1:\" class=\"emoji\" alt=\":+1:\" loading=\"lazy\" width=\"20\" height=\"20\">   <img src=\"https://talk.nervos.org/images/emoji/apple/+1.png?v=15\" title=\":+1:\" class=\"emoji\" alt=\":+1:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
          "like_count": 0,
          "quote_count": 1
        }
      ]
    },
    {
      "topic_id": 10210,
      "title": "CellScript Package and Deployment Registry: Early Design Discussion",
      "slug": "cellscript-package-and-deployment-registry-early-design-discussion",
      "url": "https://talk.nervos.org/t/cellscript-package-and-deployment-registry-early-design-discussion/10210",
      "created_at": "2026-04-24T11:27:03.381000+00:00",
      "last_posted_at": "2026-05-06T10:13:15.932000+00:00",
      "category_id": 49,
      "tags": [
        "CKB",
        "CKB-VM",
        "CellScript"
      ],
      "posters": [
        "Original Poster, Most Recent Poster",
        "Frequent Poster"
      ],
      "recent_posts": [
        {
          "post_id": 24148,
          "post_number": 5,
          "topic_id": 10210,
          "topic_title": "CellScript Package and Deployment Registry: Early Design Discussion",
          "topic_slug": "cellscript-package-and-deployment-registry-early-design-discussion",
          "author": "ArthurZhang",
          "created_at": "2026-05-06T10:13:15.932000+00:00",
          "updated_at": "2026-05-06T14:38:26.986000+00:00",
          "reply_to_post_number": null,
          "url": "https://talk.nervos.org/t/cellscript-package-and-deployment-registry-early-design-discussion/10210/5",
          "content_text": "Plan for CellScript Package Provenance and Deployment Identity\nStatus: RFC — early design discussion\nScope: Source package registry, deployment registry, lockfile binding, and\nbuilder verification for CellScript on CKB\nTarget Version: 0.19 ~ 0.20\nCore Principle\nCellScript packages should be distributed like development packages, but\nverified like smart-contract deployments.\nThe off-chain registry optimizes for source distribution and developer\nexperience. CKB records only compact, verifiable deployment truth where it is\nactually useful. The lockfile binds the two.\nThree-Layer Identity Model\nCellScript packages exist in three identity layers, each with a distinct\nverification scope:\n┌─────────────────────────────────────────────────────────────┐\n│ Package Identity │\n│ namespace / name / version / source_hash │\n│ Carrier: Cell.toml + source registry index │\n│ Verified: compile time │\n├─────────────────────────────────────────────────────────────┤\n│ Build Identity │\n│ compiler_version / metadata_schema / schema_hash / │\n│ abi_hash / artifact_hash / constraints_hash │\n│ Carrier: Cell.lock [package.build] │\n│ Verified: build time │\n├─────────────────────────────────────────────────────────────┤\n│ Deployment Identity │\n│ chain / network / code_cell / out_point / data_hash / │\n│ dep_type / type_id_lineage / script_role │\n│ Carrier: Deployed.toml │\n│ Verified: runtime / production │\n└─────────────────────────────────────────────────────────────┘\nEach layer is independently meaningful but cryptographically bound to the\nlayers above and below through the lockfile.\nPackage States\nA CellScript package can exist in at least two operational states:\nSource-only / undeployed package. A normal development package containing\n.cell source files, interfaces, schemas, docs, tests, examples, and\nreproducible build metadata. It can be imported, compiled, tested, audited, and\nused as a library dependency. 
However, it does not by itself claim any\nproduction deployment identity on CKB.\nDeployment-bound package. A package version whose built artifact has been\ndeployed, and whose deployment identity can be verified. For CKB, this means\nbinding the package version to facts such as CellDep, OutPoint, data_hash,\ndep_type, script/code hash, schema/ABI commitments, constraints report,\ncompiler version, and possibly type-id lineage.\nA deployment-bound package is what wallets and production builders should rely\non when constructing real transactions.\nThe same source package version may have zero, one, or many deployment\nbindings. For example, amm@1.2.0 may start as a source-only package, later\ngain a CKB testnet deployment, then eventually a CKB mainnet deployment. These\nare separate deployment records attached to the same source/package identity,\nnot separate source packages.\namm@1.2.0\n├─ source: blake2b:0xabcd...\n├─ build: artifact=0x1234... abi=0xdef0...\n├─ deployed:\n│ ├─ aggron4: out_point=0xaaaa...:0 status=active\n│ └─ mainnet: status=candidate\n└─ (same source version, multiple deployment bindings)\nWhy Not Pure On-Chain Packages?\nIt is unlikely that publishing every CellScript source package directly to CKB\nis the right default.\nSource archives, docs, examples, tests, schema manifests, and editor metadata\nare development artifacts, not consensus-critical state. 
Frequent package\nreleases would create unnecessary permanent state churn, and CKB capacity costs\nmake source-package storage especially unattractive.\nThe chain should probably record compact deployment facts and commitments, not\nreplace the whole source distribution system.\nWhy Not Pure Off-Chain Packages?\nA pure off-chain registry also seems insufficient.\nFor production CKB contracts, builders and wallets need concrete deployment\nidentity: CellDep, OutPoint, data_hash, dep_type, script/code hash checks,\nschema/ABI commitments, and ideally provenance back to the source package,\ncompiler version, and constraints report.\nA compromised or stale source registry should not be enough to trick a\nproduction builder into using the wrong deployed artifact.\nFile Responsibility Split\nInspired by Move/Sui’s Move.toml / Move.lock / Published.toml separation,\nbut adapted to CKB’s CellDep/OutPoint-based deployment model rather than Sui’s\nnative package-object model.\nCell.toml — Source Package Declaration (Unchanged)\nCell.toml continues to serve as the source-facing package manifest. 
No\nstructural changes are required for registry support.\n[package]\nname = \"amm\"\nversion = \"1.2.0\"\nnamespace = \"cellscript\"\n[dependencies]\ntoken = { version = \"0.3.0\", path = \"../token\" }\n[build]\ntarget_profile = \"ckb\"\n[deploy.ckb]\nhash_type = \"data1\"\ndep_type = \"code\"\n[[deploy.ckb.cell_deps]]\nname = \"secp256k1\"\nout_point = \"0x...:0\"\ndep_type = \"dep_group\"\nhash_type = \"type\"\nKey invariant: Cell.toml describes deployment intents (what hash_type\nshould be), not deployment facts (which specific out_point was deployed to).\nIntents are determined at compile time; facts are determined after deployment.\nThe [deploy.ckb] section already exists in the current Cell.toml schema.\nThe compiler validates hash_type and dep_type values as compile errors, not\nwarnings, because a builder that uses the wrong script hash mode or cell dep\nmode can deploy a transaction that differs from the audited artifact identity.\nCell.lock — Build Identity Lock (Extended)\nThe existing Cell.lock records dependency versions and sources. 
The registry\nextension adds build identity hashes and deployment references.\nLockfile schema:\nversion = 1\n[meta]\ncellscript_version = \"0.19.0\"\nlock_schema = \"cellscript-lock-v1\"\n[package]\nname = \"amm\"\nversion = \"1.2.0\"\nsource_hash = \"blake2b:0xabcd...\"\n[package.build]\ncompiler_version = \"0.19.0\"\ntarget_profile = \"ckb\"\nartifact_hash = \"blake2b:0x1234...\"\nmetadata_hash = \"blake2b:0x5678...\"\nschema_hash = \"blake2b:0x9abc...\"\nabi_hash = \"blake2b:0xdef0...\"\nconstraints_hash = \"blake2b:0x1111...\"\n[dependencies.token]\nversion = \"0.3.0\"\nsource = { path = \"../token\" }\nsource_hash = \"blake2b:0x2222...\"\nbuild = { artifact_hash = \"blake2b:0x3333...\", abi_hash = \"blake2b:0x4444...\" }\n[deployment.ckb.aggron4]\nstatus = \"deployed\"\nrecord = \"ckb-testnet:0x5678...\"\nrecord_hash = \"blake2b:0x9a9a...\"\n[deployment.ckb.mainnet]\nstatus = \"undeployed\"\nCross-file binding: The record field references the deployment by network\nand identifier. The record_hash field is the Blake2b-256 hash of the\ncorresponding [[deployments]] entry in Deployed.toml, serialized as\ncanonical JSON (not canonical TOML). TOML has no standardized canonical\nserialization; JSON does. This is consistent with the existing metadata_hash\ncomputation in src/cli/commands.rs, which uses ckb_blake2b256(serde_json::to_vec(&metadata)).\nThe record_hash computation:\nDeserialize the [[deployments]] TOML entry into a Rust struct.\nSerialize the struct to canonical JSON (serde_json::to_string with sorted\nkeys, compact, no whitespace).\nrecord_hash = ckb_blake2b256(canonical_json_bytes).\nPhase 1 makes record_hash optional: if present, cellc registry verify\nchecks that it matches the actual Deployed.toml entry; if absent, the\nverification step is skipped with a warning. 
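The three record_hash steps can be sketched in Python (an illustrative stand-in for the Rust implementation: stdlib hashlib.blake2b with the "ckb-default-hash" personalization matches CKB's Blake2b-256, and the deployment entry fields shown are abbreviated examples, not the full schema):

```python
import hashlib
import json

def ckb_blake2b256(data: bytes) -> bytes:
    # CKB's Blake2b-256: 32-byte digest with the "ckb-default-hash" personalization.
    return hashlib.blake2b(data, digest_size=32, person=b"ckb-default-hash").digest()

def record_hash(deployment_entry: dict) -> str:
    # Canonical JSON: sorted keys, compact separators, no whitespace.
    canonical = json.dumps(deployment_entry, sort_keys=True, separators=(",", ":"))
    return "blake2b:0x" + ckb_blake2b256(canonical.encode("utf-8")).hex()

entry = {"network": "aggron4", "tx_hash": "0xaaaa...", "output_index": 0}
reordered = {"output_index": 0, "network": "aggron4", "tx_hash": "0xaaaa..."}
# Key order in the source TOML entry must not affect the hash.
assert record_hash(entry) == record_hash(reordered)
```

Because TOML key order carries no meaning, the sort-keys step is what makes the hash reproducible across independent writers of the same record.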
Future phases may require\nrecord_hash for production packages.\nBackward compatibility: The lockfile uses a single version 1 schema.\nThe [package.build] and [deployment.*] sections are optional; their absence\nsimply means the package has not been built or deployed yet.\nKey invariants:\nCell.lock is the cryptographic bind point between source and deployment.\nAny hash mismatch between Cell.lock, compiled artifacts, and Deployed.toml\nrecords causes fail-closed rejection.\nThe [deployment.*] section references deployment records in Deployed.toml\nby network. It does not duplicate the full deployment facts; those live in\nDeployed.toml.\nStale or mismatched artifact/metadata/deployment hashes fail closed.\nDeployed.toml — Deployment Fact Record (New)\nDeployed.toml is the CKB analogue of Move/Sui’s Published.toml. It is\nautomatically generated by the deployment tool after the on-chain transaction is\nconfirmed, and records immutable deployment facts derived from the chain.\nWho Generates and Manages Deployed.toml\nDeployed.toml is generated by the CellScript deployment tool (cellscript-deploy\nor the adapter crate’s CellScriptAdapter::deploy_artifact() API). It is not\nhand-authored.\nThe generation path is trust-free by construction: the existing adapter crate\narchitecture is headless-first, meaning all deployment facts are computed\nlocally before the transaction is submitted, and the only chain-derived value\nneeded after submission is the tx_hash.\nGeneration flow (matches existing deploy_artifact → build_deploy_transaction\n→ build_deployment_manifest_from_evidence pipeline):\n1. cellc build\n→ produces artifact, metadata, constraints, schema, ABI\n→ all build hashes computed locally (artifact_hash, metadata_hash,\nschema_hash, abi_hash, constraints_hash)\n2. 
build_deploy_transaction(spec)\n→ headless: computes TYPE_ID args, data_hash, code_hash,\noccupied capacity, change output locally\n→ returns (TransactionView, ResolvedDeployEvidence)\n→ evidence already contains: code_hash, hash_type, type_id_args,\nartifact_hash, occupied_capacity, tx_size\n3. submit + wait_for_commitment\n→ sends transaction through full node RPC\n→ waits for committed status\n→ receives tx_hash from the node response\n4. build_deployment_manifest_from_evidence(evidence, tx_hash, output_index)\n→ constructs DeploymentManifest from locally-computed evidence + tx_hash\n→ no get_transaction call needed: all hash fields already known\n→ extends to Deployed.toml by adding network, chain_id, build section,\nand Cell.lock record_hash\nWhy no get_transaction / on-chain re-derivation is needed: The existing\nadapter crate’s build_deploy_transaction already computes data_hash = blake2b(artifact_binary) locally (line 447 of lib.rs). The\nResolvedDeployEvidence already carries code_hash, hash_type, and\ntype_id_args. The only chain-derived value is tx_hash, which is returned\nby send_transaction. The full node RPC is used for submission and commitment\nwaiting, not for re-deriving fields that the tool already knows.\nVerification path (not generation) is where on-chain reads happen:\ncellc registry verify calls get_live_cell to confirm that the on-chain code\ncell’s data matches data_hash in Deployed.toml. This separation of\ngeneration and verification keeps the trust model clean: generation is\nself-contained, verification is independently reproducible.\nData source requirement: Phase 1 requires a CKB full node RPC endpoint\nfor transaction submission and commitment waiting (already required by\nCellScriptAdapter::connect()). Verification additionally uses get_live_cell.\nLight client support is a possible Phase 3 enhancement.\nImmutability: Once generated, Deployed.toml must not be modified. 
Any\nre-deployment or upgrade produces a new [[deployments]] entry with a distinct\nset of chain facts, not an edit to an existing entry.\nversion = 1\nschema = \"cellscript-deployed-v0.19\"\n[package]\nname = \"amm\"\nversion = \"1.2.0\"\nsource_hash = \"blake2b:0xabcd...\"\n[build]\ncompiler_version = \"0.19.0\"\nartifact_hash = \"blake2b:0x1234...\"\nmetadata_hash = \"blake2b:0x5678...\"\nschema_hash = \"blake2b:0x9abc...\"\nabi_hash = \"blake2b:0xdef0...\"\nconstraints_hash = \"blake2b:0x1111...\"\n[[deployments]]\nnetwork = \"aggron4\"\nchain_id = \"ckb-testnet\"\nscript_role = \"type\"\ntx_hash = \"0xaaaa...\"\noutput_index = 0\ncode_hash = \"0xbbbb...\"\nhash_type = \"data1\"\ndep_type = \"code\"\nout_point = \"0xaaaa...:0\"\ndata_hash = \"0xcccc...\"\ntype_id = \"0xdddd...\"\n[[deployments.cell_deps]]\nname = \"secp256k1\"\ntx_hash = \"0xeeee...\"\noutput_index = 1\ndep_type = \"dep_group\"\nhash_type = \"type\"\n[[deployments]]\nnetwork = \"ckb-mainnet\"\nchain_id = \"ckb-mainnet\"\nscript_role = \"type\"\nstatus = \"candidate\"\nRelationship to existing DeploymentManifest: The current\nDeploymentManifest type in crates/cellscript-ckb-adapter/src/lib.rs has\nDeploymentRef with name/code_hash/hash_type/args/dep_type/out_point.\nDeployed.toml is an enhanced deployment manifest that adds:\nnetwork and chain_id — which chain this deployment targets\nscript_role — lock, type, dual-role, or helper dependency\ndata_hash — the data hash of the deployed code cell\ntype_id — TYPE_ID upgrade lineage where applicable\nstatus — deployment lifecycle state\nThe full [build] section — binding the deployment to build identity\nThe adapter crate’s load_deployment_manifest /\nparse_deployment_manifest functions should be extended to support the new\nschema while maintaining backward compatibility with the existing\ncellscript-ckb-deployment-manifest-v0.19 schema.\nDeployment Record Field Classification\nFields are classified by necessity:\nProposed Fields (Phase 1 — minimum 
for a verifiable deployment)\nnetwork: Which network this deployment targets\nchain_id: Chain identifier\ntx_hash: Deployment transaction hash\noutput_index: Output index in the deployment transaction\ncode_hash: Script identity\nhash_type: data / type / data1 / data2\ndep_type: code / dep_group\ndata_hash: Artifact data hash\nout_point: CellDep reference\nRecommended Fields (Phase 1 — build provenance binding)\nartifact_hash: RISC-V binary hash\nmetadata_hash: Compiler metadata hash\nschema_hash: Schema manifest hash\nabi_hash: ABI hash\nconstraints_hash: Constraints report hash\ncompiler_version: Compiler version that produced the artifact\nOptional Fields (Phase 2 — governance and upgrade)\ntype_id: TYPE_ID upgrade lineage\nscript_role: lock / type / dual-role / helper\nstatus: active / candidate / deprecated / revoked\nupgrade_lineage: TYPE_ID upgrade chain\naudit_report_hash: Audit report hash\npublisher_signature: Publisher identity signature\nDeployment Status Lifecycle\n(undeployed) → candidate: deploy to network\ncandidate → active: confirm + audit pass\ncandidate → deprecated: revoke or supersede\nactive → deprecated: supersede\ndeprecated → revoked: revoke\nA deployment record must not be treated as production-ready until its status reaches active. 
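The lifecycle reduces to a small transition table; this Python sketch is illustrative only, not part of the actual deployment tooling:

```python
# Allowed status transitions for a deployment record, per the lifecycle above.
ALLOWED = {
    "undeployed": {"candidate"},            # deploy to network
    "candidate": {"active", "deprecated"},  # confirm + audit pass, or revoke/supersede
    "active": {"deprecated"},               # supersede
    "deprecated": {"revoked"},              # revoke
    "revoked": set(),                       # terminal
}

def transition(status: str, new_status: str) -> str:
    if new_status not in ALLOWED.get(status, set()):
        raise ValueError(f"illegal transition: {status} -> {new_status}")
    return new_status

def production_ready(status: str) -> bool:
    # Builders should require `active` unless explicitly overridden.
    return status == "active"

assert production_ready(transition("candidate", "active"))
assert not production_ready("candidate")
```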
The candidate state allows builders to preview and dry-run\nagainst a deployment, but production transaction construction should require\nactive status unless explicitly overridden.\nSource Package Registry (Off-Chain)\nDesign Choice: Git-Based Index to Start\nThe recommended approach is to start with a Git-repository-based lightweight\nregistry index, similar to how crates.io uses a Git index, and evolve toward an\nindependent service as ecosystem needs grow.\nRationale:\nDoes not block the v0.12 stable release.\nA lightweight approach is achievable in the shortest time.\nThe CKB ecosystem is currently small enough that a full registry service\nwould be over-engineering.\nA Git-repository index with content-addressed source packages already\nsatisfies the Phase 1 acceptance criteria in the v0.19 roadmap.\nRegistry Index Format\nThe index repository is structured by namespace and name prefix, similar to\ncrates.io’s Git index:\nregistry/\n├── cellscript/\n│ ├── amm.json\n│ └── token.json\n└── other-protocol/\n└── swap.json\nEach index record:\n{\n\"name\": \"cellscript-amm\",\n\"namespace\": \"cellscript\",\n\"versions\": [\n{\n\"version\": \"1.2.0\",\n\"source_hash\": \"blake2b:0xabcd...\",\n\"source_archive\": \"sha256:0xef01...\",\n\"cellscript_version\": \"0.19.0\",\n\"dependencies\": {\n\"token\": \"0.3.0\"\n},\n\"abi_index\": \"blake2b:0xdef0...\",\n\"schema_hash\": \"blake2b:0x9abc...\",\n\"license\": \"MIT\",\n\"released_at\": \"2026-04-24T00:00:00Z\",\n\"yanked\": false,\n\"audit\": {\n\"report_hash\": \"blake2b:0x5555...\",\n\"acceptance_gate\": \"passed\"\n}\n}\n]\n}\nSource Archive Format\nSource packages are distributed as content-addressed archives:\nArchive format: .tar.gz\nContent addressing: SHA-256 of the archive bytes\nThe source_hash in the index is the Blake2b-256 of the canonical source tree\n(the same hash recorded in Cell.lock [package].source_hash)\nThe source_archive in the index is the SHA-256 of the .tar.gz file\nThis two-hash scheme 
allows:\nVerifying the archive integrity with source_archive (SHA-256)\nVerifying the source tree identity with source_hash (Blake2b-256, matching\nthe CKB hash personalization used by cellc ckb-hash)\nCLI Integration\n# Publish to the source registry\ncellc publish\n# Install from the source registry\ncellc install token@0.3.0\n# Verify package integrity against source and build artifacts\ncellc package verify\n# Verify deployment identity against chain facts\ncellc registry verify\nThe resolve_from_registry method in src/package/mod.rs currently returns\nan error stating “registry dependency is not supported yet; use a local path\ndependency.” The registry implementation replaces this stub with actual index\nresolution, archive download, hash verification, and Cell.toml parsing.\nDeployment Registry (Chain-Indexed)\nDesign Choice: Off-Chain First, Chain-Indexed When Needed\nPhase 1: Pure off-chain Deployed.toml records, verified through\nCell.lock hash binding.\nPhase 2: Optional on-chain type script index, driven by ecosystem demand.\nRationale:\nCKB capacity costs make on-chain source-package storage unattractive.\nDeployment facts through Deployed.toml + Cell.lock hash binding are\nsufficient for builder-level verification.\nAn on-chain index script adds complexity and should be driven by actual\necosystem demand, not speculative design.\nBuilder Verification Flow\nThe builder must verify the full identity chain before constructing a\nproduction transaction:\ncellc build\n→ generates artifact, metadata, schema, abi, constraints\n→ writes Cell.lock [package.build]\ncellc deploy-plan\n→ reads Cell.lock [package.build]\n→ reads Cell.toml [deploy.ckb] intent\n→ produces deployment plan JSON\nAfter deployment transaction is confirmed on-chain\n→ generates Deployed.toml (chain facts)\n→ updates Cell.lock [deployment.ckb.<network>]\ncellc registry verify\n→ reads Cell.lock build hashes\n→ reads Deployed.toml deployment facts\n→ verifies:\n1. 
source_hash matches between Cell.lock and Deployed.toml\n2. artifact_hash matches between Cell.lock and Deployed.toml\n3. data_hash = blake2b(artifact) against on-chain code cell\n4. code_hash in Deployed.toml matches on-chain script\n5. out_point is reachable as CellDep\n6. schema_hash / abi_hash consistent with metadata\n7. constraints_hash consistent with constraints report\n→ any mismatch → FAIL CLOSED\nAction Builder Integration\nThe CellScript Action Builder (the v0.19 P0 target) consumes deployment\nregistry records through the registry-client module:\nmetadata-loader (load/validate metadata, ABI, recipe) → registry-client (resolve package, resolve deploy, verify hashes against lockfile) → cell-resolver (select live cells via CCC/indexer)\nThe registry-client module is responsible for:\nResolving package records from the source registry index.\nResolving deployment records from Deployed.toml.\nVerifying that resolved hashes match Cell.lock.\nRejecting hash mismatches, missing ABI records, and incompatible metadata\nschema versions.\nThe Action Builder must not accept a package by name alone. 
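As a sketch of the Action Builder's fail-closed acceptance rule (hypothetical record shapes; the real checks belong to the registry-client module and cellc registry verify, and the field list here is a subset):

```python
def verify_binding(lock: dict, deployed: dict) -> None:
    # Fail closed: a missing hash is treated as fatal, the same as a mismatch.
    for field in ("source_hash", "artifact_hash", "abi_hash", "constraints_hash"):
        lock_value, deployed_value = lock.get(field), deployed.get(field)
        if lock_value is None or deployed_value is None:
            raise ValueError(f"missing {field}: refusing to build")
        if lock_value != deployed_value:
            raise ValueError(f"{field} mismatch: refusing to build")

lock = {
    "source_hash": "blake2b:0xabcd...",
    "artifact_hash": "blake2b:0x1234...",
    "abi_hash": "blake2b:0xdef0...",
    "constraints_hash": "blake2b:0x1111...",
}
verify_binding(lock, dict(lock))  # identical hash sets pass silently
```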
It must verify that\nthe resolved source package, build artifact, constraints report, and CKB\ndeployment identity all match.\nconstraints_hash Generation\nThe constraints_hash field is critical for deployment safety: it binds the\ndeployment to the exact set of constraints the compiler generated, preventing\na compromised constraints report from being substituted after deployment.\nPhase 1 approach — same-version stability: cellc build generates\nconstraints_hash using the same method as the existing metadata_hash\ncomputation:\nconstraints_hash = ckb_blake2b256(serde_json::to_vec(&constraints))\nThis matches the existing pattern in src/cli/commands.rs where\nmetadata_hash is computed as ckb_blake2b256(serde_json::to_vec(&result.metadata)).\nDeterminism guarantees in Phase 1:\nSame compiler version + same source + same compile options → same\nConstraintsMetadata struct → same serde_json::to_vec output → same\nconstraints_hash. This is sufficient for Phase 1 because constraints_hash\nis only compared within the same compiler version.\nThe ConstraintsMetadata struct fields are ordered by Rust struct field\ndefinition order, which is stable within a compiler version.\nVec fields (entry_abi, runtime_errors, warnings, failures) are\nemitted in the compiler’s internal iteration order, which is deterministic\nfor the same input within the same compiler version.\nKnown limitation: Cross-compiler-version constraints_hash comparison is\nnot supported and should not be attempted. 
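One way to enforce that limitation, sketched in Python (illustrative record shapes; the schema version values shown are assumptions, not documented constants):

```python
def compare_constraints_hash(a: dict, b: dict) -> bool:
    # Cross-version comparison is rejected outright, never attempted:
    # differing schema versions mean the hashes are not comparable.
    if a["metadata_schema_version"] != b["metadata_schema_version"]:
        raise ValueError("metadata schema version mismatch: refusing hash comparison")
    return a["constraints_hash"] == b["constraints_hash"]

same = {"metadata_schema_version": "v0.19", "constraints_hash": "blake2b:0x1111..."}
assert compare_constraints_hash(same, dict(same))
```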
The metadata_schema_version field\nin CompileMetadata serves as the version gate — if schema versions differ,\nverification must reject the comparison, not attempt hash matching.\nPhase 2 enhancement: For stronger cross-build determinism (e.g.,\nverifying that two independent builds of the same source produce the same\nconstraints_hash), the ConstraintsMetadata struct should:\nSort all Vec fields by a stable key (entry_name, code, etc.)\nReplace any HashMap with BTreeMap for key ordering\nPin the serde_json serialization to compact output with sorted keys\nThese changes are backward-compatible: they only affect the hash computation,\nnot the schema. A Phase 2 migration can compute both the old and new hashes\nto bridge the transition.",
          "content_html": "<h1><a name=\"p-24148-plan-for-cellscript-package-provenance-and-deployment-identity-1\" class=\"anchor\" href=\"#p-24148-plan-for-cellscript-package-provenance-and-deployment-identity-1\" aria-label=\"Heading link\"></a>Plan for CellScript Package Provenance and Deployment Identity</h1>\n<p><strong>Status</strong>: RFC — early design discussion<br>\n<strong>Scope</strong>: Source package registry, deployment registry, lockfile binding, and<br>\nbuilder verification for CellScript on CKB<br>\n<strong>Target Version</strong>: 0.19 ~ 0.20</p>\n<h2><a name=\"p-24148-core-principle-2\" class=\"anchor\" href=\"#p-24148-core-principle-2\" aria-label=\"Heading link\"></a>Core Principle</h2>\n<blockquote>\n<p>CellScript packages should be distributed like development packages, but<br>\nverified like smart-contract deployments.</p>\n</blockquote>\n<p>The off-chain registry optimizes for source distribution and developer<br>\nexperience. CKB records only compact, verifiable deployment truth where it is<br>\nactually useful. 
The lockfile binds the two.</p>\n<h2><a name=\"p-24148-three-layer-identity-model-3\" class=\"anchor\" href=\"#p-24148-three-layer-identity-model-3\" aria-label=\"Heading link\"></a>Three-Layer Identity Model</h2>\n<p>CellScript packages exist in three identity layers, each with a distinct<br>\nverification scope:</p>\n<pre><code class=\"lang-auto\">┌─────────────────────────────────────────────────────────────┐\n│  Package Identity                                           │\n│  namespace / name / version / source_hash                   │\n│  Carrier: Cell.toml + source registry index                 │\n│  Verified: compile time                                     │\n├─────────────────────────────────────────────────────────────┤\n│  Build Identity                                             │\n│  compiler_version / metadata_schema / schema_hash /         │\n│  abi_hash / artifact_hash / constraints_hash                │\n│  Carrier: Cell.lock [package.build]                         │\n│  Verified: build time                                       │\n├─────────────────────────────────────────────────────────────┤\n│  Deployment Identity                                        │\n│  chain / network / code_cell / out_point / data_hash /      │\n│  dep_type / type_id_lineage / script_role                   │\n│  Carrier: Deployed.toml                                     │\n│  Verified: runtime / production                             │\n└─────────────────────────────────────────────────────────────┘\n</code></pre>\n<p>Each layer is independently meaningful but cryptographically bound to the<br>\nlayers above and below through the lockfile.</p>\n<h3><a name=\"p-24148-package-states-4\" class=\"anchor\" href=\"#p-24148-package-states-4\" aria-label=\"Heading link\"></a>Package States</h3>\n<p>A CellScript package can exist in at least two operational states:</p>\n<p><strong>Source-only / undeployed package.</strong> A normal development package 
containing<br>\n<code>.cell</code> source files, interfaces, schemas, docs, tests, examples, and<br>\nreproducible build metadata. It can be imported, compiled, tested, audited, and<br>\nused as a library dependency. However, it does not by itself claim any<br>\nproduction deployment identity on CKB.</p>\n<p><strong>Deployment-bound package.</strong> A package version whose built artifact has been<br>\ndeployed, and whose deployment identity can be verified. For CKB, this means<br>\nbinding the package version to facts such as CellDep, OutPoint, data_hash,<br>\ndep_type, script/code hash, schema/ABI commitments, constraints report,<br>\ncompiler version, and possibly type-id lineage.</p>\n<p>A deployment-bound package is what wallets and production builders should rely<br>\non when constructing real transactions.</p>\n<p>The same source package version may have zero, one, or many deployment<br>\nbindings. For example, <code>amm@1.2.0</code> may start as a source-only package, later<br>\ngain a CKB testnet deployment, then eventually a CKB mainnet deployment. These<br>\nare separate deployment records attached to the same source/package identity,<br>\nnot separate source packages.</p>\n<pre><code class=\"lang-auto\">amm@1.2.0\n  ├─ source:  blake2b:0xabcd...\n  ├─ build:   artifact=0x1234... abi=0xdef0...\n  ├─ deployed:\n  │   ├─ aggron4:  out_point=0xaaaa...:0  status=active\n  │   └─ mainnet:  status=candidate\n  └─ (same source version, multiple deployment bindings)\n</code></pre>\n<h2><a name=\"p-24148-why-not-pure-on-chain-packages-5\" class=\"anchor\" href=\"#p-24148-why-not-pure-on-chain-packages-5\" aria-label=\"Heading link\"></a>Why Not Pure On-Chain Packages?</h2>\n<p>It is unlikely that publishing every CellScript source package directly to CKB<br>\nis the right default.</p>\n<p>Source archives, docs, examples, tests, schema manifests, and editor metadata<br>\nare development artifacts, not consensus-critical state. 
Frequent package<br>\nreleases would create unnecessary permanent state churn, and CKB capacity costs<br>\nmake source-package storage especially unattractive.</p>\n<p>The chain should probably record compact deployment facts and commitments, not<br>\nreplace the whole source distribution system.</p>\n<h2><a name=\"p-24148-why-not-pure-off-chain-packages-6\" class=\"anchor\" href=\"#p-24148-why-not-pure-off-chain-packages-6\" aria-label=\"Heading link\"></a>Why Not Pure Off-Chain Packages?</h2>\n<p>A pure off-chain registry also seems insufficient.</p>\n<p>For production CKB contracts, builders and wallets need concrete deployment<br>\nidentity: CellDep, OutPoint, data_hash, dep_type, script/code hash checks,<br>\nschema/ABI commitments, and ideally provenance back to the source package,<br>\ncompiler version, and constraints report.</p>\n<p>A compromised or stale source registry should not be enough to trick a<br>\nproduction builder into using the wrong deployed artifact.</p>\n<h2><a name=\"p-24148-file-responsibility-split-7\" class=\"anchor\" href=\"#p-24148-file-responsibility-split-7\" aria-label=\"Heading link\"></a>File Responsibility Split</h2>\n<p>Inspired by Move/Sui’s <code>Move.toml</code> / <code>Move.lock</code> / <code>Published.toml</code> separation,<br>\nbut adapted to CKB’s CellDep/OutPoint-based deployment model rather than Sui’s<br>\nnative package-object model.</p>\n<h3><a name=\"p-24148-celltoml-source-package-declaration-unchanged-8\" class=\"anchor\" href=\"#p-24148-celltoml-source-package-declaration-unchanged-8\" aria-label=\"Heading link\"></a>Cell.toml — Source Package Declaration (Unchanged)</h3>\n<p><code>Cell.toml</code> continues to serve as the source-facing package manifest. 
No<br>\nstructural changes are required for registry support.</p>\n<pre data-code-wrap=\"toml\"><code class=\"lang-toml\">[package]\nname = \"amm\"\nversion = \"1.2.0\"\nnamespace = \"cellscript\"\n\n[dependencies]\ntoken = { version = \"0.3.0\", path = \"../token\" }\n\n[build]\ntarget_profile = \"ckb\"\n\n[deploy.ckb]\nhash_type = \"data1\"\ndep_type = \"code\"\n\n[[deploy.ckb.cell_deps]]\nname = \"secp256k1\"\nout_point = \"0x...:0\"\ndep_type = \"dep_group\"\nhash_type = \"type\"\n</code></pre>\n<p><strong>Key invariant</strong>: <code>Cell.toml</code> describes deployment <em>intents</em> (what hash_type<br>\nshould be), not deployment <em>facts</em> (which specific out_point was deployed to).<br>\nIntents are determined at compile time; facts are determined after deployment.</p>\n<p>The <code>[deploy.ckb]</code> section already exists in the current <code>Cell.toml</code> schema.<br>\nThe compiler validates <code>hash_type</code> and <code>dep_type</code> values as compile errors, not<br>\nwarnings, because a builder that uses the wrong script hash mode or cell dep<br>\nmode can deploy a transaction that differs from the audited artifact identity.</p>\n<h3><a name=\"p-24148-celllock-build-identity-lock-extended-9\" class=\"anchor\" href=\"#p-24148-celllock-build-identity-lock-extended-9\" aria-label=\"Heading link\"></a>Cell.lock — Build Identity Lock (Extended)</h3>\n<p>The existing <code>Cell.lock</code> records <strong>dependency versions</strong> and <strong>sources</strong>. 
The registry<br>\nextension adds build identity hashes and deployment references.</p>\n<p><strong>Lockfile schema</strong>:</p>\n<pre data-code-wrap=\"toml\"><code class=\"lang-toml\">version = 1\n\n[meta]\ncellscript_version = \"0.19.0\"\nlock_schema = \"cellscript-lock-v1\"\n\n[package]\nname = \"amm\"\nversion = \"1.2.0\"\nsource_hash = \"blake2b:0xabcd...\"\n\n[package.build]\ncompiler_version = \"0.19.0\"\ntarget_profile = \"ckb\"\nartifact_hash = \"blake2b:0x1234...\"\nmetadata_hash = \"blake2b:0x5678...\"\nschema_hash = \"blake2b:0x9abc...\"\nabi_hash = \"blake2b:0xdef0...\"\nconstraints_hash = \"blake2b:0x1111...\"\n\n[dependencies.token]\nversion = \"0.3.0\"\nsource = { path = \"../token\" }\nsource_hash = \"blake2b:0x2222...\"\nbuild = { artifact_hash = \"blake2b:0x3333...\", abi_hash = \"blake2b:0x4444...\" }\n\n[deployment.ckb.aggron4]\nstatus = \"deployed\"\nrecord = \"ckb-testnet:0x5678...\"\nrecord_hash = \"blake2b:0x9a9a...\"\n\n[deployment.ckb.mainnet]\nstatus = \"undeployed\"\n</code></pre>\n<p><strong>Cross-file binding</strong>: The <code>record</code> field references the deployment by network<br>\nand identifier. The <code>record_hash</code> field is the Blake2b-256 hash of the<br>\ncorresponding <code>[[deployments]]</code> entry in <code>Deployed.toml</code>, serialized as<br>\n<strong>canonical JSON</strong> (not canonical TOML). TOML has no standardized canonical<br>\nserialization; JSON does. 
This is consistent with the existing <code>metadata_hash</code><br>\ncomputation in <code>src/cli/commands.rs</code>, which uses <code>ckb_blake2b256(serde_json::to_vec(&amp;metadata))</code>.</p>\n<p>The <code>record_hash</code> computation:</p>\n<ol>\n<li>Deserialize the <code>[[deployments]]</code> TOML entry into a Rust struct.</li>\n<li>Serialize the struct to canonical JSON (<code>serde_json::to_string</code> with sorted<br>\nkeys, compact, no whitespace).</li>\n<li><code>record_hash = ckb_blake2b256(canonical_json_bytes)</code>.</li>\n</ol>\n<p>Phase 1 makes <code>record_hash</code> optional: if present, <code>cellc registry verify</code><br>\nchecks that it matches the actual <code>Deployed.toml</code> entry; if absent, the<br>\nverification step is skipped with a warning. Future phases may require<br>\n<code>record_hash</code> for production packages.</p>\n<p><strong>Backward compatibility</strong>: The lockfile uses a single version 1 schema.<br>\nThe <code>[package.build]</code> and <code>[deployment.*]</code> sections are optional; their absence<br>\nsimply means the package has not been built or deployed yet.</p>\n<p><strong>Key invariants</strong>:</p>\n<ul>\n<li><code>Cell.lock</code> is the cryptographic bind point between source and deployment.</li>\n<li>Any hash mismatch between <code>Cell.lock</code>, compiled artifacts, and <code>Deployed.toml</code><br>\nrecords causes fail-closed rejection.</li>\n<li>The <code>[deployment.*]</code> section references deployment records in <code>Deployed.toml</code><br>\nby network. 
It does not duplicate the full deployment facts; those live in<br>\n<code>Deployed.toml</code>.</li>\n<li>Stale or mismatched artifact/metadata/deployment hashes fail closed.</li>\n</ul>\n<h3><a name=\"p-24148-deployedtoml-deployment-fact-record-new-10\" class=\"anchor\" href=\"#p-24148-deployedtoml-deployment-fact-record-new-10\" aria-label=\"Heading link\"></a>Deployed.toml — Deployment Fact Record (New)</h3>\n<p><code>Deployed.toml</code> is the CKB analogue of Move/Sui’s <code>Published.toml</code>. It is<br>\nautomatically generated by the deployment tool after the on-chain transaction is<br>\nconfirmed, and records immutable deployment facts derived from the chain.</p>\n<h4><a name=\"p-24148-who-generates-and-manages-deployedtoml-11\" class=\"anchor\" href=\"#p-24148-who-generates-and-manages-deployedtoml-11\" aria-label=\"Heading link\"></a>Who Generates and Manages Deployed.toml</h4>\n<p><code>Deployed.toml</code> is generated by the CellScript deployment tool (<code>cellscript-deploy</code><br>\nor the adapter crate’s <code>CellScriptAdapter::deploy_artifact()</code> API). It is not<br>\nhand-authored.</p>\n<p>The generation path is trust-free by construction: the existing adapter crate<br>\narchitecture is headless-first, meaning all deployment facts are computed<br>\nlocally before the transaction is submitted, and the only chain-derived value<br>\nneeded after submission is the <code>tx_hash</code>.</p>\n<p><strong>Generation flow</strong> (matches existing <code>deploy_artifact</code> → <code>build_deploy_transaction</code><br>\n→ <code>build_deployment_manifest_from_evidence</code> pipeline):</p>\n<pre><code class=\"lang-auto\">1. cellc build\n   → produces artifact, metadata, constraints, schema, ABI\n   → all build hashes computed locally (artifact_hash, metadata_hash,\n     schema_hash, abi_hash, constraints_hash)\n\n2. 
build_deploy_transaction(spec)\n   → headless: computes TYPE_ID args, data_hash, code_hash,\n     occupied capacity, change output locally\n   → returns (TransactionView, ResolvedDeployEvidence)\n   → evidence already contains: code_hash, hash_type, type_id_args,\n     artifact_hash, occupied_capacity, tx_size\n\n3. submit + wait_for_commitment\n   → sends transaction through full node RPC\n   → waits for committed status\n   → receives tx_hash from the node response\n\n4. build_deployment_manifest_from_evidence(evidence, tx_hash, output_index)\n   → constructs DeploymentManifest from locally-computed evidence + tx_hash\n   → no get_transaction call needed: all hash fields already known\n   → extends to Deployed.toml by adding network, chain_id, build section,\n     and Cell.lock record_hash\n</code></pre>\n<p><strong>Why no <code>get_transaction</code> / on-chain re-derivation is needed</strong>: The existing<br>\nadapter crate’s <code>build_deploy_transaction</code> already computes <code>data_hash = blake2b(artifact_binary)</code> locally (line 447 of <code>lib.rs</code>). The<br>\n<code>ResolvedDeployEvidence</code> already carries <code>code_hash</code>, <code>hash_type</code>, and<br>\n<code>type_id_args</code>. The only chain-derived value is <code>tx_hash</code>, which is returned<br>\nby <code>send_transaction</code>. The full node RPC is used for submission and commitment<br>\nwaiting, not for re-deriving fields that the tool already knows.</p>\n<p><strong>Verification path</strong> (not generation) is where on-chain reads happen:<br>\n<code>cellc registry verify</code> calls <code>get_live_cell</code> to confirm that the on-chain code<br>\ncell’s data matches <code>data_hash</code> in <code>Deployed.toml</code>. 
This separation of<br>\ngeneration and verification keeps the trust model clean: generation is<br>\nself-contained, verification is independently reproducible.</p>\n<p><strong>Data source requirement</strong>: Phase 1 requires a CKB full node RPC endpoint<br>\nfor transaction submission and commitment waiting (already required by<br>\n<code>CellScriptAdapter::connect()</code>). Verification additionally uses <code>get_live_cell</code>.<br>\nLight client support is a possible Phase 3 enhancement.</p>\n<p><strong>Immutability</strong>: Once generated, <code>Deployed.toml</code> must not be modified. Any<br>\nre-deployment or upgrade produces a new <code>[[deployments]]</code> entry with a distinct<br>\nset of chain facts, not an edit to an existing entry.</p>\n<pre data-code-wrap=\"toml\"><code class=\"lang-toml\">version = 1\nschema = \"cellscript-deployed-v0.19\"\n\n[package]\nname = \"amm\"\nversion = \"1.2.0\"\nsource_hash = \"blake2b:0xabcd...\"\n\n[build]\ncompiler_version = \"0.19.0\"\nartifact_hash = \"blake2b:0x1234...\"\nmetadata_hash = \"blake2b:0x5678...\"\nschema_hash = \"blake2b:0x9abc...\"\nabi_hash = \"blake2b:0xdef0...\"\nconstraints_hash = \"blake2b:0x1111...\"\n\n[[deployments]]\nnetwork = \"aggron4\"\nchain_id = \"ckb-testnet\"\nscript_role = \"type\"\ntx_hash = \"0xaaaa...\"\noutput_index = 0\ncode_hash = \"0xbbbb...\"\nhash_type = \"data1\"\ndep_type = \"code\"\nout_point = \"0xaaaa...:0\"\ndata_hash = \"0xcccc...\"\ntype_id = \"0xdddd...\"\n\n[[deployments.cell_deps]]\nname = \"secp256k1\"\ntx_hash = \"0xeeee...\"\noutput_index = 1\ndep_type = \"dep_group\"\nhash_type = \"type\"\n\n[[deployments]]\nnetwork = \"ckb-mainnet\"\nchain_id = \"ckb-mainnet\"\nscript_role = \"type\"\nstatus = \"candidate\"\n</code></pre>\n<p><strong>Relationship to existing <code>DeploymentManifest</code></strong>: The current<br>\n<code>DeploymentManifest</code> type in <code>crates/cellscript-ckb-adapter/src/lib.rs</code> has<br>\n<code>DeploymentRef</code> with 
<code>name/code_hash/hash_type/args/dep_type/out_point</code>.<br>\n<code>Deployed.toml</code> is an enhanced deployment manifest that adds:</p>\n<ul>\n<li><code>network</code> and <code>chain_id</code> — which chain this deployment targets</li>\n<li><code>script_role</code> — lock, type, dual-role, or helper dependency</li>\n<li><code>data_hash</code> — the data hash of the deployed code cell</li>\n<li><code>type_id</code> — TYPE_ID upgrade lineage where applicable</li>\n<li><code>status</code> — deployment lifecycle state</li>\n<li>The full <code>[build]</code> section — binding the deployment to build identity</li>\n</ul>\n<p>The adapter crate’s <code>load_deployment_manifest</code> /<br>\n<code>parse_deployment_manifest</code> functions should be extended to support the new<br>\nschema while maintaining backward compatibility with the existing<br>\n<code>cellscript-ckb-deployment-manifest-v0.19</code> schema.</p>\n<h2><a name=\"p-24148-deployment-record-field-classification-12\" class=\"anchor\" href=\"#p-24148-deployment-record-field-classification-12\" aria-label=\"Heading link\"></a>Deployment Record Field Classification</h2>\n<p>Fields are classified by necessity:</p>\n<h3><a name=\"p-24148-proposed-fields-phase-1-minimum-for-deploy-verifiable-13\" class=\"anchor\" href=\"#p-24148-proposed-fields-phase-1-minimum-for-deploy-verifiable-13\" aria-label=\"Heading link\"></a>Proposed Fields (Phase 1 — minimum for deploy verifiable)</h3>\n<div class=\"md-table\">\n<table>\n<thead>\n<tr>\n<th>Field</th>\n<th>Purpose</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td><code>network</code></td>\n<td>Which network this deployment targets</td>\n</tr>\n<tr>\n<td><code>chain_id</code></td>\n<td>Chain identifier</td>\n</tr>\n<tr>\n<td><code>tx_hash</code></td>\n<td>Deployment transaction hash</td>\n</tr>\n<tr>\n<td><code>output_index</code></td>\n<td>Output index in deployment transaction</td>\n</tr>\n<tr>\n<td><code>code_hash</code></td>\n<td>Script 
identity</td>\n</tr>\n<tr>\n<td><code>hash_type</code></td>\n<td>data / type / data1 / data2</td>\n</tr>\n<tr>\n<td><code>dep_type</code></td>\n<td>code / dep_group</td>\n</tr>\n<tr>\n<td><code>data_hash</code></td>\n<td>Artifact data hash</td>\n</tr>\n<tr>\n<td><code>out_point</code></td>\n<td>CellDep reference</td>\n</tr>\n</tbody>\n</table>\n</div><h3><a name=\"p-24148-recommended-fields-phase-1-build-provenance-binding-14\" class=\"anchor\" href=\"#p-24148-recommended-fields-phase-1-build-provenance-binding-14\" aria-label=\"Heading link\"></a>Recommended Fields (Phase 1 — build provenance binding)</h3>\n<div class=\"md-table\">\n<table>\n<thead>\n<tr>\n<th>Field</th>\n<th>Purpose</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td><code>artifact_hash</code></td>\n<td>RISC-V binary hash</td>\n</tr>\n<tr>\n<td><code>metadata_hash</code></td>\n<td>Compiler metadata hash</td>\n</tr>\n<tr>\n<td><code>schema_hash</code></td>\n<td>Schema manifest hash</td>\n</tr>\n<tr>\n<td><code>abi_hash</code></td>\n<td>ABI hash</td>\n</tr>\n<tr>\n<td><code>constraints_hash</code></td>\n<td>Constraints report hash</td>\n</tr>\n<tr>\n<td><code>compiler_version</code></td>\n<td>Compiler version that produced the artifact</td>\n</tr>\n</tbody>\n</table>\n</div><h3><a name=\"p-24148-optional-fields-phase-2-governance-and-upgrade-15\" class=\"anchor\" href=\"#p-24148-optional-fields-phase-2-governance-and-upgrade-15\" aria-label=\"Heading link\"></a>Optional Fields (Phase 2 — governance and upgrade)</h3>\n<div class=\"md-table\">\n<table>\n<thead>\n<tr>\n<th>Field</th>\n<th>Purpose</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td><code>type_id</code></td>\n<td>TYPE_ID upgrade lineage</td>\n</tr>\n<tr>\n<td><code>script_role</code></td>\n<td>lock / type / dual-role / helper</td>\n</tr>\n<tr>\n<td><code>status</code></td>\n<td>active / candidate / deprecated / revoked</td>\n</tr>\n<tr>\n<td><code>upgrade_lineage</code></td>\n<td>TYPE_ID upgrade 
chain</td>\n</tr>\n<tr>\n<td><code>audit_report_hash</code></td>\n<td>Audit report hash</td>\n</tr>\n<tr>\n<td><code>publisher_signature</code></td>\n<td>Publisher identity signature</td>\n</tr>\n</tbody>\n</table>\n</div><h3><a name=\"p-24148-deployment-status-lifecycle-16\" class=\"anchor\" href=\"#p-24148-deployment-status-lifecycle-16\" aria-label=\"Heading link\"></a>Deployment Status Lifecycle</h3>\n<pre><code class=\"lang-auto\">                 deploy to network\n  (undeployed) ─────────────────► candidate\n                                      │\n                          confirm +   │              revoke or\n                          audit pass  │              supersede\n                                      ▼                  ▼\n                                    active          deprecated\n                                      │\n                          supersede   │\n                                      ▼\n                                    deprecated\n                                      │\n                           revoke     │\n                                      ▼\n                                    revoked\n</code></pre>\n<p>A deployment record must not be treated as production-ready until its status<br>\nreaches <code>active</code>. 
The <code>candidate</code> state allows builders to preview and dry-run<br>\nagainst a deployment, but production transaction construction should require<br>\n<code>active</code> status unless explicitly overridden.</p>\n<h2><a name=\"p-24148-source-package-registry-off-chain-17\" class=\"anchor\" href=\"#p-24148-source-package-registry-off-chain-17\" aria-label=\"Heading link\"></a>Source Package Registry (Off-Chain)</h2>\n<h3><a name=\"p-24148-design-choice-git-based-index-to-start-18\" class=\"anchor\" href=\"#p-24148-design-choice-git-based-index-to-start-18\" aria-label=\"Heading link\"></a>Design Choice: Git-Based Index to Start</h3>\n<p>The recommended approach is to start with a Git-repository-based lightweight<br>\nregistry index, similar to how <a href=\"http://crates.io\" rel=\"noopener nofollow ugc\">crates.io</a> uses a Git index, and evolve toward an<br>\nindependent service as ecosystem needs grow.</p>\n<p>Rationale:</p>\n<ul>\n<li>Does not block the v0.12 stable release.</li>\n<li>A lightweight approach is achievable in the shortest time.</li>\n<li>The CKB ecosystem is currently small enough that a full registry service<br>\nwould be over-engineering.</li>\n<li>A Git-repository index with content-addressed source packages already<br>\nsatisfies the Phase 1 acceptance criteria in the v0.19 roadmap.</li>\n</ul>\n<h3><a name=\"p-24148-registry-index-format-19\" class=\"anchor\" href=\"#p-24148-registry-index-format-19\" aria-label=\"Heading link\"></a>Registry Index Format</h3>\n<p>The index repository is structured by namespace and name prefix, similar to<br>\n<a href=\"http://crates.io\" rel=\"noopener nofollow ugc\">crates.io</a>’s Git index:</p>\n<pre><code class=\"lang-auto\">registry/\n├── cellscript/\n│   ├── amm.json\n│   └── token.json\n└── other-protocol/\n    └── swap.json\n</code></pre>\n<p>Each index record:</p>\n<pre data-code-wrap=\"json\"><code class=\"lang-json\">{\n  \"name\": \"cellscript-amm\",\n  \"namespace\": \"cellscript\",\n  
\"versions\": [\n    {\n      \"version\": \"1.2.0\",\n      \"source_hash\": \"blake2b:0xabcd...\",\n      \"source_archive\": \"sha256:0xef01...\",\n      \"cellscript_version\": \"0.19.0\",\n      \"dependencies\": {\n        \"token\": \"0.3.0\"\n      },\n      \"abi_index\": \"blake2b:0xdef0...\",\n      \"schema_hash\": \"blake2b:0x9abc...\",\n      \"license\": \"MIT\",\n      \"released_at\": \"2026-04-24T00:00:00Z\",\n      \"yanked\": false,\n      \"audit\": {\n        \"report_hash\": \"blake2b:0x5555...\",\n        \"acceptance_gate\": \"passed\"\n      }\n    }\n  ]\n}\n</code></pre>\n<h3><a name=\"p-24148-source-archive-format-20\" class=\"anchor\" href=\"#p-24148-source-archive-format-20\" aria-label=\"Heading link\"></a>Source Archive Format</h3>\n<p>Source packages are distributed as content-addressed archives:</p>\n<ul>\n<li>Archive format: <code>.tar.gz</code></li>\n<li>Content addressing: SHA-256 of the archive bytes</li>\n<li>The <code>source_hash</code> in the index is the Blake2b-256 of the canonical source tree<br>\n(the same hash recorded in <code>Cell.lock [package].source_hash</code>)</li>\n<li>The <code>source_archive</code> in the index is the SHA-256 of the <code>.tar.gz</code> file</li>\n</ul>\n<p>This two-hash scheme allows:</p>\n<ol>\n<li>Verifying the archive integrity with <code>source_archive</code> (SHA-256)</li>\n<li>Verifying the source tree identity with <code>source_hash</code> (Blake2b-256, matching<br>\nthe CKB hash personalization used by <code>cellc ckb-hash</code>)</li>\n</ol>\n<h3><a name=\"p-24148-cli-integration-21\" class=\"anchor\" href=\"#p-24148-cli-integration-21\" aria-label=\"Heading link\"></a>CLI Integration</h3>\n<pre data-code-wrap=\"bash\"><code class=\"lang-bash\"># Publish to the source registry\ncellc publish\n\n# Install from the source registry\ncellc install token@0.3.0\n\n# Verify package integrity against source and build artifacts\ncellc package verify\n\n# Verify deployment identity against 
chain facts\ncellc registry verify\n</code></pre>\n<p>The <code>resolve_from_registry</code> method in <code>src/package/mod.rs</code> currently returns<br>\nan error stating “registry dependency is not supported yet; use a local path<br>\ndependency.” The registry implementation replaces this stub with actual index<br>\nresolution, archive download, hash verification, and <code>Cell.toml</code> parsing.</p>\n<h2><a name=\"p-24148-deployment-registry-chain-indexed-22\" class=\"anchor\" href=\"#p-24148-deployment-registry-chain-indexed-22\" aria-label=\"Heading link\"></a>Deployment Registry (Chain-Indexed)</h2>\n<h3><a name=\"p-24148-design-choice-off-chain-first-chain-indexed-when-needed-23\" class=\"anchor\" href=\"#p-24148-design-choice-off-chain-first-chain-indexed-when-needed-23\" aria-label=\"Heading link\"></a>Design Choice: Off-Chain First, Chain-Indexed When Needed</h3>\n<p><strong>Phase 1</strong>: Pure off-chain <code>Deployed.toml</code> records, verified through<br>\n<code>Cell.lock</code> hash binding.</p>\n<p><strong>Phase 2</strong>: Optional on-chain type script index, driven by ecosystem demand.</p>\n<p>Rationale:</p>\n<ul>\n<li>CKB capacity costs make on-chain source-package storage unattractive.</li>\n<li>Deployment facts through <code>Deployed.toml</code> + <code>Cell.lock</code> hash binding are<br>\nsufficient for builder-level verification.</li>\n<li>An on-chain index script adds complexity and should be driven by actual<br>\necosystem demand, not speculative design.</li>\n</ul>\n<h3><a name=\"p-24148-builder-verification-flow-24\" class=\"anchor\" href=\"#p-24148-builder-verification-flow-24\" aria-label=\"Heading link\"></a>Builder Verification Flow</h3>\n<p>The builder must verify the full identity chain before constructing a<br>\nproduction transaction:</p>\n<pre><code class=\"lang-auto\">cellc build\n  → generates artifact, metadata, schema, abi, constraints\n  → writes Cell.lock [package.build]\n\ncellc deploy-plan\n  → reads Cell.lock 
[package.build]\n  → reads Cell.toml [deploy.ckb] intent\n  → produces deployment plan JSON\n\nAfter deployment transaction is confirmed on-chain\n  → generates Deployed.toml (chain facts)\n  → updates Cell.lock [deployment.ckb.&lt;network&gt;]\n\ncellc registry verify\n  → reads Cell.lock build hashes\n  → reads Deployed.toml deployment facts\n  → verifies:\n    1. source_hash matches between Cell.lock and Deployed.toml\n    2. artifact_hash matches between Cell.lock and Deployed.toml\n    3. data_hash = blake2b(artifact) against on-chain code cell\n    4. code_hash in Deployed.toml matches on-chain script\n    5. out_point is reachable as CellDep\n    6. schema_hash / abi_hash consistent with metadata\n    7. constraints_hash consistent with constraints report\n  → any mismatch → FAIL CLOSED\n</code></pre>\n<h3><a name=\"p-24148-action-builder-integration-25\" class=\"anchor\" href=\"#p-24148-action-builder-integration-25\" aria-label=\"Heading link\"></a>Action Builder Integration</h3>\n<p>The CellScript Action Builder (the v0.19 P0 target) consumes deployment<br>\nregistry records through the <code>registry-client</code> module:</p>\n<pre><code class=\"lang-auto\">┌──────────────┐     ┌──────────────────┐     ┌───────────────┐\n│ metadata-    │     │ registry-client  │     │ cell-resolver │\n│ loader       │────►│                  │────►│               │\n│              │     │ resolve package  │     │ select live   │\n│ load/validate│     │ resolve deploy   │     │ cells via     │\n│ metadata,    │     │ verify hashes    │     │ CCC/indexer   │\n│ ABI, recipe  │     │ against lockfile │     │               │\n└──────────────┘     └──────────────────┘     └───────────────┘\n</code></pre>\n<p>The <code>registry-client</code> module is responsible for:</p>\n<ol>\n<li>Resolving package records from the source registry index.</li>\n<li>Resolving deployment records from <code>Deployed.toml</code>.</li>\n<li>Verifying that resolved hashes match 
<code>Cell.lock</code>.</li>\n<li>Rejecting hash mismatches, missing ABI records, and incompatible metadata<br>\nschema versions.</li>\n</ol>\n<p>The Action Builder must not accept a package by name alone. It must verify that<br>\nthe resolved source package, build artifact, constraints report, and CKB<br>\ndeployment identity all match.</p>\n<h3><a name=\"p-24148-constraints_hash-generation-26\" class=\"anchor\" href=\"#p-24148-constraints_hash-generation-26\" aria-label=\"Heading link\"></a>constraints_hash Generation</h3>\n<p>The <code>constraints_hash</code> field is critical for deployment safety: it binds the<br>\ndeployment to the exact set of constraints the compiler generated, preventing<br>\na compromised constraints report from being substituted after deployment.</p>\n<p><strong>Phase 1 approach — same-version stability</strong>: <code>cellc build</code> generates<br>\n<code>constraints_hash</code> using the same method as the existing <code>metadata_hash</code><br>\ncomputation:</p>\n<pre><code class=\"lang-auto\">constraints_hash = ckb_blake2b256(serde_json::to_vec(&amp;constraints))\n</code></pre>\n<p>This matches the existing pattern in <code>src/cli/commands.rs</code> where<br>\n<code>metadata_hash</code> is computed as <code>ckb_blake2b256(serde_json::to_vec(&amp;result.metadata))</code>.</p>\n<p><strong>Determinism guarantees in Phase 1</strong>:</p>\n<ul>\n<li>Same compiler version + same source + same compile options → same<br>\n<code>ConstraintsMetadata</code> struct → same <code>serde_json::to_vec</code> output → same<br>\n<code>constraints_hash</code>. 
This is sufficient for Phase 1 because <code>constraints_hash</code><br>\nis only compared within the same compiler version.</li>\n<li>The <code>ConstraintsMetadata</code> struct fields are ordered by Rust struct field<br>\ndefinition order, which is stable within a compiler version.</li>\n<li>Vec fields (<code>entry_abi</code>, <code>runtime_errors</code>, <code>warnings</code>, <code>failures</code>) are<br>\nemitted in the compiler’s internal iteration order, which is deterministic<br>\nfor the same input within the same compiler version.</li>\n</ul>\n<p><strong>Known limitation</strong>: Cross-compiler-version <code>constraints_hash</code> comparison is<br>\nnot supported and should not be attempted. The <code>metadata_schema_version</code> field<br>\nin <code>CompileMetadata</code> serves as the version gate — if schema versions differ,<br>\nverification must reject the comparison, not attempt hash matching.</p>\n<p><strong>Phase 2 enhancement</strong>: For stronger cross-build determinism (e.g.,<br>\nverifying that two independent builds of the same source produce the same<br>\n<code>constraints_hash</code>), the <code>ConstraintsMetadata</code> struct should:</p>\n<ul>\n<li>Sort all <code>Vec</code> fields by a stable key (<code>entry_name</code>, <code>code</code>, etc.)</li>\n<li>Replace any <code>HashMap</code> with <code>BTreeMap</code> for key ordering</li>\n<li>Pin the <code>serde_json</code> serialization to compact output with sorted keys</li>\n</ul>\n<p>These changes are backward-compatible: they only affect the hash computation,<br>\nnot the schema. A Phase 2 migration can compute both the old and new hashes<br>\nto bridge the transition.</p>",
          "like_count": 0,
          "quote_count": 0
        }
      ]
    },
    {
      "topic_id": 9974,
      "title": "Fiber-pay: an ai-friendly CLI for fiber-network",
      "slug": "fiber-pay-an-ai-friendly-cli-for-fiber-network",
      "url": "https://talk.nervos.org/t/fiber-pay-an-ai-friendly-cli-for-fiber-network/9974",
      "created_at": "2026-02-21T04:15:25.343000+00:00",
      "last_posted_at": "2026-05-06T09:50:43.886000+00:00",
      "category_id": 49,
      "tags": [
        "CKB"
      ],
      "posters": [
        "Original Poster, Most Recent Poster",
        "Frequent Poster",
        "Frequent Poster"
      ],
      "recent_posts": [
        {
          "post_id": 24147,
          "post_number": 8,
          "topic_id": 9974,
          "topic_title": "Fiber-pay: an ai-friendly CLI for fiber-network",
          "topic_slug": "fiber-pay-an-ai-friendly-cli-for-fiber-network",
          "author": "RetricSu",
          "created_at": "2026-05-06T09:50:43.886000+00:00",
          "updated_at": "2026-05-06T09:50:43.886000+00:00",
          "reply_to_post_number": null,
          "url": "https://talk.nervos.org/t/fiber-pay-an-ai-friendly-cli-for-fiber-network/9974/8",
          "content_text": "Hello everyone, fiber-pay v0.2.5 is out, and this release brings the ConnectButton component to @fiber-pay/react — a one-line drop-in for connecting a Fiber browser node with explicit passkey or password strategy.\nWhat’s new\nConnectButton — A unified connect/disconnect button that works in standalone mode (<ConnectButton network=\"testnet\" strategy=\"passkey\" />) or pairs with an existing useFiberNode hook for shared state across your app.\nuseFiberNode hardening — Fixed StrictMode remount reliability so async state updates don’t get dropped during React effect re-runs.\nBrowser-wallet demo refresh — The demo app now uses a ConnectButton-first auth flow, showing how the component fits into a real wallet console.\nReact Quick Card demo — A new capability walkthrough showcasing theming, custom dropdown panels, runtime snapshot checks, and FiberPayQuickCard payment integration.\nWhat can be built\nIf you’re building a frontend that accepts Fiber payments, you can now add a connect button and payment card with a few imports — no need to wire up node lifecycle, passkey detection, or connection state yourself. The component handles the boilerplate and exposes hooks for customization when you need it.\nThe interesting calling-agent demo is built using this SDK. You can also check out the browser-wallet and react-quick-card demos in the repo, or grab the package:\npnpm add @fiber-pay/react\nLet me know if you try it out or have feedback on the API!",
          "content_html": "<p>Hello everyone, fiber-pay <a href=\"https://github.com/RetricSu/fiber-pay/releases/tag/v0.2.5\" rel=\"noopener nofollow ugc\">v0.2.5</a> is out, and this release brings the <code>ConnectButton</code> component to <code>@fiber-pay/react</code> — a one-line drop-in for connecting a Fiber browser node with explicit passkey or password strategy.</p>\n<h2><a name=\"p-24147-whats-new-1\" class=\"anchor\" href=\"#p-24147-whats-new-1\" aria-label=\"Heading link\"></a>What’s new</h2>\n<ul>\n<li><strong><code>ConnectButton</code></strong> — A unified connect/disconnect button that works in standalone mode (<code>&lt;ConnectButton network=\"testnet\" strategy=\"passkey\" /&gt;</code>) or pairs with an existing <code>useFiberNode</code> hook for shared state across your app.</li>\n<li><strong><code>useFiberNode</code> hardening</strong> — Fixed StrictMode remount reliability so async state updates don’t get dropped during React effect re-runs.</li>\n<li><strong>Browser-wallet demo refresh</strong> — The demo app now uses a <code>ConnectButton</code>-first auth flow, showing how the component fits into a real wallet console.</li>\n<li><strong>React Quick Card demo</strong> — A new capability walkthrough showcasing theming, custom dropdown panels, runtime snapshot checks, and <code>FiberPayQuickCard</code> payment integration.</li>\n</ul>\n<h2><a name=\"p-24147-what-can-be-built-2\" class=\"anchor\" href=\"#p-24147-what-can-be-built-2\" aria-label=\"Heading link\"></a>What can be built</h2>\n<p>If you’re building a frontend that accepts Fiber payments, you can now add a connect button and payment card with a few imports — no need to wire up node lifecycle, passkey detection, or connection state yourself. 
The component handles the boilerplate and exposes hooks for customization when you need it.</p>\n<p>The interesting <a href=\"https://talk.nervos.org/t/a-paid-ai-agent-calling-experiment-via-fiber/10229\">calling-agent demo</a> is built using this SDK. You can also check out the <a href=\"https://github.com/RetricSu/fiber-pay/tree/master/apps/browser-wallet\" rel=\"noopener nofollow ugc\">browser-wallet</a> and <a href=\"https://github.com/RetricSu/fiber-pay/tree/master/apps/react-quick-card\" rel=\"noopener nofollow ugc\">react-quick-card</a> demos in the repo, or grab the package:</p>\n<pre data-code-wrap=\"bash\"><code class=\"lang-bash\">pnpm add @fiber-pay/react\n</code></pre>\n<p>Let me know if you try it out or have feedback on the API!</p>",
          "like_count": 0,
          "quote_count": 0
        }
      ]
    },
    {
      "topic_id": 10231,
      "title": "Tiko Creator Commerce Expansion + Private Beta Validation",
      "slug": "tiko-creator-commerce-expansion-private-beta-validation",
      "url": "https://talk.nervos.org/t/tiko-creator-commerce-expansion-private-beta-validation/10231",
      "created_at": "2026-05-06T07:46:28.834000+00:00",
      "last_posted_at": "2026-05-06T07:46:28.918000+00:00",
      "category_id": 49,
      "tags": [
        "Spark-Program"
      ],
      "posters": [
        "Original Poster, Most Recent Poster"
      ],
      "recent_posts": [
        {
          "post_id": 24146,
          "post_number": 1,
          "topic_id": 10231,
          "topic_title": "Tiko Creator Commerce Expansion + Private Beta Validation",
          "topic_slug": "tiko-creator-commerce-expansion-private-beta-validation",
          "author": "DWSQUIRES",
          "created_at": "2026-05-06T07:46:28.918000+00:00",
          "updated_at": "2026-05-06T09:17:07.884000+00:00",
          "reply_to_post_number": null,
          "url": "https://talk.nervos.org/t/tiko-creator-commerce-expansion-private-beta-validation/10231/1",
          "content_text": "1. Project Name\nTiko Creator Commerce Expansion + Private Beta Validation\n2. Team / Individual Profile\nProject: Tiko\nType: Ticketing and creator commerce platform built on CKB\nLead: Indie Developer\nContact: Discord: @getigeti21\nI have already built the live ticketing foundation of Tiko, including event\nlisting, checkout, CKB testnet payment confirmation, ticket issuance, QR-based\naccess, operator check-in, and Spore-backed ownership. The current product is\nalready testable and demonstrates the core event commerce flow working end to\nend on CKB testnet.\nGitHub: Tiko-T/Tiko\nLive website: https://tiko-pied.vercel.app/\nCurrent test access\nEmail: admin@tiko.local\nPassword: 7V8ogwFc845anR1SqX2HLcZ3xojx0CRT\nHow to test the live product\nOpen the live website.\nSign in with the test credentials above.\nBrowse the available event listing.\nOpen the event details page and proceed to checkout.\nComplete the wallet-based payment flow on CKB testnet. (Use this to claim our internal Token Faucet.)\nSubmit the payment transaction hash.\nVerify ticket issuance, QR-based access, and post-purchase ownership flow.\nI am applying for Spark support to build the next stage of Tiko: expanding it\nfrom a working ticketing product into a broader creator commerce offering, and\nthen running a short private beta to validate that expanded product with real\nusers.\n3. Project Description\nProblem\nMost event and creator platforms are fragmented.\nA creator may use:\none tool for events\none tool for digital products\none tool for memberships\none tool for fan rewards\nanother tool for authenticity or collectible ownership\nThat creates a broken experience for both creators and fans. 
Ticketing is\ndisconnected from post-purchase engagement, digital commerce is disconnected\nfrom community access, and blockchain ownership often exists separately from\nusable product workflows.\nSolution\nTiko combines ticketing and creator commerce into one product.\nThe existing product already handles event ticketing. This Spark proposal\nfunds the next layer: turning Tiko into a platform where creators and\noperators can sell not only event access, but also digital drops, memberships,\ncollectibles, bundles, and post-purchase rewards, while using CKB and Spore\nonly where blockchain adds real product value.\nExisting Product State\nAlready working:\nevent listing and publishing\nticket checkout flow\nCKB testnet payment confirmation\norder tracking\nticket issuance\nQR access and operator check-in\nbuyer wallet linking\nSpore-backed ownership after purchase\nWhat This Grant Will Build\nThis grant will implement the next 7 creator commerce components:\nDigital drops\nCreator-led downloadable or unlockable products such as artwork, media,\ntemplates, and exclusive files.\nMemberships and passes\nFan club style access products, VIP passes, and community memberships.\nLimited editions and collectibles\nEditioned creator items and event collectibles with Spore-backed ownership\nand provenance.\nPost-purchase fan rewards\nBonus drops, proof-of-attendance rewards, and follow-up perks tied to\nticket purchases.\nGated access products\nContent or product pages accessible only to qualifying buyers, members, or\nholders.\nPhysical merchandise with authenticity\nProduct flows for premium merchandise paired with digital authenticity\nproof.\nBundles\nCombined offers such as ticket + collectible, membership + exclusive\ncontent, or event access + replay + premium item.\n4. 
Why This Fits Spark\nThis project is a strong Spark fit because:\nit extends a working prototype rather than starting from zero\nit can produce a meaningful prototype expansion within 1-2 months\nit combines technical development and early user validation\nit clearly aligns with Web5 thinking: familiar Web2 UX with Web3 ownership\nwhere useful\nit solves a practical ecosystem need rather than building speculative\ninfrastructure\nThis is not a pure experiment without delivery. It is a focused expansion of\nan already working product into a more complete creator-commerce use case on\nCKB.\n5. Expected Deliverables\nBy the end of the grant period, I will provide:\nProduct Deliverables\ncreator commerce product expansion inside Tiko\nsupport for the 7 creator commerce product types listed above\nupdated creator-facing listing and management flows\nupdated buyer-facing product discovery and purchase flows\nbeta-ready private test environment\nTechnical Deliverables\npublic open-source code repository updates\ndeployment instructions\nconcise product and technical documentation\nlive or recorded product demo\nbeta flow walkthrough\nUser Validation Deliverables\nstructured private beta with early users\nuser feedback summary\nproduct learning report\nproposed post-beta iteration roadmap\n6. Estimated Completion Time\n1.2 months\nPlanned execution window: 5 weeks\n7. 
Clear To-Do List\nWeek 1\nfinalize creator commerce scope in product and UI terms\nadd product models and flows for digital drops, memberships, collectibles,\nand bundles\nprepare creator-facing management structure\nWeek 2\nimplement buyer-facing flows for creator commerce items\nconnect fulfillment logic for non-ticket products\nintegrate gated access and post-purchase reward hooks\nWeek 3\nimplement authenticity-oriented physical merch support at prototype level\nfinish collectible and reward issuance flow\nconnect creator commerce items into buyer library and ownership views\nWeek 4\ninternal QA and end-to-end testing on CKB testnet\nprepare private beta environment\nonboard early testers and selected creators\nWeek 5\nrun private beta\ncollect structured feedback\nsummarize user insights, friction points, and adoption signals\npublish completion report, demo, repo updates, and learnings\n8. User Testing Plan\nThis project includes a real early-validation phase, not just feature\ndelivery.\nPrivate Beta Goals\nvalidate whether creators understand the combined ticketing + creator\ncommerce offering\nvalidate whether buyers understand and adopt non-ticket creator products\ntest which of the 7 creator commerce components feel most valuable in\npractice\nidentify where blockchain-backed ownership improves user trust or retention\nPlanned Beta Scope\n3-5 early creators / operators\n20-30 users\nstructured feedback collection through direct testing and product\nobservation\ntrack usage across listing, purchase, fulfillment, and post-purchase access\nWhat I Want to Learn\nwhich creator commerce components are most commercially attractive\nwhat creators want to sell alongside tickets\nwhether buyers understand and value ownership-backed creator products\nwhere the current product flow is too complex\nwhat should become the next priority after Spark\n9. Required Funding\nRequested amount: $1,850\nSuggested funding composition\n50% USDI\n50% CKB equivalent\n10. 
Funding Breakdown\nDevelopment and product implementation — $1,050\ncreator commerce flows and UI expansion\nfulfillment logic for new product types\ngated access and bundle behavior\ncollectible/reward flow integration\ndeployment and environment iteration\nBeta testing and user validation — $500\nUser recruitment support\nbeta coordination\ncreator onboarding for testers\nfeedback collection and synthesis\ndemo preparation and reporting\nInfrastructure and operational costs — $300\ntest deployment operations\nstorage and service costs\nenvironment maintenance during beta\nsmall contingency for product iteration during testing\n11. Relevance to the CKB Ecosystem\nThis project is directly relevant to the CKB ecosystem because it uses CKB and\nSpore in a product-shaped way that real users can understand.\nPractical ecosystem relevance\ndemonstrates a real end-user ticketing and commerce flow on CKB\nexpands CKB usage beyond payments into ownership, authenticity, and rewards\nshows how Web2-style UX and Web3 ownership can coexist\ncreates a practical reference product for creator commerce on Nervos\nWhy CKB matters here\nCKB is not being used as decoration. It provides:\nownership-backed product fulfillment\nSpore-based provenance for creator items\na foundation for collectible, authenticity, and reward flows that make sense\nfor creators and fans\nThis is aligned with the ecosystem goal of building practical Web5 products\nthat connect real user workflows with blockchain-backed value.\n12. 
Alignment With Web5 Philosophy\nThis proposal strongly aligns with the Web5 direction described by Spark:\norganic combination of Web2 and Web3\nBuyers browse and purchase through a familiar web flow, while blockchain is\nused where it adds ownership and trust.\nuser-centric and human-oriented\nThe focus is not on protocol complexity but on creator products, audience\nrelationships, and useful buying experiences.\nsmall but real\nThe scope is intentionally sized for a 5-week cycle and tied to concrete\nprototype and beta outcomes.\nprototype plus feedback loop\nThe proposal includes both implementation and real user validation.\n13. Open Source Commitment\nYes.\nAll work delivered under this Spark proposal will remain open source and will\nbe published through the existing repository.\n14. Completion Outputs\nAt completion I will provide:\nupdated open-source repository\nproduct demo\nshort implementation summary\nprivate beta report\nkey user feedback findings\ncomparison of planned scope vs actual completed scope\nrecommendations for next-stage iteration\n15. What Success Looks Like\nThis Spark project will be successful if, by the end of the cycle:\nTiko supports creator-commerce flows beyond tickets\nthe 7 proposed components exist as connected prototype capabilities\ncreators can understand the broader product offering\nbuyers can successfully use at least part of the expanded commerce flow\nearly beta feedback reveals which components have strongest market pull\nthe CKB ecosystem gains a concrete creator-commerce reference product\n16. Closing Summary\nTiko already proves that ticketing on CKB can work in a real, user-facing way.\nThis grant will help prove the next step:\nthat ticketing can become the foundation for a broader creator commerce\nplatform, where creators sell access, digital drops, memberships,\ncollectibles, bundles, and authenticity-backed products from one system.",
          "content_html": "<h2><a name=\"p-24146-h-1-project-name-1\" class=\"anchor\" href=\"#p-24146-h-1-project-name-1\" aria-label=\"Heading link\"></a>1. Project Name</h2>\n<p>Tiko Creator Commerce Expansion + Private Beta Validation</p>\n<h2><a name=\"p-24146-h-2-team-individual-profile-2\" class=\"anchor\" href=\"#p-24146-h-2-team-individual-profile-2\" aria-label=\"Heading link\"></a>2. Team / Individual Profile</h2>\n<p>Project: Tiko<br>\nType: Ticketing and creator commerce platform built on CKB<br>\nLead:  Indie Developer<br>\nContact: Discord- <span class=\"mention\">@getigeti21</span></p>\n<p>I have already built the live ticketing foundation of Tiko, including event<br>\nlisting, checkout, CKB testnet payment confirmation, ticket issuance, QR-based<br>\naccess, operator check-in, and Spore-backed ownership. The current product is<br>\nalready testable and demonstrates the core event commerce flow working end to<br>\nend on CKB testnet.</p>\n<p>GitHub: <a href=\"https://github.com/Tiko-T/Tiko\" class=\"inline-onebox\" rel=\"noopener nofollow ugc\">GitHub - Tiko-T/Tiko · GitHub</a><br>\nLive website: <a href=\"https://tiko-pied.vercel.app/\" rel=\"noopener nofollow ugc\">https://tiko-pied.vercel.app/</a></p>\n<p>Current test access</p>\n<ul>\n<li>Email: admin@tiko.local</li>\n<li>Password: 7V8ogwFc845anR1SqX2HLcZ3xojx0CRT</li>\n</ul>\n<p>How to test the live product</p>\n<ol>\n<li>Open the live website.</li>\n<li>Sign in with the test credentials above.</li>\n<li>Browse the available event listing.</li>\n<li>Open the event details page and proceed to checkout.</li>\n<li>Complete the wallet-based payment flow on CKB testnet. 
(<a href=\"https://tiko-pied.vercel.app/faucet\" rel=\"noopener nofollow ugc\">Use this to claim our internal Token Faucet</a> )</li>\n<li>Submit the payment transaction hash.</li>\n<li>Verify ticket issuance, QR-based access, and post-purchase ownership flow.</li>\n</ol>\n<p>I am applying for Spark support to build the next stage of Tiko: expanding it<br>\nfrom a working ticketing product into a broader creator commerce offering, and<br>\nthen running a short private beta to validate that expanded product with real<br>\nusers.</p>\n<h2><a name=\"p-24146-h-3-project-description-3\" class=\"anchor\" href=\"#p-24146-h-3-project-description-3\" aria-label=\"Heading link\"></a>3. Project Description</h2>\n<h3><a name=\"p-24146-problem-4\" class=\"anchor\" href=\"#p-24146-problem-4\" aria-label=\"Heading link\"></a>Problem</h3>\n<p>Most event and creator platforms are fragmented.</p>\n<p>A creator may use:</p>\n<ul>\n<li>one tool for events</li>\n<li>one tool for digital products</li>\n<li>one tool for memberships</li>\n<li>one tool for fan rewards</li>\n<li>another tool for authenticity or collectible ownership</li>\n</ul>\n<p>That creates a broken experience for both creators and fans. Ticketing is<br>\ndisconnected from post-purchase engagement, digital commerce is disconnected<br>\nfrom community access, and blockchain ownership often exists separately from<br>\nusable product workflows.</p>\n<h3><a name=\"p-24146-solution-5\" class=\"anchor\" href=\"#p-24146-solution-5\" aria-label=\"Heading link\"></a>Solution</h3>\n<p>Tiko combines ticketing and creator commerce into one product.</p>\n<p>The existing product already handles event ticketing. 
This Spark proposal<br>\nfunds the next layer: turning Tiko into a platform where creators and<br>\noperators can sell not only event access, but also digital drops, memberships,<br>\ncollectibles, bundles, and post-purchase rewards, while using CKB and Spore<br>\nonly where blockchain adds real product value.</p>\n<h3><a name=\"p-24146-existing-product-state-6\" class=\"anchor\" href=\"#p-24146-existing-product-state-6\" aria-label=\"Heading link\"></a>Existing Product State</h3>\n<p>Already working:</p>\n<ul>\n<li>event listing and publishing</li>\n<li>ticket checkout flow</li>\n<li>CKB testnet payment confirmation</li>\n<li>order tracking</li>\n<li>ticket issuance</li>\n<li>QR access and operator check-in</li>\n<li>buyer wallet linking</li>\n<li>Spore-backed ownership after purchase</li>\n</ul>\n<h3><a name=\"p-24146-what-this-grant-will-build-7\" class=\"anchor\" href=\"#p-24146-what-this-grant-will-build-7\" aria-label=\"Heading link\"></a>What This Grant Will Build</h3>\n<p>This grant will implement the next 7 creator commerce components:</p>\n<ol>\n<li>Digital drops<br>\nCreator-led downloadable or unlockable products such as artwork, media,<br>\ntemplates, and exclusive files.</li>\n<li>Memberships and passes<br>\nFan club style access products, VIP passes, and community memberships.</li>\n<li>Limited editions and collectibles<br>\nEditioned creator items and event collectibles with Spore-backed ownership<br>\nand provenance.</li>\n<li>Post-purchase fan rewards<br>\nBonus drops, proof-of-attendance rewards, and follow-up perks tied to<br>\nticket purchases.</li>\n<li>Gated access products<br>\nContent or product pages accessible only to qualifying buyers, members, or<br>\nholders.</li>\n<li>Physical merchandise with authenticity<br>\nProduct flows for premium merchandise paired with digital authenticity<br>\nproof.</li>\n<li>Bundles<br>\nCombined offers such as ticket + collectible, membership + exclusive<br>\ncontent, or event access + replay + premium 
item.</li>\n</ol>\n<h2><a name=\"p-24146-h-4-why-this-fits-spark-8\" class=\"anchor\" href=\"#p-24146-h-4-why-this-fits-spark-8\" aria-label=\"Heading link\"></a>4. Why This Fits Spark</h2>\n<p>This project is a strong Spark fit because:</p>\n<ul>\n<li>it extends a working prototype rather than starting from zero</li>\n<li>it can produce a meaningful prototype expansion within 1-2 months</li>\n<li>it combines technical development and early user validation</li>\n<li>it clearly aligns with Web5 thinking: familiar Web2 UX with Web3 ownership<br>\nwhere useful</li>\n<li>it solves a practical ecosystem need rather than building speculative<br>\ninfrastructure</li>\n</ul>\n<p>This is not a pure experiment without delivery. It is a focused expansion of<br>\nan already working product into a more complete creator-commerce use case on<br>\nCKB.</p>\n<h2><a name=\"p-24146-h-5-expected-deliverables-9\" class=\"anchor\" href=\"#p-24146-h-5-expected-deliverables-9\" aria-label=\"Heading link\"></a>5. 
Expected Deliverables</h2>\n<p>By the end of the grant period, I will provide:</p>\n<h3><a name=\"p-24146-product-deliverables-10\" class=\"anchor\" href=\"#p-24146-product-deliverables-10\" aria-label=\"Heading link\"></a>Product Deliverables</h3>\n<ul>\n<li>creator commerce product expansion inside Tiko</li>\n<li>support for the 7 creator commerce product types listed above</li>\n<li>updated creator-facing listing and management flows</li>\n<li>updated buyer-facing product discovery and purchase flows</li>\n<li>beta-ready private test environment</li>\n</ul>\n<h3><a name=\"p-24146-technical-deliverables-11\" class=\"anchor\" href=\"#p-24146-technical-deliverables-11\" aria-label=\"Heading link\"></a>Technical Deliverables</h3>\n<ul>\n<li>public open-source code repository updates</li>\n<li>deployment instructions</li>\n<li>concise product and technical documentation</li>\n<li>live or recorded product demo</li>\n<li>beta flow walkthrough</li>\n</ul>\n<h3><a name=\"p-24146-user-validation-deliverables-12\" class=\"anchor\" href=\"#p-24146-user-validation-deliverables-12\" aria-label=\"Heading link\"></a>User Validation Deliverables</h3>\n<ul>\n<li>structured private beta with early users</li>\n<li>user feedback summary</li>\n<li>product learning report</li>\n<li>proposed post-beta iteration roadmap</li>\n</ul>\n<h2><a name=\"p-24146-h-6-estimated-completion-time-13\" class=\"anchor\" href=\"#p-24146-h-6-estimated-completion-time-13\" aria-label=\"Heading link\"></a>6. Estimated Completion Time</h2>\n<p>1.2 months<br>\nPlanned execution window: 5 weeks</p>\n<h2><a name=\"p-24146-h-7-clear-to-do-list-14\" class=\"anchor\" href=\"#p-24146-h-7-clear-to-do-list-14\" aria-label=\"Heading link\"></a>7. 
Clear To-Do List</h2>\n<h3><a name=\"p-24146-week-1-15\" class=\"anchor\" href=\"#p-24146-week-1-15\" aria-label=\"Heading link\"></a>Week 1</h3>\n<ul>\n<li>finalize creator commerce scope in product and UI terms</li>\n<li>add product models and flows for digital drops, memberships, collectibles,<br>\nand bundles</li>\n<li>prepare creator-facing management structure</li>\n</ul>\n<h3><a name=\"p-24146-week-2-16\" class=\"anchor\" href=\"#p-24146-week-2-16\" aria-label=\"Heading link\"></a>Week 2</h3>\n<ul>\n<li>implement buyer-facing flows for creator commerce items</li>\n<li>connect fulfillment logic for non-ticket products</li>\n<li>integrate gated access and post-purchase reward hooks</li>\n</ul>\n<h3><a name=\"p-24146-week-3-17\" class=\"anchor\" href=\"#p-24146-week-3-17\" aria-label=\"Heading link\"></a>Week 3</h3>\n<ul>\n<li>implement authenticity-oriented physical merch support at prototype level</li>\n<li>finish collectible and reward issuance flow</li>\n<li>connect creator commerce items into buyer library and ownership views</li>\n</ul>\n<h3><a name=\"p-24146-week-4-18\" class=\"anchor\" href=\"#p-24146-week-4-18\" aria-label=\"Heading link\"></a>Week 4</h3>\n<ul>\n<li>internal QA and end-to-end testing on CKB testnet</li>\n<li>prepare private beta environment</li>\n<li>onboard early testers and selected creators</li>\n</ul>\n<h3><a name=\"p-24146-week-5-19\" class=\"anchor\" href=\"#p-24146-week-5-19\" aria-label=\"Heading link\"></a>Week 5</h3>\n<ul>\n<li>run private beta</li>\n<li>collect structured feedback</li>\n<li>summarize user insights, friction points, and adoption signals</li>\n<li>publish completion report, demo, repo updates, and learnings</li>\n</ul>\n<h2><a name=\"p-24146-h-8-user-testing-plan-20\" class=\"anchor\" href=\"#p-24146-h-8-user-testing-plan-20\" aria-label=\"Heading link\"></a>8. 
User Testing Plan</h2>\n<p>This project includes a real early-validation phase, not just feature<br>\ndelivery.</p>\n<h3><a name=\"p-24146-private-beta-goals-21\" class=\"anchor\" href=\"#p-24146-private-beta-goals-21\" aria-label=\"Heading link\"></a>Private Beta Goals</h3>\n<ul>\n<li>validate whether creators understand the combined ticketing + creator<br>\ncommerce offering</li>\n<li>validate whether buyers understand and adopt non-ticket creator products</li>\n<li>test which of the 7 creator commerce components feel most valuable in<br>\npractice</li>\n<li>identify where blockchain-backed ownership improves user trust or retention</li>\n</ul>\n<h3><a name=\"p-24146-planned-beta-scope-22\" class=\"anchor\" href=\"#p-24146-planned-beta-scope-22\" aria-label=\"Heading link\"></a>Planned Beta Scope</h3>\n<ul>\n<li>3-5 early creators / operators</li>\n<li>20-30  users</li>\n<li>structured feedback collection through direct testing and product<br>\nobservation</li>\n<li>track usage across listing, purchase, fulfillment, and post-purchase access</li>\n</ul>\n<h3><a name=\"p-24146-what-i-want-to-learn-23\" class=\"anchor\" href=\"#p-24146-what-i-want-to-learn-23\" aria-label=\"Heading link\"></a>What I Want to Learn</h3>\n<ul>\n<li>which creator commerce components are most commercially attractive</li>\n<li>what creators want to sell alongside tickets</li>\n<li>whether buyers understand and value ownership-backed creator products</li>\n<li>where the current product flow is too complex</li>\n<li>what should become the next priority after Spark</li>\n</ul>\n<h2><a name=\"p-24146-h-9-required-funding-24\" class=\"anchor\" href=\"#p-24146-h-9-required-funding-24\" aria-label=\"Heading link\"></a>9. 
Required Funding</h2>\n<p>Requested amount: $1,850</p>\n<h3><a name=\"p-24146-suggested-funding-composition-25\" class=\"anchor\" href=\"#p-24146-suggested-funding-composition-25\" aria-label=\"Heading link\"></a>Suggested funding composition</h3>\n<ul>\n<li>50% USDI</li>\n<li>50% CKB equivalent</li>\n</ul>\n<h2><a name=\"p-24146-h-10-funding-breakdown-26\" class=\"anchor\" href=\"#p-24146-h-10-funding-breakdown-26\" aria-label=\"Heading link\"></a>10. Funding Breakdown</h2>\n<h3><a name=\"p-24146-development-and-product-implementation-1050-27\" class=\"anchor\" href=\"#p-24146-development-and-product-implementation-1050-27\" aria-label=\"Heading link\"></a>Development and product implementation — $1,050</h3>\n<ul>\n<li>creator commerce flows and UI expansion</li>\n<li>fulfillment logic for new product types</li>\n<li>gated access and bundle behavior</li>\n<li>collectible/reward flow integration</li>\n<li>deployment and environment iteration</li>\n</ul>\n<h3><a name=\"p-24146-beta-testing-and-user-validation-500-28\" class=\"anchor\" href=\"#p-24146-beta-testing-and-user-validation-500-28\" aria-label=\"Heading link\"></a>Beta testing and user validation — $500</h3>\n<ul>\n<li>User recruitment support</li>\n<li>beta coordination</li>\n<li>creator onboarding for testers</li>\n<li>feedback collection and synthesis</li>\n<li>demo preparation and reporting</li>\n</ul>\n<h3><a name=\"p-24146-infrastructure-and-operational-costs-300-29\" class=\"anchor\" href=\"#p-24146-infrastructure-and-operational-costs-300-29\" aria-label=\"Heading link\"></a>Infrastructure and operational costs — $300</h3>\n<ul>\n<li>test deployment operations</li>\n<li>storage and service costs</li>\n<li>environment maintenance during beta</li>\n<li>small contingency for product iteration during testing</li>\n</ul>\n<h2><a name=\"p-24146-h-11-relevance-to-the-ckb-ecosystem-30\" class=\"anchor\" href=\"#p-24146-h-11-relevance-to-the-ckb-ecosystem-30\" aria-label=\"Heading link\"></a>11. 
Relevance to the CKB Ecosystem</h2>\n<p>This project is directly relevant to the CKB ecosystem because it uses CKB and<br>\nSpore in a product-shaped way that real users can understand.</p>\n<h3><a name=\"p-24146-practical-ecosystem-relevance-31\" class=\"anchor\" href=\"#p-24146-practical-ecosystem-relevance-31\" aria-label=\"Heading link\"></a>Practical ecosystem relevance</h3>\n<ul>\n<li>demonstrates a real end-user ticketing and commerce flow on CKB</li>\n<li>expands CKB usage beyond payments into ownership, authenticity, and rewards</li>\n<li>shows how Web2-style UX and Web3 ownership can coexist</li>\n<li>creates a practical reference product for creator commerce on Nervos</li>\n</ul>\n<h3><a name=\"p-24146-why-ckb-matters-here-32\" class=\"anchor\" href=\"#p-24146-why-ckb-matters-here-32\" aria-label=\"Heading link\"></a>Why CKB matters here</h3>\n<p>CKB is not being used as decoration. It provides:</p>\n<ul>\n<li>ownership-backed product fulfillment</li>\n<li>Spore-based provenance for creator items</li>\n<li>a foundation for collectible, authenticity, and reward flows that make sense<br>\nfor creators and fans</li>\n</ul>\n<p>This is aligned with the ecosystem goal of building practical Web5 products<br>\nthat connect real user workflows with blockchain-backed value.</p>\n<h2><a name=\"p-24146-h-12-alignment-with-web5-philosophy-33\" class=\"anchor\" href=\"#p-24146-h-12-alignment-with-web5-philosophy-33\" aria-label=\"Heading link\"></a>12. 
Alignment With Web5 Philosophy</h2>\n<p>This proposal strongly aligns with the Web5 direction described by Spark:</p>\n<ul>\n<li>organic combination of Web2 and Web3<br>\nBuyers browse and purchase through a familiar web flow, while blockchain is<br>\nused where it adds ownership and trust.</li>\n<li>user-centric and human-oriented<br>\nThe focus is not on protocol complexity but on creator products, audience<br>\nrelationships, and useful buying experiences.</li>\n<li>small but real<br>\nThe scope is intentionally sized for a 5-week cycle and tied to concrete<br>\nprototype and beta outcomes.</li>\n<li>prototype plus feedback loop<br>\nThe proposal includes both implementation and real user validation.</li>\n</ul>\n<h2><a name=\"p-24146-h-13-open-source-commitment-34\" class=\"anchor\" href=\"#p-24146-h-13-open-source-commitment-34\" aria-label=\"Heading link\"></a>13. Open Source Commitment</h2>\n<p>Yes.<br>\nAll work delivered under this Spark proposal will remain open source and will<br>\nbe published through the existing repository.</p>\n<h2><a name=\"p-24146-h-14-completion-outputs-35\" class=\"anchor\" href=\"#p-24146-h-14-completion-outputs-35\" aria-label=\"Heading link\"></a>14. Completion Outputs</h2>\n<p>At completion I will provide:</p>\n<ul>\n<li>updated open-source repository</li>\n<li>product demo</li>\n<li>short implementation summary</li>\n<li>private beta report</li>\n<li>key user feedback findings</li>\n<li>comparison of planned scope vs actual completed scope</li>\n<li>recommendations for next-stage iteration</li>\n</ul>\n<h2><a name=\"p-24146-h-15-what-success-looks-like-36\" class=\"anchor\" href=\"#p-24146-h-15-what-success-looks-like-36\" aria-label=\"Heading link\"></a>15. 
What Success Looks Like</h2>\n<p>This Spark project will be successful if, by the end of the cycle:</p>\n<ul>\n<li>Tiko supports creator-commerce flows beyond tickets</li>\n<li>the 7 proposed components exist as connected prototype capabilities</li>\n<li>creators can understand the broader product offering</li>\n<li>buyers can successfully use at least part of the expanded commerce flow</li>\n<li>early beta feedback reveals which components have strongest market pull</li>\n<li>the CKB ecosystem gains a concrete creator-commerce reference product</li>\n</ul>\n<h2><a name=\"p-24146-h-16-closing-summary-37\" class=\"anchor\" href=\"#p-24146-h-16-closing-summary-37\" aria-label=\"Heading link\"></a>16. Closing Summary</h2>\n<p>Tiko already proves that ticketing on CKB can work in a real, user-facing way.</p>\n<p>This grant will help prove the next step:<br>\nthat ticketing can become the foundation for a broader creator commerce<br>\nplatform, where creators sell access, digital drops, memberships,<br>\ncollectibles, bundles, and authenticity-backed products from one system.</p>",
          "like_count": 0,
          "quote_count": 0
        }
      ]
    },
    {
      "topic_id": 10229,
      "title": "A Paid AI Agent Calling Experiment via Fiber",
      "slug": "a-paid-ai-agent-calling-experiment-via-fiber",
      "url": "https://talk.nervos.org/t/a-paid-ai-agent-calling-experiment-via-fiber/10229",
      "created_at": "2026-05-06T04:13:14.026000+00:00",
      "last_posted_at": "2026-05-06T07:31:14.664000+00:00",
      "category_id": 64,
      "tags": [],
      "posters": [
        "Original Poster",
        "Frequent Poster",
        "Most Recent Poster"
      ],
      "recent_posts": [
        {
          "post_id": 24140,
          "post_number": 1,
          "topic_id": 10229,
          "topic_title": "A Paid AI Agent Calling Experiment via Fiber",
          "topic_slug": "a-paid-ai-agent-calling-experiment-via-fiber",
          "author": "RetricSu",
          "created_at": "2026-05-06T04:13:14.078000+00:00",
          "updated_at": "2026-05-06T04:18:39.289000+00:00",
          "reply_to_post_number": null,
          "url": "https://talk.nervos.org/t/a-paid-ai-agent-calling-experiment-via-fiber/10229/1",
          "content_text": "This is a direction I've been quite fascinated by recently: using fiber to pay, from the browser, to call agents like opencode/claude code/codex running on someone else's computer to carry out tasks. It is somewhat like the cloud agents offered by AI companies, where users can invoke them directly on cloud servers without deploying the agents locally; for non-technical users the experience is much simpler.\nimage2864×1238 245 KB\nWhat makes our attempt different is that the backend is an open service: rather than a centralized company providing the cloud agent, it is distributed, and anyone can offer such a service. The user calling the agent and the user providing the agent settle instantly through Fiber. My current attempt is quite simple and charges per call, a fixed fee for each invocation. A more refined version would charge based on input and output token usage; that is not a technical problem, just a matter of adaptation.\nOne of my goals here is to see whether this can solve some of the problems with LLM API relays. I know there are many API relay services today that provide APIs to users who cannot access Claude / codex. The API model often depends on the quality of the relay, and the upstream LLM provider can fairly easily detect API abuse or resale through various checks. Calling the agent directly (e.g. opencode / claude code) should be easier to share and harder for the platform to detect, because in theory I am still using claude through the claude code CLI; it is just that other users are using it through my computer. And since the service is decentralized and provided autonomously by many users, it should be harder to ban? One last benefit is that agent capabilities keep evolving: users only need to upgrade the agent version to keep up with the latest capabilities, without our platform changing anything, which makes the model more flexible.\nBack to fiber. I built a fairly complete implementation of this experiment from start to finish, aiming for a polished user experience rather than a mere feasibility check. Per-call payment is implemented mostly in the l402 style; with fiber-pay this part was easy to get running. My service does not even need a dedicated backend program; it just runs a few commands with the fiber-pay CLI. Most of the real work is in wrapping the agent runtime layer. For example, a container like boxlite is needed to isolate an environment so the agent can run safely on the provider's own computer, ensuring the service provider is not attacked by malicious users; support for calling different agents is also needed, where the acpx library provided by openclaw saves a lot of effort; finally, the user providing the agent must not leak sensitive information (such as the agent's configured API key), which I mitigated with some environment-variable substitution inside and outside the container. Overall, most of the work in such a service falls on the business layer, and the Fiber part is actually the easiest piece to solve, which is probably a reasonable sign.\nI think if we start from small, concrete, real problems and try solutions through Fiber, we may find more promising application directions. Fiber's wasm capability can also deliver a very good user experience, especially combined with passkeys, which could be a nice differentiator in the competition among channel networks. I am sharing the app from this experiment here: https://calling-agent-kappa.vercel.app/ Feel free to try it. The agent currently served is opencode running on a small machine at my home, the LLM is the kimi model, and payment still runs on the fiber testnet at 0.1 CKB per call. If you think the idea is worth exploring, we can deploy it to mainnet. I really hope some users will be willing to run agent services like claude code; at least as a consumer, I still cannot fully use an agent like claude code and could really use such a service. I believe that with appropriate pricing it would benefit both providers and users.",
          "content_html": "<p>This is a direction I've been quite fascinated by recently: using fiber to pay, from the browser, to call agents like opencode/claude code/codex running on someone else's computer to carry out tasks. It is somewhat like the cloud agents offered by AI companies, where users can invoke them directly on cloud servers without deploying the agents locally; for non-technical users the experience is much simpler.</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://talk.nervos.org/uploads/default/original/2X/4/4b2cb2ae998d43fe46656d49e820834b910384e5.png\" data-download-href=\"https://talk.nervos.org/uploads/default/4b2cb2ae998d43fe46656d49e820834b910384e5\" title=\"image\"><img src=\"https://talk.nervos.org/uploads/default/optimized/2X/4/4b2cb2ae998d43fe46656d49e820834b910384e5_2_690x298.png\" alt=\"image\" data-base62-sha1=\"aJ1zFhRy0Jhoxv8kwqYNhKk7sih\" width=\"690\" height=\"298\" srcset=\"https://talk.nervos.org/uploads/default/optimized/2X/4/4b2cb2ae998d43fe46656d49e820834b910384e5_2_690x298.png, https://talk.nervos.org/uploads/default/optimized/2X/4/4b2cb2ae998d43fe46656d49e820834b910384e5_2_1035x447.png 1.5x, https://talk.nervos.org/uploads/default/optimized/2X/4/4b2cb2ae998d43fe46656d49e820834b910384e5_2_1380x596.png 2x\" data-dominant-color=\"17181A\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">2864×1238 245 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<p>What makes our attempt different is that the backend is an open service: rather than a centralized company providing the cloud agent, it is distributed, and anyone can offer such a service. The user calling the agent and the user providing the agent settle instantly through Fiber. My current attempt is quite simple and charges per call, a fixed fee for each invocation. A more refined version would charge based on input and output token usage; that is not a technical problem, just a matter of adaptation.</p>\n<p>One of my goals here is to see whether this can solve some of the problems with LLM API relays. I know there are many API relay services today that provide APIs to users who cannot access Claude / codex. The API model often depends on the quality of the relay, and the upstream LLM provider can fairly easily detect API abuse or resale through various checks. Calling the agent directly (e.g. opencode / claude code) should be easier to share and harder for the platform to detect, because in theory I am still using claude through the claude code CLI; it is just that other users are using it through my computer. And since the service is decentralized and provided autonomously by many users, it should be harder to ban? One last benefit is that agent capabilities keep evolving: users only need to upgrade the agent version to keep up with the latest capabilities, without our platform changing anything, which makes the model more flexible.</p>\n<p>Back to fiber. I built a fairly complete implementation of this experiment from start to finish, aiming for a polished user experience rather than a mere feasibility check. Per-call payment is implemented mostly in the l402 style; with <a href=\"https://talk.nervos.org/t/fiber-pay-an-ai-friendly-cli-for-fiber-network/9974/6\">fiber-pay</a> this part was easy to get running. My service does not even need a dedicated backend program; it just runs a few commands with the fiber-pay CLI. Most of the real work is in wrapping the agent runtime layer. For example, a container like boxlite is needed to isolate an environment so the agent can run safely on the provider's own computer, ensuring the service provider is not attacked by malicious users; support for calling different agents is also needed, where the acpx library provided by openclaw saves a lot of effort; finally, the user providing the agent must not leak sensitive information (such as the agent's configured API key), which I mitigated with some environment-variable substitution inside and outside the container. Overall, most of the work in such a service falls on the business layer, and the Fiber part is actually the easiest piece to solve, which is probably a reasonable sign.</p>\n<p>I think if we start from small, concrete, real problems and try solutions through Fiber, we may find more promising application directions. Fiber's wasm capability can also deliver a very good user experience, especially combined with passkeys, which could be a nice differentiator in the competition among channel networks. I am sharing the app from this experiment here: <a href=\"https://calling-agent-kappa.vercel.app/\" rel=\"noopener nofollow ugc\">https://calling-agent-kappa.vercel.app/</a> Feel free to try it. The agent currently served is opencode running on a small machine at my home, the LLM is the kimi model, and payment still runs on the fiber testnet at 0.1 CKB per call. If you think the idea is worth exploring, we can deploy it to mainnet. I really hope some users will be willing to run agent services like claude code; at least as a consumer, I still cannot fully use an agent like claude code and could really use such a service. I believe that with appropriate pricing it would benefit both providers and users.</p>",
          "like_count": 0,
          "quote_count": 0
        },
        {
          "post_id": 24141,
          "post_number": 2,
          "topic_id": 10229,
          "topic_title": "A Paid AI Agent Calling Experiment via Fiber",
          "topic_slug": "a-paid-ai-agent-calling-experiment-via-fiber",
          "author": "janx",
          "created_at": "2026-05-06T04:45:37.393000+00:00",
          "updated_at": "2026-05-06T04:45:37.393000+00:00",
          "reply_to_post_number": null,
          "url": "https://talk.nervos.org/t/a-paid-ai-agent-calling-experiment-via-fiber/10229/2",
          "content_text": "Love the UX presented by “running a fiber node in browser” + integrated passkey wallet - it’s rare for something to be both smooth and trustless.",
          "content_html": "<p>Love the UX presented by “running a fiber node in browser” + integrated passkey wallet - it’s rare for something to be both smooth and trustless.</p>",
          "like_count": 0,
          "quote_count": 0
        },
        {
          "post_id": 24142,
          "post_number": 3,
          "topic_id": 10229,
          "topic_title": "A Paid AI Agent Calling Experiment via Fiber",
          "topic_slug": "a-paid-ai-agent-calling-experiment-via-fiber",
          "author": "ArthurZhang",
          "created_at": "2026-05-06T05:57:10.576000+00:00",
          "updated_at": "2026-05-06T05:57:10.576000+00:00",
          "reply_to_post_number": null,
          "url": "https://talk.nervos.org/t/a-paid-ai-agent-calling-experiment-via-fiber/10229/3",
          "content_text": "This is a really solid end-to-end implementation. A friendly thought on the architecture, though: instead of wrapping each agent manually, why not look at how popular platforms handle MCP + Skills? If the goal is to build an open network, migrating to an MCP + Skills architecture might greatly improve extensibility.",
          "content_html": "<p>This is a really solid end-to-end implementation. A friendly thought on the architecture, though: instead of wrapping each agent manually, why not look at how popular platforms handle MCP + Skills? If the goal is to build an open network, migrating to an MCP + Skills architecture might greatly improve extensibility.</p>",
          "like_count": 0,
          "quote_count": 0
        },
        {
          "post_id": 24143,
          "post_number": 4,
          "topic_id": 10229,
          "topic_title": "A Paid AI Agent Calling Experiment via Fiber",
          "topic_slug": "a-paid-ai-agent-calling-experiment-via-fiber",
          "author": "RetricSu",
          "created_at": "2026-05-06T06:24:22.903000+00:00",
          "updated_at": "2026-05-06T06:24:22.903000+00:00",
          "reply_to_post_number": 3,
          "url": "https://talk.nervos.org/t/a-paid-ai-agent-calling-experiment-via-fiber/10229/4",
          "content_text": "If I understand correctly, mcp/skills are protocols that serve the agent: the goal is to let the agent (the brain) use various tools (the hands and feet), and they are usually not used to call an agent directly?\nThe main protocols for calling agents are A2A (proposed by Google), ACP (Agent Communication Protocol) (proposed by IBM, now part of A2A), and ACP (Agent Client Protocol) (proposed by Zed). We are not manually wrapping each agent; we use acpx, a headless CLI tool that implements a client for the last of these ACP protocols in CLI form.\nOne more note: CLI/MCP/Skills are all interesting forms, and each plays its part. I chose the CLI here because it is easier to adapt for driving agents, given that most agents themselves ship as CLI clients.",
          "content_html": "<p>If I understand correctly, mcp/skills are protocols that serve the agent: the goal is to let the agent (the brain) use various tools (the hands and feet), and they are usually not used to call an agent directly?</p>\n<p>The main protocols for calling agents are <a href=\"https://a2a-protocol.org/latest/\" rel=\"noopener nofollow ugc\">A2A</a> (proposed by Google), <a href=\"https://agentcommunicationprotocol.dev/introduction/welcome\" rel=\"noopener nofollow ugc\">ACP (Agent Communication Protocol)</a> (proposed by IBM, now part of A2A), and <a href=\"https://agentclientprotocol.com/get-started/introduction\" rel=\"noopener nofollow ugc\">ACP (Agent Client Protocol)</a> (proposed by Zed). We are not manually wrapping each agent; we use <a href=\"https://github.com/openclaw/acpx\" rel=\"noopener nofollow ugc\">acpx</a>, a headless CLI tool that implements a client for the last of these ACP protocols in CLI form.</p>\n<p>One more note: CLI/MCP/Skills are all interesting forms, and each plays its part. I chose the CLI here because it is easier to adapt for driving agents, given that most agents themselves ship as CLI clients.</p>",
          "like_count": 0,
          "quote_count": 0
        },
        {
          "post_id": 24145,
          "post_number": 5,
          "topic_id": 10229,
          "topic_title": "A Paid AI Agent Calling Experiment via Fiber",
          "topic_slug": "a-paid-ai-agent-calling-experiment-via-fiber",
          "author": "ArthurZhang",
          "created_at": "2026-05-06T07:31:14.664000+00:00",
          "updated_at": "2026-05-06T07:31:14.664000+00:00",
          "reply_to_post_number": 4,
          "url": "https://talk.nervos.org/t/a-paid-ai-agent-calling-experiment-via-fiber/10229/5",
          "content_text": "Thanks for the clarification, you’re right. I just realised that one could theoretically wrap an Agent as an MCP Server to expose it indirectly, but that’s a workaround, and semantically awkward, since an MCP Server is a tool called by an Agent, not an Agent called by a user.",
          "content_html": "<p>Thanks for the clarification, you’re right. I just realised that one could theoretically wrap an Agent as an MCP Server to expose it indirectly, but that’s a workaround, and semantically awkward, since an MCP Server is a tool called by an Agent, not an Agent called by a user.</p>",
          "like_count": 0,
          "quote_count": 0
        }
      ]
    },
    {
      "topic_id": 10228,
      "title": "DAO Structures and the future of Ai agents (aspirational piece)",
      "slug": "dao-structures-and-the-future-of-ai-agents-aspirational-piece",
      "url": "https://talk.nervos.org/t/dao-structures-and-the-future-of-ai-agents-aspirational-piece/10228",
      "created_at": "2026-05-05T20:28:13.169000+00:00",
      "last_posted_at": "2026-05-06T03:05:00.511000+00:00",
      "category_id": 31,
      "tags": [],
      "posters": [
        "Original Poster",
        "Frequent Poster",
        "Frequent Poster",
        "Most Recent Poster"
      ],
      "recent_posts": [
        {
          "post_id": 24135,
          "post_number": 1,
          "topic_id": 10228,
          "topic_title": "DAO Structures and the future of Ai agents (aspirational piece)",
          "topic_slug": "dao-structures-and-the-future-of-ai-agents-aspirational-piece",
          "author": "Eyeam",
          "created_at": "2026-05-05T20:28:13.233000+00:00",
          "updated_at": "2026-05-05T22:25:06.625000+00:00",
          "reply_to_post_number": null,
          "url": "https://talk.nervos.org/t/dao-structures-and-the-future-of-ai-agents-aspirational-piece/10228/1",
          "content_text": "How can Nervos CKB Utilise AI agents?\nMy Personal Theory on how a CKB DAO should/could function in the future.\nThis is a disclaimer. This is not going to be implemented in any way in any current DAO design.\nAI agents are something I’ve been predicting for the last few years now for DAO usage. I’ve even enjoyed a debate or two with Jan Xie, the architect of Nervos CKB, during a jaunt to Thailand about how AI agents could/should be used for DAOs in the future. I’ve spoken of them to various people within the ecosystem at different times. They are needed (in my opinion) to smooth out the whole process of regulating human interaction, support social cohesion, and be conducive to efficient governance and financial settings regarding proposals within a CKB community DAO. They can also reduce the party-political aspects that are often toxic in human competition.\nAfter experiencing and seeing many problems in many current DAOs (including the CKB community DAO), I’ve decided to explain why my interest has only heightened in the journey for DAOs to be regulated by AI agents. This is all THEORETICAL rambling, but let’s call them CK Agents for now. (Common Knowledge Agents)\nCK agents have the chance to significantly enhance the maintenance and operation of the CKB community DAO, whilst preserving mutual community governance as the ultimate authority.\nThat ultimate authority is the core value we must strive towards for CKB. 
The ability to trust parties seems like a basic thing to take for granted, but a PoW blockchain is all about removing trust assumptions in general (or certainly lowering them), and personally I’m all for that process!\nThe CK agents will be tireless, programmable extensions of the community, handling repetitive, data-heavy and time-sensitive tasks that humans find burdensome, whilst preventing the seizing of control by the CK agents themselves.\nThis keeps the DAO truly “autonomous” (it puts the “A” in DAO) while making it more efficient, scalable, and resilient to outside influence or bad actors. Until this system is applied, I refuse to call it a DAO. It should just be called a ‘DO’ currently.\nIn practice, a CK agent is an autonomous software entity (powered by LLMs, reinforcement learning, or multi-CK agent systems) that interacts directly with the CKB blockchain, smart contracts, oracles, and off-chain data.\nThey can be deployed, updated, or revoked through standard CKB DAO governance votes, to ensure alignment with the Nervos community’s needs and will.\nComputations might mainly be run off-chain (for cost/speed) with cryptographic proofs (like zero-knowledge proofs), verified on-chain for transparency for all to acknowledge.\nBelow are a few paradigms of how these agents could effect change for the Nervos CKB community.\nStreamlining Governance\nProposal summarisation and analysis: CK agents could read long proposals, forum discussions and on-chain data. They could generate neutral summaries, simulate financial impacts on the treasury, including the market efficiency of payout periods, flag unseen risks, and could even respond with counter-proposals/refinements based on the CKB community guidelines.\nThe problem with smaller community DAOs, in my experience, is voter apathy due to the technical limitations of the voter. 
Making complex decisions easier to understand would help: CKB holders are often thrown into a minefield of complexity, especially when timelines and technical details are lost on them. (A current common occurrence.) Not to be confused with a limited voting community. (A whole separate argument not solved by CK Agents.)\nIntelligent voting delegation: CKB holders could potentially create personalised “CK voting agents” encoded with their own preferences (e.g. risk tolerance, priorities via simple questionnaires). The CK agent then votes automatically on proposals that match those rules. Our token holders could also retain or override rights at any time. This removes the need for constant human involvement. If community members cannot create these agents, a proposal could be made for a CK Agent creator dApp that allows them to create their own, very easily, at the push of a button. (Perhaps this is a game-changing dApp that could be utilised across the whole DAO space and not just CKB’s.)\nOutcome simulation and recommendations: Before a community vote, CK agents could potentially run a scenario model (e.g., “What if this treasury allocation passes?”) using historical CKB DAO data and market signals. They can provide interpretable reports to inform voters so they do not remain ill-informed.\nCKB DAO voters can act on the agent’s core logic, boundaries, or model updates, which will of course be updated by voting as the DAO matures within its legal framework. We will come to legal frameworks later.\nThe CKB Treasury Fund\nTreasury optimisation: CK agents could potentially handle any foundation or project payrolls, CKB grants, or disburse private proposal payouts once approved. This reduces the need for an internal human CKB committee; it may reduce human friction in decision making, reduce the need to pay any committee members, and be more sustainable long term. 
In fact it could reduce the bus factor for Nervos CKB or any project looking to become long term, or even help a developer who is lacking in manpower to run a small team (What is the bus factor?).\nRoutine execution: CK Agents could trigger smart contract actions like CKB/iCKB token distributions, SPORE mints, or cross-chain bridges such as the Rosen Bridge. In the event CKB becomes DeFi-focused, they could potentially perform arbitrage, staking, or lending autonomously without human trust. CKB is modelled for decentralisation, so it’s the perfect use case.\nAdministrative tasks: They could schedule community or dApp team meetings, manage CKB contributor bounties, or generate reports, ensuring humans need not create high-level strategies. This reduces the risk of bad decision making and bases it on projected game theory.\nSecurity and Compliance\nReal-time oversight: CK Agents could scan transactions for anomalies, hacks, or compliance issues (e.g. unusual outflows/Txs), and automatically pause contracts or alert the community 24 hours a day through on-chain notifications, via whatever social channel or device you prefer.\nRisk mitigation: They could reconcile accounts, enforce rules (e.g. multi-sig requirements), provide predictive insights on potential vulnerabilities within the CKB DAO structure, and act as oracles to alert community members when they are venturing into bad-faith actions, but not censorship. Transparency is still and will always be needed.\nDispute resolution support: In some setups, CK agents could analyse evidence neutrally for community arbitration. 
A very valuable process to remove emotion from decision making, but not entirely, because ultimately DAOs often create controversy, as we currently see on Talk.Nervos.org from time to time with proposals being ill thought out or controversial in their spending.\nCommunity Engagement\nDiscussion facilitation: They could summarise social media, Discord and Talk.nervos forum threads, identify consensus, or moderate spam while surfacing diverse opinions/voices and bringing them to the forefront of the debate.\nIdea generation: CK Agents could brainstorm CKB proposals based on community sentiment or external trends, or bring forth exciting new ideas drawn from the community’s historical inputs, then refined with human scepticism. Most of the time LLMs don’t invent things, but they’re great at aggregating information to help generate more informed ideas. A CK agent will write your proposal for you with the right prompts, bridging the ‘basic member’ to the experts with ease.\nSwarm intelligence DAOs: AI agents from different DAOs could act as “liaisons,” enabling automated collaboration and joint ventures without manual coordination. This could create a network of interconnected DAOs, either interoperable with other blockchains or multiple DAOs within the CKB ecosphere.\nExperimental Use Cases\nCKB developers could create tokenised autonomous agents that operate as “CKB community DAO members” with their own Neuron wallet or with decision rights, bounded by governance. An interesting, game-changing prospect for Nervos CKB.\nThere could be multi-CK agent systems for complex tasks, where they work in unison to perform tasks that slot together into an overall process, much like a ‘Jigsaw’. Modular, but running in sync.\nThese CK agents should be open-sourced, and in the era of quantum computing, they will no doubt have to adapt and evolve to be quantum resistant to remain future-proof. 
A general ethos of CKB’s flexibility and tenacious personality.\nThey should be run on decentralised computer networks to avoid single points of failure. Ideally they could be upgraded through governance proposals voted on by the CKB community, whilst having on-chain logs of actions and verifiable outputs (e.g. using ZKPs), and work only within the boundaries of the community’s decision making.\nRegular reviews and audits would have to be applied so they align with ongoing updates to the Nervos CKB ethos.\nThe Legal Aspect\nIn theory DAOs could pose legal risks and problems. Whilst CK agents cannot remove all the grey areas in an ever-evolving regulatory framework, they could meaningfully mitigate risks, reduce exposure to grey areas, and strengthen legal defence when deployed as community-governed tools.\nThey would act as regulatory technology layers, automating monitoring, analysis, and enforcement, while keeping the CKB community in ultimate control. Mandatory.\nCK agents’ actions would be legally attributed to the community DAO, not the CK agents themselves. 
CK agents would have no legal personhood, but who knows whether that will remain the case in the future?\nAdaptive Compliance\nCK Agents could continuously scan global regulations, court rulings, and real-time legislative updates.\nThey could flag potential impacts on the CKB DAO’s operations such as the treasury, CKB tokenomic emissions, cross-border Txs and vote adjustments.\nThe Nervos community could create a DAO proposal and vote for CKB developers to code “Compliance Monitoring Agents” that can track changes across global jurisdictions, reducing potential violations and reducing legal complications or friction.\nWorkflows, data privacy, and a country’s SEC rules might also be encoded into the CK agents, with significant legal benefits: actions would be auditable, and it could be proven in a court of law that members have acted in good faith within legal environments.\nPre-Execution\nBefore any on-chain action (proposal execution, treasury transfer, or governance vote), CK agents could analyse it against known regulatory tests (e.g. the Howey test for securities, the travel rule for virtual assets, or risk classifications).\nThey could simulate outcomes (“Would this look like an unregistered offering?”) and flag grey-area risks with transparent reason logs (provable via on-chain or ZKP-verified outputs).\nCK agents could contain tools to check for compliance red flags, and prevent malicious or risky actions from reaching the point of execution.\nThe legal posture would shift from a reactive “battlefield” of defence to preventive guardrails. 
There could be human/community reviews required for high-stakes decisions, thus avoiding legal problems.\nTransparency and Audit Trails\nEvery AI decision, reasoning step, and data source could be logged immutably on-chain or with verifiable off-chain storage, generating tamper-proof compliance reports, impact assessments, and decision rationale.\nIn a regulatory investigation, the community DAO could demonstrate that actions followed encoded rules aligned with community consensus, and not arbitrary human discretion.\nThis could strengthen legal defensibility in grey areas by proving transparency and process adherence, which is harder for regulators to challenge than opaque, human-led decisions.\nAutonomy and Human ‘Watchers’\nCK agents might operate within strict, DAO-voted bounds (e.g. large sums having kill switches and revocation rights), combining the automation of the CK agents with human oversight.\nMulti-CK agent systems (with separate agents for risk assessment and execution, amongst others) reduce single-point failures. AI is treated as a tool under agency-law principles: deployers remain accountable, and this could provide legal shields for any mis-programming.\nLegal Wrappers and Insurance\nCK agents might use DAO legal-entity jargon, a type of ‘legal wrapper’. The agent could operate on behalf of the CKB community DAO, which would be its own registered entity.\nCK agents might even help maintain compliance documentation like ‘fiduciary duty tracking’.\nCK agents could be encoded with dedicated insurance for AI actions, and monitoring to trigger claims or pauses in a court of law.\nLimitations Create New Grey Areas\nBut let’s not kid ourselves: CK agents (and indeed any AI agent) cannot give legal advice. Their outputs would only be recommendations that are not legally binding, due to the earlier point of CK agents not having personhood status. With new technology comes a lack of legislation. 
New risks could emerge, such as ‘agent hallucinations’ (examples here), bias, or over-automation, which could create fresh liabilities. Human oversight is in fact non-negotiable to ensure these blips are correctable at code level.\nHuman rights may even come into play and have legal ramifications, which might differ between countries and vary in legislation. AI agents are today transforming compliance in traditional finance and could be adapted to DAOs already. The CK agent will have a lot to navigate to keep up with the growing trends in legalese, and a lawyer may even have to be consulted before it is programmed and installed.\nA Final Food For Thought.\nCK agents won’t erase the legal battlefield, but they may carry the potential to equip the community DAO with better reconnaissance, automated defences, and verifiable records to navigate our community’s decisions more safely. They could even maximise the potential for the legal stance to be ‘proactive’ as opposed to ‘reactive’ in today’s modern world.\nBut we must remember: AI assists; it will not replace the legality of the CKB community DAO.\nSo let’s keep CKB decentralised and community-governed, because the key is to design agents to be the tools, and not the rulers, of the Nervos DAO!!\nCK agents might just supercharge the whole DAO process and make DAOs as autonomous as they can be, without replacing our governance and decision making entirely. The future is for the community to make.\nCK or AI agents are very much just a pipe dream currently in regard to DAO structures, but in the future I have no doubt they will have a part to play once they are more efficient.\nFeel free to comment or take any ideas from this to debate.\nThe question is, how far away do you think we are from these things being effective?\nDo you trust AI even if it’s kept in human check?\nDoes it take away the fun and the social element?",
          "content_html": "<p><strong>How can Nervos CKB Utilise AI agents?</strong></p>\n<p><strong>My Personal Theory on how a CKB DAO should/could function in the future.</strong></p>\n<p><strong>This is a disclaimer. This is not going to be implemented in anyway to any current DAO design.</strong></p>\n<p>AI agents are something I’ve been predicting for the last few years now for DAO usage.  I’ve even enjoyed a debate or two with Jan Xie, the architect of Nervos CKB during a jaunt to Thailand about how AI agents could/should be used for DAO’s in the future. I’ve spoken of them to various people within the ecosystem at different times. They are needed, (in my opinion) to smooth out the whole process of regulating human interaction, social cohesion, and be conducive to efficient governance and financial settings regarding proposals within a CKB community DAO. They can also add less party political aspects that often are toxic in human competition.</p>\n<p>After experiencing and seeing many problems in many current DAO’s (including the CKB community DAO), I’ve decided to explain why my interest has only heightened in the journey for DAO’s to be regulated by AI agents. This is all THEORETICAL rambling, but let’s call them <strong>CK</strong> Agents for now. (<strong>C</strong>ommon <strong>K</strong>nowledge Agents)</p>\n<p><strong>CK</strong> agents have the chance to significantly enhance the maintenance and operation of the CKB community DAO, whilst <strong>preserving</strong> <strong>mutual community governance</strong> as the <strong>ultimate authority</strong>.</p>\n<p>That ultimate authority is the core value we must strive towards for CKB. 
The ability to trust parties seems like a basic thing to take for granted, but a POW blockchain is all about removing trust assumptions in general (or certainly lowering them) and personally I’m all for that process!</p>\n<p>The <strong>CK</strong> agents will be tireless, programmable extensions of the community, handling repetitive, data-heavy and time-sensitive tasks that humans find burdensome, whilst preventing the seizing of control by the <strong>CK</strong> agents themselves.</p>\n<p>This keeps the DAO truly “autonomous” (it puts the “A” in DAO) while making it more efficient, scalable, and resilient to outside influence or bad actors. Until this system is applied, I refuse to call it a DAO. It should just be called a ‘<strong>DO</strong>’ currently.</p>\n<p>In practice, a <strong>CK</strong> agent is an autonomous software entity (powered by LLMs, reinforcement learning, or multi-<strong>CK</strong> agent systems) that interact directly with the CKB blockchain, smart contracts, oracles, and off-chain data.</p>\n<p>They can be deployed, updated, or revoked through standard CKB DAO governance votes, to ensure alignment with the Nervos communities needs and will.</p>\n<p>Computations <strong>might</strong> mainly be run off-chain (for cost/speed) with cryptographic proofs (like zero-knowledge proofs), verified on-chain for transparency for all to acknowledge.</p>\n<p>Below are a few paradigms of how these agents <strong>could</strong> affect change for the Nervos CKB community.</p>\n<p><strong>Streamlining Governance</strong></p>\n<ul>\n<li><strong>Proposal summarisation and analysis</strong>: <strong>CK</strong> agents <strong>could</strong> read long proposals, forum discussions and on-chain data. 
They <strong>could</strong> generate neutral summaries, simulate financial impacts of the treasury, <strong>could</strong> include market efficiency of payout periods, flag unseen risks, and <strong>could</strong> even respond with counter-proposals/refinements based on the CKB community guidelines.</li>\n</ul>\n<p>The problem with smaller community DAO’s in my experience is voter apathy due to technical limitations of the voter. By making complex decisions easier to understand, CKB holders are often thrown into a minefield of complexity for the community, especially when timelines and technical details are lost upon them. (A current common occurrence) Not to be confused with a limited voting community. (A whole separate argument not solved by CK Agents)</p>\n<ul>\n<li>\n<p><strong>Intelligent voting delegation</strong>: CKB holders <strong>could</strong> potentially create personalised “<strong>CK</strong> voting agents” encoded with their own preferences (e.g. risk tolerance, priorities via simple questionnaires). The <strong>CK</strong> agent then votes automatically on proposals that match those rules. Our token holders could also retain or override rights at any time. This restricts the forcing of constant human involvement. If the community members cannot create these agents, a proposal <strong>could</strong> be made for a <strong>CK</strong> Agent creator dApp that allows them to create their own, very easily at the push of a button. (perhaps this is a game changing dApp that could be utilised across the whole DAO space and not just CKB’s).</p>\n</li>\n<li>\n<p><strong>Outcome simulation and recommendations</strong>: Before a community vote, <strong>CK</strong> agents could potentially run a scenario model (e.g., “What if this treasury allocation passes?”) using historical CKB DAO data and market signals. 
They can provide interpretable reports to inform voters so they do not remain ill informed.</p>\n</li>\n</ul>\n<p>The CKB DAO voters can act on the agent’s core logic, boundaries, or model updates which of course will update by voting as the DAO matures in the legal framework. We will come to legal frameworks later.</p>\n<p><strong>The CKB Treasury Fund</strong></p>\n<ul>\n<li>\n<p><strong>Treasury optimisation</strong>: <strong>CK</strong> agents <strong>could</strong> potentially handle any foundation or projects payrolls, CKB grants, or disperse private proposal pay outs once approved. This reduces the need for a CKB human internal committee, it may reduce the need for human friction in decision making, reduce the need to pay any committee members, and become more sustainable long term. In fact it <strong>could</strong> reduce the bus factor for Nervos CKB or any projects looking to become long term, or even help out running a small team for a developer who is lacking in man power <a href=\"https://en.wikipedia.org/wiki/Bus_factor\" rel=\"noopener nofollow ugc\">(What is the bus factor?)</a>.</p>\n</li>\n<li>\n<p><strong>Routine execution</strong>: <strong>CK</strong> Agents <strong>could</strong> trigger smart contract actions like CKB/iCKB token distributions, SPORE mints, or cross-chain bridges such as the Rosen Bridge. In the event CKB becomes DeFi-focused, they could potentially perform arbitrage, staking, or lending autonomously without human trust. CKB is modelled for decentralisation so it’s the perfect use case.</p>\n</li>\n<li>\n<p><strong>Administrative tasks</strong>: They <strong>could</strong> schedule community or dApp team meetings, manage CKB contributor bounties, or generate reports ensuring humans need not create high-level strategies. 
This reduces the risk of bad decision making and bases it on projected game theories.</p>\n</li>\n</ul>\n<p><strong>Security, and Compliance</strong></p>\n<ul>\n<li>\n<p><strong>Real-time oversight</strong>: <strong>CK</strong> Agents <strong>could</strong> scan transactions for anomalies, hacks, or compliance issues (e.g. unusual outflows/Tx’s), automatically pause contracts or alert the community through on-chain notifications 24 hours a day through whatever social or device you prefer.</p>\n</li>\n<li>\n<p><strong>Risk mitigation</strong>: They <strong>could</strong> reconcile accounts, enforce rules ( e.g. multi-sig requirements), and provide predictive insights on potential vulnerabilities within the CKB DAO structure, and act as oracles to alert community members when they are venturing into bad faith actions, but not censorship. Transparency is still and will always be needed.</p>\n</li>\n<li>\n<p><strong>Dispute resolution support</strong>: In some setups, <strong>CK</strong> agents <strong>could</strong> analyse evidence neutrally for the community arbitration. A very valuable process to remove emotion from decision making, but not entirely, because ultimately DAO’s often create controversy as we currently see on <a href=\"http://talk.nervos.org\">Talk.Nervos.org</a> from time to time with proposals being ill thought out or controversial to spending.</p>\n</li>\n</ul>\n<p><strong>Community Engagement</strong></p>\n<ul>\n<li>\n<p><strong>Discussion facilitation</strong>: They <strong>could</strong> summarise social media, Discord and Talk.nervos forum threads, identify consensus, or moderate spam while surfacing diverse opinions/voices and bringing them to the forefront of the debate.</p>\n</li>\n<li>\n<p><strong>Idea generation</strong>: <strong>CK</strong> Agents could brainstorm CKB proposals based on community sentiment, external trends, or bring forth exciting new ideas made from communities historical inputs, then be refined with human scepticism. 
Most of the time LLM’s don’t invent things, but they’re great at aggregating information to help generate more informed ideas. A <strong>CK</strong> agent will write your proposal for you with the right prompts, allowing you to bridge the ‘basic member’ to the experts. (with ease)</p>\n</li>\n<li>\n<p><strong>Swarm intelligence DAO’s</strong>: AI agents from different DAOs <strong>could</strong> act as “liaisons,” enabling automated collaboration and joint ventures, without manual coordination. This could create a network of interconnected DAOs, either interoperable with other blockchains or multiple DAOs within the CKB ecosphere.</p>\n</li>\n</ul>\n<p><strong>Experimental Usecases</strong></p>\n<ul>\n<li>\n<p>CKB developers <strong>could</strong> create tokenised autonomous agents that operate as “CKB community DAO members” with their own Neuron wallet or  with decision rights, bounded by governance. An interesting game changing prospect for Nervos CKB.</p>\n</li>\n<li>\n<p>There <strong>could</strong> be Multi-<strong>CK</strong> agent systems for complex tasks where they work in unison together to perform tasks that all slot together like an overall process, much like a ‘Jigsaw’. Modular, but running in sync.</p>\n</li>\n</ul>\n<p>These <strong>CK</strong> agents <strong>should</strong> be open sourced, and in the era of quantum computing, they will no doubt have to adapt and evolve to be quantum resistant to remain future proof. A general ethos of CKB’s flexibility and tenacious personality.</p>\n<p>They <strong>should</strong> be run on decentralised computer networks to avoid single points of failure and ideally they <strong>could</strong> be upgraded through governance proposals and voted on by the CKB community, whilst having on-chain logs of actions, verifiable outputs (e.g. 
using ZKP proofs) and work only within the boundaries of the communities decision making.</p>\n<p>Regular reviews and audits <strong>would</strong> have to be applied so they align with ongoing updates to the Nervos CKB ethos.</p>\n<p><strong>The Legal aspect</strong></p>\n<p>In theory DAO’s <strong>could</strong> pose legal risks and problems. Whilst <strong>CK</strong> agents cannot remove all the grey areas in an ever evolving regulatory framework, they <strong>could</strong> meaningfully mitigate risks, reduce exposure to grey areas, and strengthen legal defence when deployed as community-governed tools.</p>\n<p>They <strong>would</strong> act as regulatory technology layers, automation monitoring, analysis, and enforcement, while keeping the CKB community in ultimate control. <strong>Mandatory.</strong></p>\n<p><strong>CK</strong> agents actions <strong>would</strong> be legally attributed to the community DAO, not the <strong>CK</strong> agents themselves.  <strong>CK</strong> agents <strong>would</strong> have no legal personhood, but who knows in the future whether that will remain the case?</p>\n<p><strong>Adaptive Compliance</strong></p>\n<p><strong>CK</strong> Agents <strong>could</strong> continuously scan global regulations, court rulings, and real time legislative updates.</p>\n<p>They <strong>could</strong> flag potential impacts on the CKB DAO’s operations such as the treasury, CKB tokenomic emissions, cross-border Tx’s and vote adjustments*.*</p>\n<p>The Nervos community <strong>could</strong> create a DAO proposal and vote for CKB developers to code  <a href=\"https://www.oneadvanced.com/resources/oneadvanced-launches-uks-first-ai-agents-for-legal-compliance/\" rel=\"noopener nofollow ugc\">“Compliance Monitoring Agents”</a> that can track changes across global jurisdictions, reducing <strong>potential violations</strong> and reducing legal complications or friction.</p>\n<p>Workflows, data privacy, and a country’s SEC rules, 
<strong>might</strong> also be encoded into the <strong>CK</strong> agents, with significant legal benefits: actions would be auditable, and it could be proven in a court of law that members have acted in good faith within legal environments.</p>\n<p><strong>Pre-Execution</strong></p>\n<ul>\n<li>\n<p>Before any on-chain action (proposal execution, treasury transfer, or governance vote), <strong>CK</strong> agents <strong>could</strong> analyse it against known regulatory tests (e.g. the Howey test for securities, the travel rule for virtual assets, or risk classifications).</p>\n</li>\n<li>\n<p>They <strong>could</strong> simulate outcomes (“Would this look like an unregistered offering?”) and flag grey-area risks with transparent reason logs (provable via on-chain or ZKP-verified outputs).</p>\n</li>\n<li>\n<p><strong>CK</strong> agents <strong>could</strong> contain tools to check for compliance red flags, and prevent malicious or risky actions from reaching the point of execution.</p>\n</li>\n<li>\n<p>The legal posture would shift from a reactive “battlefield” of defence to preventive guardrails. 
There <strong>could</strong> be human/community reviews required for high-stakes decisions, thus avoiding legal problems</p>\n</li>\n</ul>\n<p><strong>Transparency and Audit Trails</strong></p>\n<p>Every AI decision, reasoning, and data source <strong>could be logged</strong> immutably on-chain or with verifiable off-chain storage, generating tamper-proof compliance reports, impact assessments, and decision rationale.</p>\n<ul>\n<li>\n<p>In a regulatory investigation, the Community DAO <strong>could</strong> demonstrate that actions followed encoded rules aligned with community consensus, and not arbitrary human discretion.</p>\n</li>\n<li>\n<p>A legal benefit <strong>could</strong> strengthen defensibility in grey areas by proving transparency and process adherence, which is harder for regulators to challenge than opaque, human-led, decisions.</p>\n</li>\n</ul>\n<p><strong>Autonomy and Human ‘Watchers’</strong></p>\n<ul>\n<li>\n<p><strong>CK</strong> agents might operate within strict, DAO-voted bounds (e.g. large sums having kill switches and revocation rights), combining the automation of the CK agents with human oversight.</p>\n</li>\n<li>\n<p>Multi-<strong>CK</strong> agent systems (risk assessments, amongst others, for execution, reduce single-point failures.) AI is treated as a tool under agency law principles, deployers remain accountable and they <strong>could</strong> provide legal shields for any mis-programming.</p>\n</li>\n</ul>\n<p><strong>Legal Wrappers and Insurance</strong></p>\n<ul>\n<li>\n<p><strong>CK</strong> agents <strong>might</strong> use DAO legal entity jargon. A type of ‘Legal wrapper’. 
The agent <strong>could</strong> operate on behalf of the CKB community DAO, which would be its own registered entity.</p>\n</li>\n<li>\n<p><strong>CK</strong> agents <strong>might</strong> even help maintain compliance documentation like ‘<a href=\"https://uk.practicallaw.thomsonreuters.com/1-107-5744?transitionType=Default&amp;contextData=(sc.Default)\" rel=\"noopener nofollow ugc\">fiduciary duty tracking</a>’.</p>\n</li>\n<li>\n<p>CK agents <strong>could</strong> be encoded with dedicated insurance for AI actions, and monitoring to trigger claims or pauses defensible in a court of law.</p>\n</li>\n</ul>\n<p><strong>Limitations Create <em>New Grey Areas</em></strong></p>\n<p>But let’s not kid ourselves: <strong>CK agents</strong> (and indeed any AI agent) cannot give legal advice. Their outputs would only be recommendations, not legally binding, because, as noted earlier, CK agents lack personhood status. With new technology comes a lack of legislation. <strong>New</strong> risks <strong>could</strong> emerge, such as ‘agent hallucinations’ (<a href=\"https://www.evidentlyai.com/blog/ai-hallucinations-examples\" rel=\"noopener nofollow ugc\">examples here</a>), bias, or over-automation, which could create fresh liabilities. Human oversight is in fact non-negotiable to ensure these blips are correctable at code level.</p>\n<p>Human rights may even come into play and have legal ramifications, which <strong>might</strong> differ from country to country and across legislation. AI agents are today transforming compliance in traditional finance and <strong>could</strong> already be adapted to DAOs. 
The <strong>CK</strong> agent will have a lot to navigate to keep pace with evolving legal language, and may even require consultation with a lawyer before being programmed and installed.</p>\n<p><strong>A Final Food For Thought.</strong></p>\n<p><strong>CK</strong> agents won’t erase the legal battlefield, but they <strong>may</strong> carry the potential to equip the community DAO with better reconnaissance, automated defences, and verifiable records to navigate our community’s decisions more safely. They <strong>could</strong> even shift the legal posture from ‘reactive’ to ‘proactive’ in today’s modern world.</p>\n<p>But we must remember: AI assists; it will not replace the legal authority of the CKB community DAO.</p>\n<p>So let’s keep CKB decentralised and community-governed, because the key is to design agents to be the <strong>tools,</strong> and not the <strong>rulers</strong> of the Nervos DAO!!</p>\n<p><strong>CK</strong> agents might just supercharge the whole DAO process and make it as autonomous as it can be, without replacing our governance and decision-making entirely. The future is for the community to make.</p>\n<p>CK or AI agents are very much just a pipe dream currently with regard to DAO structures, but I have no doubt they will have a part to play in the future once they become more efficient.</p>\n<p>Feel free to comment or take any ideas from this to debate.</p>\n<p>The question is, how far do you think we are from these things being effective?</p>\n<p>Do you trust AI even if it is kept in check by humans?</p>\n<p>Does it take the fun away, along with the social element?</p>",
          "like_count": 0,
          "quote_count": 0
        },
        {
          "post_id": 24136,
          "post_number": 2,
          "topic_id": 10228,
          "topic_title": "DAO Structures and the future of Ai agents (aspirational piece)",
          "topic_slug": "dao-structures-and-the-future-of-ai-agents-aspirational-piece",
          "author": "terrytai",
          "created_at": "2026-05-05T22:29:06.392000+00:00",
          "updated_at": "2026-05-05T22:29:06.392000+00:00",
          "reply_to_post_number": null,
          "url": "https://talk.nervos.org/t/dao-structures-and-the-future-of-ai-agents-aspirational-piece/10228/2",
          "content_text": "Nervos Talk Renewal & Governance is a temporary special category dedicated to discussing the forum's recent redesign and restructuring, so I have moved your post to its current category for now. Thanks for posting.",
          "content_html": "<p>Nervos Talk Renewal &amp; Governance is a temporary special category dedicated to discussing the forum's recent redesign and restructuring, so I have moved your post to its current category for now. Thanks for posting.</p>",
          "like_count": 0,
          "quote_count": 0
        },
        {
          "post_id": 24137,
          "post_number": 3,
          "topic_id": 10228,
          "topic_title": "DAO Structures and the future of Ai agents (aspirational piece)",
          "topic_slug": "dao-structures-and-the-future-of-ai-agents-aspirational-piece",
          "author": "zz_tovarishch",
          "created_at": "2026-05-05T23:08:18.323000+00:00",
          "updated_at": "2026-05-05T23:24:40.927000+00:00",
          "reply_to_post_number": null,
          "url": "https://talk.nervos.org/t/dao-structures-and-the-future-of-ai-agents-aspirational-piece/10228/3",
          "content_text": "Hi, thanks for posting and sharing!\nI think many of the AI-agent-for-DAO use cases you mention fall into the green-light zone in the upper right of the chart below: AI is capable of doing the work, and people are motivated to hand that work over to AI, e.g. discussion facilitation and summary reports. Collaborative exploration tools for complex tasks correspond to the orange zone in the upper left, which is a part worth exploring for developers.\nimage2067×2031 656 KB\nSource: https://futureofwork.saltlab.stanford.edu/ for reference\nOne concern I have, though, is smart voting delegation. I am currently working on a paper, based on econometric methodology, on the consequences of AI agents acting as delegates in governance. The core finding, briefly: because governance is a behaviour that requires intrinsic motivation, once an AI agent is delegated to, users' intrinsic motivation may be eroded, and this erosion spills over beyond the delegated scope. Moreover, the erosion is positively correlated with committed-level.\nFor example, after a user delegates governance voting to an AI agent in a DAO, in other public tasks not covered by AI automation, the previously most active users are the most likely to exit.\nThese are still early findings, but AI agents for DAOs are certainly a field worth exploring. Thanks again for sharing.",
          "content_html": "<p>Hi, thanks for posting and sharing!</p>\n<p>I think many of the AI-agent-for-DAO use cases you mention fall into the green-light zone in the upper right of the chart below: AI is capable of doing the work, and people are motivated to hand that work over to AI, e.g. discussion facilitation and summary reports. Collaborative exploration tools for complex tasks correspond to the orange zone in the upper left, which is a part worth exploring for developers.</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://talk.nervos.org/uploads/default/original/2X/1/12140f8bf6c1d17dd84b2b0f1aed7896be230de4.jpeg\" data-download-href=\"https://talk.nervos.org/uploads/default/12140f8bf6c1d17dd84b2b0f1aed7896be230de4\" title=\"image\"><img src=\"https://talk.nervos.org/uploads/default/optimized/2X/1/12140f8bf6c1d17dd84b2b0f1aed7896be230de4_2_508x500.jpeg\" alt=\"image\" data-base62-sha1=\"2zVzBVtf6crYBoPmNSH5orsfLYU\" width=\"508\" height=\"500\" srcset=\"https://talk.nervos.org/uploads/default/optimized/2X/1/12140f8bf6c1d17dd84b2b0f1aed7896be230de4_2_508x500.jpeg, https://talk.nervos.org/uploads/default/optimized/2X/1/12140f8bf6c1d17dd84b2b0f1aed7896be230de4_2_762x750.jpeg 1.5x, https://talk.nervos.org/uploads/default/optimized/2X/1/12140f8bf6c1d17dd84b2b0f1aed7896be230de4_2_1016x1000.jpeg 2x\" data-dominant-color=\"DBD6D0\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">2067×2031 656 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div><br>\nSource: <a href=\"https://futureofwork.saltlab.stanford.edu/\">https://futureofwork.saltlab.stanford.edu/</a> for reference</p>\n<hr>\n<p>One concern I have, though, is smart voting delegation. I am currently working on a paper, based on econometric methodology, on the consequences of AI agents acting as delegates in governance. The core finding, briefly: because governance is a behaviour that requires intrinsic motivation, once an AI agent is delegated to, users' intrinsic motivation may be eroded, and this erosion spills over beyond the delegated scope. Moreover, the erosion is positively correlated with committed-level.</p>\n<p>For example, after a user delegates governance voting to an AI agent in a DAO, in other public tasks not covered by AI automation, the previously most active users are the most likely to exit.</p>\n<p>These are still early findings, but AI agents for DAOs are certainly a field worth exploring. Thanks again for sharing.</p>",
          "like_count": 0,
          "quote_count": 0
        },
        {
          "post_id": 24139,
          "post_number": 4,
          "topic_id": 10228,
          "topic_title": "DAO Structures and the future of Ai agents (aspirational piece)",
          "topic_slug": "dao-structures-and-the-future-of-ai-agents-aspirational-piece",
          "author": "ArthurZhang",
          "created_at": "2026-05-06T03:05:00.511000+00:00",
          "updated_at": "2026-05-06T03:05:00.511000+00:00",
          "reply_to_post_number": null,
          "url": "https://talk.nervos.org/t/dao-structures-and-the-future-of-ai-agents-aspirational-piece/10228/4",
          "content_text": "I think your vision for CK Agents cuts right to what a DAO ought to be, not just a voting portal, but a machine, not simply stateful but also smart, for collective trust. Still I would like to push the premise one layer deeper: the governance fatigue you describe, voter apathy, opaque proposals, human friction, probably cannot truly be relieved by AI if the on-chain contracts themselves remain black boxes. As Leslie Lamport’s work on formal specification relentlessly drives home, a system’s trustworthiness cannot exceed the precision of its specification. If state transitions are not explicit, deterministic, and independently verifiable, then any AI agent tasked with summarising, simulating, or arbitrating them merely interpolates guesses across obscurity. An essential prerequisite for safe on-chain agents is therefore architectural transparency.\nFrom where I stand, the Cell/UTXO model’s advantage is that it pushes state to the surface, making the ledger itself a readable specification. AI then ceases to be an oracle peering into a vault (as it does in most other models) and becomes instead a mechanical translator of already-public logic, thus reducing cognitive dimensionality without introducing epistemic risk.\nThis also loops back to your insistence on ultimate human authority. As Hannah Arendt said, ‘Power is actualized only where word and deed have not been separated… it springs up between men when they act together and vanishes the moment they disperse’. Maybe potential CK Agents should function as extensions of that space: they may automate treasury payouts, compliance scans, and dispute summaries, but only within DAO-voted bounds and by auditing a causal chain woven from transaction intents.",
          "content_html": "<p>I think your vision for CK Agents cuts right to what a DAO ought to be, not just a voting portal, but a machine, not simply stateful but also smart, for collective trust. Still I would like to push the premise one layer deeper: the governance fatigue you describe, voter apathy, opaque proposals, human friction, probably cannot truly be relieved by AI if the on-chain contracts themselves remain black boxes. As Leslie Lamport’s work on formal specification relentlessly drives home, a system’s trustworthiness <strong>cannot exceed the precision of its specification.</strong> If state transitions are not explicit, deterministic, and independently verifiable, then any AI agent tasked with summarising, simulating, or arbitrating them merely interpolates guesses across obscurity. An essential prerequisite for safe on-chain agents is therefore <strong>architectural transparency</strong>.</p>\n<p>From where I stand, the Cell/UTXO model’s advantage is that it pushes state to the surface, making the ledger itself a readable specification. AI then ceases to be an oracle peering into a vault (as it does in most other models) and becomes instead a mechanical translator of already-public logic, thus reducing cognitive dimensionality without introducing epistemic risk.</p>\n<p>This also loops back to your insistence on ultimate human authority. As Hannah Arendt said, ‘<em>Power is actualized only where word and deed have not been separated… it springs up between men when they act together and vanishes the moment they disperse</em>’. Maybe potential CK Agents should function as extensions of that space: they may automate treasury payouts, compliance scans, and dispute summaries, but only within DAO-voted bounds and by auditing a causal chain woven from transaction intents.</p>",
          "like_count": 0,
          "quote_count": 0
        }
      ]
    },
    {
      "topic_id": 10218,
      "title": "[DIS] Quantir Risk Intelligence for CKB Ecosystem and Cross-Chain Monitoring",
      "slug": "dis-quantir-risk-intelligence-for-ckb-ecosystem-and-cross-chain-monitoring",
      "url": "https://talk.nervos.org/t/dis-quantir-risk-intelligence-for-ckb-ecosystem-and-cross-chain-monitoring/10218",
      "created_at": "2026-04-30T15:40:30.553000+00:00",
      "last_posted_at": "2026-05-06T00:58:37.310000+00:00",
      "category_id": 65,
      "tags": [
        "grant-RFC",
        "grants"
      ],
      "posters": [
        "Original Poster",
        "Frequent Poster",
        "Most Recent Poster"
      ],
      "recent_posts": [
        {
          "post_id": 24138,
          "post_number": 6,
          "topic_id": 10218,
          "topic_title": "[DIS] Quantir Risk Intelligence for CKB Ecosystem and Cross-Chain Monitoring",
          "topic_slug": "dis-quantir-risk-intelligence-for-ckb-ecosystem-and-cross-chain-monitoring",
          "author": "Ckroamer",
          "created_at": "2026-05-06T00:58:37.310000+00:00",
          "updated_at": "2026-05-06T00:58:37.310000+00:00",
          "reply_to_post_number": 5,
          "url": "https://talk.nervos.org/t/dis-quantir-risk-intelligence-for-ckb-ecosystem-and-cross-chain-monitoring/10218/6",
          "content_text": "Can you showcase some concrete examples about monitoring on CKB? like how do you monitor suspicious xUDT events, I’m curious about your detailed treatments.",
          "content_html": "<p>Can you showcase some concrete examples about monitoring on CKB? like how do you monitor suspicious xUDT events, I’m curious about your detailed treatments.</p>",
          "like_count": 0,
          "quote_count": 0
        }
      ]
    }
  ]
}