Audit remediation: tests, isolate hardening, framework fixes, error hygiene #465

AlemTuzlak wants to merge 9 commits into main from
Conversation
The package had 13 source files with zero unit tests. Added 116 tests across 9 files covering trust strategies, memory + file storage, skill management tools (including name-validation boundaries for register_skill), bindings, skills-to-tools execution with mocked isolate driver, type generation, the system-prompt renderer, and skill selection with a mocked chat adapter.
…n tool-name handling
The Worker was documented, commented, and configured as if unsafe_eval
only worked in wrangler dev. Updated src/worker/index.ts, wrangler.toml,
and the README to describe the production path (Cloudflare accounts
with the unsafe_eval binding enabled), and pointed users to auth /
rate limiting as the real production gate.
Also added assertSafeToolName in wrap-code.ts to reject tool names that
would break out of the generated function identifier, e.g.
"foo'); process.exit(1); (function bar() {". Added tests covering
quotes, backticks, whitespace, semicolons, newlines, empty strings,
leading digits, and the valid identifier shapes.
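A minimal sketch of what such a guard can look like (illustrative only; the real assertSafeToolName in wrap-code.ts may differ in detail):

```typescript
// Illustrative sketch of a tool-name guard like the one described above;
// the name and exact regex are assumptions, not the shipped code.
// A tool name is interpolated into generated source as a function
// identifier, so anything outside a strict identifier shape is rejected.
const VALID_TOOL_NAME = /^[A-Za-z_$][A-Za-z0-9_$]*$/

function assertSafeToolName(name: string): void {
  if (!VALID_TOOL_NAME.test(name)) {
    throw new Error(`Invalid tool name: ${JSON.stringify(name)}`)
  }
}

assertSafeToolName('getWeather') // accepted: plain identifier
// assertSafeToolName("foo'); process.exit(1); (function bar() {") // would throw
```

A production version would likely also reject JavaScript reserved words (function, return, and so on), since the identifier regex alone admits them.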
Added a new escape-attempts.test.ts covering JSON.stringify escaping
of adversarial tool-result values and verifying the result lands in
a plain object-literal assignment (never a template literal).
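The idea can be illustrated with a tiny sketch (embedResult is a hypothetical helper, not the actual wrap-code internals):

```typescript
// Hypothetical helper showing why JSON.stringify into a plain assignment
// is safe where a template literal would not be: JSON.stringify escapes
// double quotes and backslashes, and its output is a double-quoted string,
// so backticks and ${...} in the value are inert text.
function embedResult(value: unknown): string {
  return `const __result = ${JSON.stringify(value)};`
}

const adversarial = '`; globalThis.pwned = true; `${process.exit(1)}'
console.log(embedResult(adversarial))
// The backticks and ${...} land inside a double-quoted JSON string,
// never as live syntax.
```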
Tool handling was inlined inside the text adapter with raw type casts. Extracted into src/tools/function-tool.ts + tool-converter.ts, matching the structure used by ai-openai, ai-anthropic, ai-grok, and ai-groq. Re-exported as convertFunctionToolToAdapterFormat and convertToolsToProviderFormat from the package index.

Added 29 unit tests covering the converter, client utilities (createOllamaClient, getOllamaHostFromEnv, generateId, estimateTokens), and the text adapter's streaming behaviour: RUN/TEXT_MESSAGE/tool-call lifecycle events, id synthesis when Ollama omits a tool-call id, tool forwarding to the SDK in provider format, and structured-output JSON parsing with error wrapping. The package previously had 73 source files and zero unit tests.
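The converter's shape, sketched with simplified stand-in types (the real Tool/OllamaTool definitions and the shipped implementation may differ):

```typescript
// Simplified stand-ins for the real types; assumptions, not the package's API.
interface Tool {
  name: string
  description?: string
  parameters?: Record<string, unknown>
}

interface ProviderTool {
  type: 'function'
  function: {
    name: string
    description: string
    parameters: Record<string, unknown>
  }
}

// One tool converted into the provider's function-tool envelope.
function convertFunctionToolToAdapterFormat(tool: Tool): ProviderTool {
  return {
    type: 'function',
    function: {
      name: tool.name,
      description: tool.description ?? '',
      parameters: tool.parameters ?? { type: 'object', properties: {} },
    },
  }
}

// The whole tool list, preserving undefined when the caller passed no tools.
function convertToolsToProviderFormat(
  tools?: Array<Tool>,
): Array<ProviderTool> | undefined {
  return tools?.map(convertFunctionToolToAdapterFormat)
}
```

Keeping the conversion in a standalone module, as the PR does, is what makes the raw type casts in the adapter unnecessary and the behaviour unit-testable in isolation.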
onResponse, onChunk, and onCustomEvent were captured by reference at ChatClient creation time. When a parent component re-rendered with fresh closures, the client kept calling the originals.

- ai-react / ai-preact: wrap the three callbacks the same way onFinish/onError already were, reading from optionsRef.current at call time.
- ai-vue / ai-solid: wrap the callbacks to read options.xxx at call time. This also fixes a subtler bug where using client.updateOptions to swap callbacks could not clear them (the "!== undefined" guard silently skipped undefined values).
- ai-svelte: documented the capture-at-creation behaviour — Svelte's createChat runs once per instance and there's no per-render hook, so callbacks are frozen unless the caller mutates the options object or calls client.updateOptions imperatively.

Added a React regression test that rerenders with a new onChunk and verifies the new callback fires while the original does not.
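The fix pattern, reduced to a framework-free sketch (names are illustrative; the real hooks wire optionsRef to each render):

```typescript
// A client created once, like ChatClient. The broken version would capture
// options.onChunk directly; the fixed version captures only the ref and
// dereferences it at call time, so a "rerender" that swaps the callback
// takes effect on the very next chunk.
type Callbacks = { onChunk?: (chunk: string) => void }

function createClient(optionsRef: { current: Callbacks }) {
  return {
    emitChunk(chunk: string) {
      optionsRef.current.onChunk?.(chunk)
    },
  }
}

const calls: Array<string> = []
const optionsRef = {
  current: { onChunk: (c: string) => calls.push(`first:${c}`) },
}
const client = createClient(optionsRef)

client.emitChunk('a')
// Simulate a parent rerender producing a fresh closure:
optionsRef.current = { onChunk: (c: string) => calls.push(`second:${c}`) }
client.emitChunk('b') // reaches the new callback, not the stale one
```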
…rrors

The three catch blocks that convert thrown values into RUN_ERROR events (stream-to-response.ts, activities/stream-generation-result.ts, activities/generateVideo/index.ts) were using catch (error: any) and dereferencing .message / .code without checks. Added a shared toRunErrorPayload(error, fallback) helper under activities/ that accepts Error instances, plain objects with message/code fields, or bare strings, and funnels all three sites through it with a per-site fallback message.

Removed four console.error calls in the OpenAI text adapter's chatStream that dumped the full error object to stdout. SDK errors can carry the original request (including auth headers), so the library no longer logs them; upstream callers should convert errors into structured events.

Added 8 unit tests for toRunErrorPayload, including a leaked-properties test confirming the helper does not expose extra fields.
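A hedged sketch of such a helper (the shipped toRunErrorPayload may differ in detail):

```typescript
// Illustrative sketch of a narrowing error-payload helper as described
// above; names mirror the PR, but the exact logic is an assumption.
interface RunErrorPayload {
  message: string
  code?: string
}

function toRunErrorPayload(error: unknown, fallback: string): RunErrorPayload {
  if (error instanceof Error) {
    // Validate .code at runtime instead of blindly casting it to string.
    const code = (error as Error & { code?: unknown }).code
    return {
      message: error.message || fallback,
      code: typeof code === 'string' ? code : undefined,
    }
  }
  if (typeof error === 'string') {
    return { message: error || fallback }
  }
  if (error !== null && typeof error === 'object') {
    const { message, code } = error as { message?: unknown; code?: unknown }
    return {
      // Only message/code are copied, so extra properties (for example a
      // request object carrying auth headers) never leak into the payload.
      message: typeof message === 'string' && message !== '' ? message : fallback,
      code: typeof code === 'string' ? code : undefined,
    }
  }
  return { message: fallback }
}
```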
… drivers

Covers the attack surface a malicious skill / code-mode snippet might probe: process/require/fetch should be unavailable, prototype pollution must not leak to the host or between contexts, synchronous CPU-spin loops must be interrupted by the timeout (not hang), and Function-constructor escape attempts must execute inside the isolate (never returning a real host process object). QuickJS also gets a test that globalThis mutations inside one context do not bleed into a sibling context.
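For illustration only, here is the spirit of those probes run against node:vm (which, unlike the isolate drivers under test, is explicitly not a security boundary):

```typescript
import { createContext, runInContext } from 'node:vm'

// A fresh realm with no host properties exposed.
const context = createContext(Object.create(null))

// process/require/fetch are Node globals, not ECMAScript built-ins,
// so a bare context should not see any of them.
const probe = `
  typeof process === 'undefined' &&
  typeof require === 'undefined' &&
  typeof fetch === 'undefined'
`
console.log(runInContext(probe, context)) // true

// Prototype pollution inside the context stays in the context's realm:
runInContext(`({}).__proto__.polluted = true`, context)
console.log(({} as any).polluted) // undefined on the host side
```

The real suites make the same assertions through the isolate drivers' own execute APIs, plus the timeout and Function-constructor cases that node:vm cannot meaningfully demonstrate.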
🚀 Changeset Version Preview: 11 package(s) bumped directly, 17 bumped as dependents.

View your CI Pipeline Execution ↗ for commit 1c196da
☁️ Nx Cloud last updated this comment at

🟩 Patch bumps
@tanstack/ai
@tanstack/ai-anthropic
@tanstack/ai-client
@tanstack/ai-code-mode
@tanstack/ai-code-mode-skills
@tanstack/ai-devtools-core
@tanstack/ai-elevenlabs
@tanstack/ai-event-client
@tanstack/ai-fal
@tanstack/ai-gemini
@tanstack/ai-grok
@tanstack/ai-groq
@tanstack/ai-isolate-cloudflare
@tanstack/ai-isolate-node
@tanstack/ai-isolate-quickjs
@tanstack/ai-ollama
@tanstack/ai-openai
@tanstack/ai-openrouter
@tanstack/ai-preact
@tanstack/ai-react
@tanstack/ai-react-ui
@tanstack/ai-solid
@tanstack/ai-solid-ui
@tanstack/ai-svelte
@tanstack/ai-vue
@tanstack/ai-vue-ui
@tanstack/preact-ai-devtools
@tanstack/react-ai-devtools
@tanstack/solid-ai-devtools
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
packages/typescript/ai-isolate-cloudflare/README.md (1)

73-86: ⚠️ Potential issue | 🟡 Minor — Fix the option numbering sequence.

The options are numbered 1, 3, 2 instead of 1, 2, 3. This creates confusion when users are following the setup instructions sequentially.

📝 Proposed fix for option numbering

-### Option 3: Production deployment
+### Option 2: Production deployment

wrangler deploy

The same wrangler.toml [[unsafe.bindings]] configuration applies in production. Deploying requires that your Cloudflare account has unsafe_eval enabled; without it, the Worker returns an UnsafeEvalNotAvailable error. Because this Worker executes LLM-generated code, only deploy it behind authentication, rate limiting, and an allow-listed origin.

-### Option 2: Wrangler remote dev
+### Option 3: Wrangler remote dev

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In packages/typescript/ai-isolate-cloudflare/README.md around lines 73 - 86: the headings are out of sequence; rename them so the steps read 1→2→3. Specifically, change "### Option 3: Production deployment" to "### Option 2: Production deployment" and "### Option 2: Wrangler remote dev" to "### Option 3: Wrangler remote dev", updating only those two heading lines.

packages/typescript/ai-preact/src/use-chat.ts (1)

63-74: ⚠️ Potential issue | 🔴 Critical — Add onCustomEvent wrapper to match React and Vue implementations.

Preact's UseChatOptions inherits onCustomEvent from ChatClientOptions (it's not omitted), making it available for users to pass in. However, use-chat.ts does not wrap it when constructing the ChatClient, unlike React and Vue which both forward it. This causes user-provided onCustomEvent handlers to be silently ignored in Preact. Add the following alongside the existing callback wrappers (after onError):

onCustomEvent: (eventType, data, context) =>
  optionsRef.current.onCustomEvent?.(eventType, data, context),

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In packages/typescript/ai-preact/src/use-chat.ts around lines 63 - 74: the Preact use-chat implementation is missing a wrapped forwarder for onCustomEvent, so handlers in UseChatOptions are ignored. Update the ChatClient options construction to add an onCustomEvent wrapper (similar to onResponse/onChunk/onFinish/onError) that calls optionsRef.current.onCustomEvent?.(eventType, data, context), placing it after the existing onError wrapper so Preact forwards user-provided onCustomEvent handlers like the React/Vue implementations.

🧹 Nitpick comments (6)
packages/typescript/ai-isolate-node/tests/escape-attempts.test.ts (1)

122-127: Optional: simplify the acceptance check to match the QuickJS suite.

The ternary-into-toContainEqual pattern is harder to read than the parallel assertion in ai-isolate-quickjs/tests/escape-attempts.test.ts (lines 89-92). Consider the same direct predicate for consistency:

♻️ Proposed refactor

- expect(res.success).toBe(true)
- expect(['undefined', 'blocked:']).toContainEqual(
-   typeof res.value === 'string' && res.value.startsWith('blocked:')
-     ? 'blocked:'
-     : res.value,
- )
+ expect(res.success).toBe(true)
+ expect(
+   res.value === 'undefined' ||
+     (typeof res.value === 'string' && res.value.startsWith('blocked:')),
+ ).toBe(true)

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In packages/typescript/ai-isolate-node/tests/escape-attempts.test.ts around lines 122 - 127: the assertion uses a ternary inside toContainEqual, which is hard to read; replace it with a direct predicate like the QuickJS test, asserting that either res.value === 'undefined' or (typeof res.value === 'string' && res.value.startsWith('blocked:')) is true, so the intent matches the ai-isolate-quickjs escape-attempts.test.ts pattern.

packages/typescript/ai-react/tests/use-chat.test.ts (1)
803-827: Good regression test; optional: also pin onResponse/onCustomEvent.

The negative assertion on first is exactly what the bug needed. Since the same wrapping is applied to onResponse and onCustomEvent, a couple of parameterized cases (or a describe.each) would prevent a future regression on those two without much additional code. Purely a nice-to-have.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In packages/typescript/ai-react/tests/use-chat.test.ts around lines 803 - 827: add parameterized tests mirroring the existing "should use the latest onChunk after the parent rerenders with a new callback" case for onResponse and onCustomEvent. Create a describe.each (or loop) over the callback names ['onChunk', 'onResponse', 'onCustomEvent'] and for each: renderHook using useChat with an initial mock callback (vi.fn()), rerender with a new mock callback, call result.current.sendMessage('Test') using the same createMockConnectionAdapter/createTextChunks setup, then assert that only the newer callback was called and the original was not, so behavior is identical across the three callbacks.

packages/typescript/ai-openai/src/adapters/text.ts (1)
624-643: Residual console.log in the inner stream catch still leaks error details.

The PR's stated hygiene goal (per .changeset/error-narrowing.md) is to stop logging caught error objects because SDK errors can carry auth headers / request state. The outer try/catch was removed at line 114, but this inner catch at lines 624-632 still emits a console.log with err.message. While err.message alone is typically safe, it is inconsistent with the changeset's "library no longer logs" framing and can still leak provider-side error strings to stdout in production.

Consider removing the console.log entirely (the RUN_ERROR yield at lines 633-642 already surfaces the information to the caller):

🧹 Proposed cleanup

- } catch (error: unknown) {
-   const err = error as Error & { code?: string }
-   console.log(
-     '[OpenAI Adapter] Stream ended with error. Event type summary:',
-     {
-       totalChunks: chunkCount,
-       error: err.message,
-     },
-   )
-   yield {
+ } catch (error: unknown) {
+   const err = error as Error & { code?: string }
+   yield {
      type: 'RUN_ERROR',
      runId,
      model: options.model,
      timestamp,
      error: {
        message: err.message || 'Unknown error occurred',
        code: err.code,
      },
    }
  }

Or, for full consistency with the rest of the PR, route through toRunErrorPayload (would require exporting it from @tanstack/ai and importing here).

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In packages/typescript/ai-openai/src/adapters/text.ts around lines 624 - 643: remove the residual console.log in the inner stream catch (the catch handling error: unknown that casts to err and references chunkCount) so the adapter no longer prints error details to stdout; instead only yield the RUN_ERROR payload already produced (or replace the yielded object with a call to toRunErrorPayload if you export/import that helper from @tanstack/ai) while preserving runId, options.model and timestamp in the yielded error payload.

packages/typescript/ai-code-mode-skills/tests/file-storage.test.ts (1)
1-1: Hoist readFile into the top-level import.

readFile is dynamically imported mid-test while node:fs/promises is already imported on line 1. Moving it to the static import keeps module loading consistent and removes the unnecessary await import(...).

Proposed refactor

-import { mkdtemp, rm } from 'node:fs/promises'
+import { mkdtemp, readFile, rm } from 'node:fs/promises'
@@
-      const { readFile } = await import('node:fs/promises')
       const meta = JSON.parse(
         await readFile(join(dir, 'x', 'meta.json'), 'utf-8'),
       )

Also applies to: 69-73

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In packages/typescript/ai-code-mode-skills/tests/file-storage.test.ts at line 1: hoist readFile into the static import from 'node:fs/promises' alongside mkdtemp and rm (i.e., import { mkdtemp, readFile, rm } from 'node:fs/promises') and remove the await import(...) usage. Update any places that call the dynamically-imported readFile to use the top-level readFile symbol instead (also remove the duplicated dynamic import occurrences around the other test block that mirror lines 69-73).

packages/typescript/ai-ollama/src/adapters/text.ts (1)
381-385: Redundant wrapper method.

convertToolsToOllamaFormat now simply forwards to convertToolsToProviderFormat. You could inline the call at line 468 and drop the method to reduce indirection, matching peer adapters. Not blocking.

Proposed refactor

- private convertToolsToOllamaFormat(
-   tools?: Array<Tool>,
- ): Array<OllamaTool> | undefined {
-   return convertToolsToProviderFormat(tools)
- }
@@
-      tools: this.convertToolsToOllamaFormat(options.tools),
+      tools: convertToolsToProviderFormat(options.tools),

With this, the OllamaTool type import may become unused.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In packages/typescript/ai-ollama/src/adapters/text.ts around lines 381 - 385: the method convertToolsToOllamaFormat is a redundant wrapper that only calls convertToolsToProviderFormat; remove convertToolsToOllamaFormat and replace its call sites with direct calls to convertToolsToProviderFormat, then delete the now-unused declaration and clean up any resulting unused imports such as the OllamaTool type.

packages/typescript/ai-ollama/tests/text-adapter.test.ts (1)
265-303: Nit: as any casts on structuredOutput inputs.

The three as any casts side-step the real parameter type. If structuredOutput has a precise input type, importing and constructing it (or using satisfies with the actual type) would protect these tests from future signature drift. Not blocking.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In packages/typescript/ai-ollama/tests/text-adapter.test.ts around lines 265 - 303: replace the as any casts on structuredOutput inputs with the real input type (or satisfies) so the test objects are type-checked. Import the actual input type that structuredOutput expects and annotate the test payloads accordingly, updating the three occurrences around calls to adapter.structuredOutput so the objects conform to the real signature.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:

In .changeset/useChat-callback-propagation.md:
- Around lines 8-10: The changeset mentions onCustomEvent for all packages but packages/typescript/ai-preact/src/use-chat.ts never forwards options.onCustomEvent to ChatClient; update the Preact hook (useChat in that file) to accept and forward onCustomEvent the same way React/Vue/Solid do (ensure the options object passed into new ChatClient reads optionsRef.current.onCustomEvent or directly forwards options.onCustomEvent), or alternatively remove onCustomEvent from the ai-preact portion of the changeset so the release note matches shipped behavior.

In packages/typescript/ai-code-mode-skills/tests/file-storage.test.ts:
- Around lines 78-87: The test "preserves createdAt when updating an existing skill" is flaky because it relies on a 5ms real sleep; replace the non-deterministic delay with Vitest fake timers: call vi.useFakeTimers(), capture a base time with Date.now(), use vi.setSystemTime(base + N) between the two storage.save calls so updatedAt deterministically differs, then restore timers with vi.useRealTimers() at the end, so the expect(second.updatedAt).not.toBe(first.updatedAt) is reliable.

In packages/typescript/ai-isolate-cloudflare/src/worker/wrap-code.ts:
- Around lines 12-20: The regex VALID_TOOL_NAME currently allows JS reserved words, so assertSafeToolName should also reject reserved identifiers; add a denylist of JS reserved words (e.g., "break", "case", "class", "const", "continue", "debugger", "default", "delete", "do", "else", "export", "extends", "finally", "for", "function", "if", "import", "in", "instanceof", "let", "new", "return", "super", "switch", "this", "throw", "try", "typeof", "var", "void", "while", "with", "yield", "await", plus future/reserved words) and update assertSafeToolName to check both VALID_TOOL_NAME.test(name) and that name is not in the denylist; if it is reserved, throw the same style Error with a message like "Invalid tool name 'X': reserved JavaScript keyword" so generation fails fast.

In packages/typescript/ai-ollama/tests/text-adapter.test.ts:
- Around lines 74-78: The test for OLLAMA_HOST is weak because it only checks adapter.model; update the test for ollamaText to assert that the env-derived host is actually used by either (a) asserting a host property on the returned adapter (if the adapter exposes one) or (b) spying on the mocked Ollama constructor (from vi.mock) to verify it was called with { host: 'http://from-env:11434' } when OLLAMA_HOST is stubbed; alternatively invoke adapter.chatStream (or another method that instantiates Ollama) and assert the mocked constructor received the env host argument.

In packages/typescript/ai-ollama/tests/utils.test.ts:
- Around lines 50-53: The test title and setup are inconsistent: the spec says "when OLLAMA_HOST is unset" but the test stubs OLLAMA_HOST to an empty string; either (A) stub the env as truly unset (e.g., delete it from process.env) and keep the description, or (B) change the description to "when OLLAMA_HOST is empty" to reflect the current vi.stubEnv('OLLAMA_HOST', '') setup, applying the fix around the getOllamaHostFromEnv() assertion.

In packages/typescript/ai-ollama/tsconfig.json:
- Around lines 6-7: The tsconfig "include" currently lists "tests/**/*.ts", which doesn't emit under Vite's srcDir and inherits "noEmit": true; remove "tests/**/*.ts" from the include and adopt the project convention by placing test files alongside sources as "src/**/*.test.ts" (or adjust the include as needed), ensure you do not rely on tsconfig to emit tests due to the inherited "noEmit", and if publishing tests is required update package.json "files" to include the new test locations.

In packages/typescript/ai/src/activities/error-payload.ts:
- Around lines 13-18: When handling the Error branch, don't cast error.code directly to string; instead read (error as Error & { code?: unknown }).code into a local value and only assign it to the returned code field if typeof that value === 'string' (otherwise set undefined), mirroring the plain-object branch's runtime check.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
4ab4f3eb-3d5b-47a7-96b4-b8118dba9101

📒 Files selected for processing (43)
- .changeset/error-narrowing.md
- .changeset/isolate-cloudflare-production.md
- .changeset/ollama-tool-converter.md
- .changeset/useChat-callback-propagation.md
- packages/typescript/ai-code-mode-skills/tests/create-skill-management-tools.test.ts
- packages/typescript/ai-code-mode-skills/tests/create-skills-system-prompt.test.ts
- packages/typescript/ai-code-mode-skills/tests/file-storage.test.ts
- packages/typescript/ai-code-mode-skills/tests/generate-skill-types.test.ts
- packages/typescript/ai-code-mode-skills/tests/memory-storage.test.ts
- packages/typescript/ai-code-mode-skills/tests/select-relevant-skills.test.ts
- packages/typescript/ai-code-mode-skills/tests/skills-to-bindings.test.ts
- packages/typescript/ai-code-mode-skills/tests/skills-to-tools.test.ts
- packages/typescript/ai-code-mode-skills/tests/trust-strategies.test.ts
- packages/typescript/ai-code-mode-skills/tsconfig.json
- packages/typescript/ai-isolate-cloudflare/README.md
- packages/typescript/ai-isolate-cloudflare/src/worker/index.ts
- packages/typescript/ai-isolate-cloudflare/src/worker/wrap-code.ts
- packages/typescript/ai-isolate-cloudflare/tests/escape-attempts.test.ts
- packages/typescript/ai-isolate-cloudflare/tests/worker.test.ts
- packages/typescript/ai-isolate-cloudflare/wrangler.toml
- packages/typescript/ai-isolate-node/tests/escape-attempts.test.ts
- packages/typescript/ai-isolate-quickjs/tests/escape-attempts.test.ts
- packages/typescript/ai-ollama/src/adapters/text.ts
- packages/typescript/ai-ollama/src/index.ts
- packages/typescript/ai-ollama/src/tools/function-tool.ts
- packages/typescript/ai-ollama/src/tools/index.ts
- packages/typescript/ai-ollama/src/tools/tool-converter.ts
- packages/typescript/ai-ollama/tests/text-adapter.test.ts
- packages/typescript/ai-ollama/tests/tool-converter.test.ts
- packages/typescript/ai-ollama/tests/utils.test.ts
- packages/typescript/ai-ollama/tsconfig.json
- packages/typescript/ai-openai/src/adapters/text.ts
- packages/typescript/ai-preact/src/use-chat.ts
- packages/typescript/ai-react/src/use-chat.ts
- packages/typescript/ai-react/tests/use-chat.test.ts
- packages/typescript/ai-solid/src/use-chat.ts
- packages/typescript/ai-svelte/src/create-chat.svelte.ts
- packages/typescript/ai-vue/src/use-chat.ts
- packages/typescript/ai/src/activities/error-payload.ts
- packages/typescript/ai/src/activities/generateVideo/index.ts
- packages/typescript/ai/src/activities/stream-generation-result.ts
- packages/typescript/ai/src/stream-to-response.ts
- packages/typescript/ai/tests/error-payload.test.ts
| "include": ["src/**/*.ts", "src/**/*.tsx", "tests/**/*.ts"], | ||
| "exclude": ["node_modules", "dist", "**/*.config.ts"] |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify whether the package's build currently emits tests/ into dist and whether package.json "files"/exports would ship them.
fd -t f 'package.json' packages/typescript/ai-ollama --max-depth 2 --exec cat {}
echo '---'
cat tsconfig.json 2>/dev/null || true
echo '---'
# Check other ai-* packages for how they handle tests in tsconfig
fd -t f 'tsconfig.json' packages/typescript --max-depth 3 --exec sh -c 'echo "== {} =="; cat {}'

Repository: TanStack/ai
Length of output: 11835
🏁 Script executed:
# Check if tests/ directory exists and what's in it
find packages/typescript/ai-ollama/tests -type f -name "*.ts" 2>/dev/null | head -20
# Check the vite config to understand build behavior
cat packages/typescript/ai-ollama/vite.config.ts 2>/dev/null || echo "vite.config.ts not found"
# Verify the actual tsconfig inheritance - check if noEmit is present
rg "noEmit" packages/typescript/ai-ollama/tsconfig.jsonRepository: TanStack/ai
Length of output: 993
🏁 Script executed:
```shell
# Check the root tsconfig to see if noEmit is inherited
cat tsconfig.json | head -50
# Check if there's a vite build and what files it outputs
# Let's look for any build scripts or check what vite would output
cd packages/typescript/ai-ollama
# Check the merged tsconfig behavior - does noEmit get inherited?
cat packages/typescript/ai-ollama/tsconfig.json
# Also verify the learning about test file placement - tests are in tests/ not alongside src
ls -la packages/typescript/ai-ollama/src/*.test.ts 2>/dev/null | wc -l
echo "---"
ls -la packages/typescript/ai-ollama/tests/*.test.ts 2>/dev/null | wc -l
```
Length of output: 1016
🏁 Script executed:
```shell
# Get the full ai-ollama tsconfig again to double-check the inheritance
cat packages/typescript/ai-ollama/tsconfig.json
# Understand the actual build behavior - check what tanstackViteConfig does
# and whether it respects the srcDir setting
rg -A 10 "tanstackViteConfig" packages/typescript/ai-ollama/vite.config.ts
```
Length of output: 650
🏁 Script executed:
```shell
# Check how the vite build actually works - does it respect srcDir?
# Look for the actual vite config implementation or documentation
rg -A 20 "export.*tanstackViteConfig" packages --type ts 2>/dev/null | head -50
# Also check if there are actual test files ending up in dist after build
# First, let's see what the build process actually produces
# Check if there's a build output or CI that shows what gets built
git log --oneline -n 1 packages/typescript/ai-ollama/
# Check one more time - does tsconfig actually control what vite outputs or is it just for type checking?
# The key question: does vite build output tests to dist/tests?
find packages/typescript/ai-ollama -name "vite.config.ts" -o -name "build.ts" | xargs cat 2>/dev/null
```
Length of output: 861
Test files are not emitted to dist/ due to Vite's srcDir: './src' configuration and inherited noEmit: true; however, test placement is inconsistent with project conventions.
The noEmit: true setting from the root tsconfig is inherited and preserved even with outDir: "dist" specified locally. More importantly, Vite (the actual build tool) is explicitly configured with srcDir: './src', which limits compilation to only the src directory. The tests/**/*.ts inclusion in tsconfig only affects type-checking, not build output. Additionally, the package.json "files" field restricts npm publish to only ["dist", "src"], preventing any test files from being published even if they were emitted.
That said, tests should follow the project convention of being placed in src/**/*.test.ts files alongside source code, not in a separate tests/ directory. Consider moving tests into src for consistency with the project's stated practices.
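A sketch of what the suggested include change could look like, assuming the package extends the root config; the `extends` path and exact globs here are illustrative, not the package's actual file:

```jsonc
{
  // Illustrative sketch only; verify against the real package tsconfig.
  "extends": "../../tsconfig.json",
  // Tests placed alongside sources as src/**/*.test.ts are covered by
  // the src globs, so tests/ no longer needs its own include entry.
  // Emit behaviour is still governed by the inherited "noEmit": true.
  "include": ["src/**/*.ts", "src/**/*.tsx"],
  "exclude": ["node_modules", "dist", "**/*.config.ts"]
}
```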
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-ollama/tsconfig.json` around lines 6 - 7, The tsconfig
"include" currently lists "tests/**/*.ts" which doesn't emit under Vite's srcDir
and inherits "noEmit": true; remove "tests/**/*.ts" from the include and adopt
the project convention by placing test files alongside sources as
"src/**/*.test.ts" (or change include to "src/**/*.{ts,tsx}" or
"src/**/*.test.ts" as needed), ensure you do not rely on tsconfig to emit tests
due to inherited "noEmit", and if publishing tests is required update
package.json "files" to include the new test locations.
- ai-preact: forward `onCustomEvent` in `useChat` (the changeset claimed the fix covered Preact, but the callback was silently dropped before reaching `ChatClient`).
- ai-isolate-cloudflare: reject JS reserved keywords as tool names (`return`, `class`, `function`, `if`, `await`, ...) so the wrapper fails fast at generation time instead of with a cryptic SyntaxError at eval.
- ai/src/activities/error-payload: apply the `typeof` string check to the Error branch's `code` field, matching the plain-object branch. Some SDKs attach numeric or Symbol codes to Error instances.
- ai-ollama text-adapter test: strengthen the `OLLAMA_HOST` assertion by tracking the mocked Ollama constructor args, so the test fails if the env var is ignored.
- ai-ollama utils test: rename 'when OLLAMA_HOST is unset' to 'empty', since the setup stubs an empty string.
- ai-code-mode-skills file-storage test: use `vi.useFakeTimers()` for the `createdAt`/`updatedAt` round-trip instead of a 5ms real sleep.
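The reserved-keyword rejection above can be sketched as follows. The name matches `assertSafeToolName` from `wrap-code.ts`, but this body is an illustrative approximation, not the actual implementation:

```typescript
// Illustrative sketch, not the real wrap-code.ts implementation.
// Rejects names that are not plain JS identifiers, plus reserved words,
// so a tool name can never break out of a generated function wrapper.
const RESERVED_WORDS = new Set([
  'return', 'class', 'function', 'if', 'await', 'else', 'for', 'while',
  'const', 'let', 'var', 'new', 'this', 'typeof', 'delete', 'yield',
])

function assertSafeToolName(name: string): void {
  const isIdentifier = /^[A-Za-z_$][A-Za-z0-9_$]*$/.test(name)
  if (!isIdentifier || RESERVED_WORDS.has(name)) {
    throw new Error(`Unsafe tool name: ${JSON.stringify(name)}`)
  }
}
```

Because the check whitelists a strict identifier shape rather than blacklisting dangerous characters, injection payloads, empty strings, and leading digits all fail for the same reason.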
Summary
Six-commit remediation pass from a codebase audit. Scope is deliberately defensive — no user-facing API changes, no new features.
Commits
- test(ai-code-mode-skills) — 116 new tests across 9 files for the previously-untested skill-execution package (trust strategies, storage, management tools, bindings, isolate-driven execution, type generation, system prompt, skill selection).
- feat(ai-isolate-cloudflare) — Worker code, wrangler.toml, and README updated to reflect that `unsafe_eval` is production-capable (previously documented as dev-only). Added `assertSafeToolName` so a malicious tool name cannot break out of the generated wrapper function. 6 new escape-attempt tests.
- refactor(ai-ollama) — Extracted inline tool conversion into `src/tools/`, matching peer adapters (openai/anthropic/grok/groq). 29 new tests (converter + client utils + streaming/tool-call/structured-output paths). The package previously had 73 source files and zero unit tests.
- fix(frameworks) — `onResponse`/`onChunk`/`onCustomEvent` were captured by reference at `ChatClient` creation and never updated when parents re-rendered. Fixed in React, Preact, Vue, and Solid. Svelte documents the capture-at-creation lifecycle (its `createChat` runs once and there's no per-render hook). Includes a React regression test.
- refactor(ai, ai-openai) — Narrowed `catch (error: any)` in `stream-to-response.ts`, `activities/stream-generation-result.ts`, and `activities/generateVideo/index.ts` through a new shared `toRunErrorPayload` helper. Removed 4 `console.error` calls in the OpenAI text adapter that dumped the full SDK error object (a potential auth-header leak in logs). 8 new tests for the helper.
- test(isolates) — Escape-attempt coverage for the Node and QuickJS drivers: `process`/`require`/`fetch` unavailable, prototype pollution doesn't leak to the host or between contexts, CPU-spin loops hit the timeout, Function-constructor attempts stay inside the isolate. QuickJS additionally verifies per-context `globalThis` isolation.

Test plan
- `pnpm --filter @tanstack/ai-code-mode-skills test:lib` — 116 tests
- `pnpm --filter @tanstack/ai-isolate-cloudflare test:lib` — 35 tests
- `pnpm --filter @tanstack/ai-isolate-node test:lib` — 24 tests (8 new)
- `pnpm --filter @tanstack/ai-isolate-quickjs test:lib` — 28 tests (8 new)
- `pnpm --filter @tanstack/ai-ollama test:lib` — 29 tests
- `pnpm --filter @tanstack/ai test:lib` — 647 tests (8 new)
- `pnpm --filter @tanstack/ai-openai test:lib` — 127 tests
- `pnpm --filter @tanstack/ai-react test:lib` — 98 tests (1 new regression)
- `pnpm --filter @tanstack/ai-vue test:lib` — 86 tests
- `pnpm --filter @tanstack/ai-solid test:lib` — 94 tests
- `pnpm --filter @tanstack/ai-preact test:lib` — 52 tests
- `pnpm --filter @tanstack/ai-svelte test:lib` — 53 tests
- `test:types` and `test:eslint` clean for all touched packages.
- `.github/workflows/e2e.yml`

Changesets
Four patch-level changesets added covering the affected packages.
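The fix(frameworks) change in the commit list above amounts to a latest-ref pattern: the client keeps one stable wrapper per callback and re-renders only swap the value it reads. A framework-agnostic sketch (names hypothetical, not the actual wrapper code):

```typescript
// Hypothetical sketch: instead of handing the user's callback to the
// client directly (capturing it forever at creation time), hand over a
// stable wrapper that reads the latest value from a mutable box.
type Handler<T> = ((value: T) => void) | undefined

function createLatestRef<T>(initial: Handler<T>) {
  const box = { current: initial }
  // The client stores only `wrapper`; a re-render just updates box.current.
  const wrapper = (value: T) => box.current?.(value)
  return { box, wrapper }
}
```

In React this box would be a `useRef`; Vue and Solid can close over a plain ref, which is why a user-provided `undefined` no-ops cleanly through `?.`.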
Notes
- wrangler dev / production.
- `client.updateOptions` in `@tanstack/ai-client` skips `undefined` values via a `!== undefined` guard — the new Vue/Solid wrapper pattern side-steps that (a user-provided `undefined` cleanly no-ops through `?.`). If we later want imperative clearing, that guard is worth revisiting.

Summary by CodeRabbit
New Features
Bug Fixes
`useChat` callback propagation so streaming callbacks always use the latest handlers.

Security
Improvements
`unsafe_eval` guidance.

Tests