feat(ai): debug logging system across all activities and adapters #467
AlemTuzlak wants to merge 61 commits into main.
Conversation
…r-internals subpath
…ger in text adapter
…gger in text adapter
…ns and normalize provider log key
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
📝 Walkthrough

Added a package-wide debug logging system: logger types, Console/Internal loggers, a debug-option resolver, threading of an InternalLogger through activities/engines/middleware/adapters, per-category emits (request/provider/output/middleware/tools/agentLoop/config/errors), docs, tests, and E2E support.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Activity
    participant Resolver as DebugResolver
    participant Logger as InternalLogger
    participant Engine
    participant Middleware
    participant Adapter
    participant Provider
    User->>Activity: call chat(..., debug)
    Activity->>Resolver: resolveDebugOption(debug)
    Resolver-->>Logger: InternalLogger
    Activity->>Engine: instantiate TextEngine(..., logger)
    Engine->>Logger: request("chat start")
    Engine->>Middleware: runOnStart(...)
    Middleware->>Logger: middleware/config(...)
    Engine->>Adapter: chatStream({ logger, ... })
    Adapter->>Logger: request("provider/model")
    Adapter->>Provider: open stream
    Provider-->>Adapter: stream chunk
    Adapter->>Logger: provider("chunk", {chunk})
    Adapter-->>Engine: yield chunk
    Engine->>Logger: output("emit chunk")
    Engine->>Middleware: runOnChunk(...)
    Middleware->>Logger: middleware("onChunk result")
```
```mermaid
sequenceDiagram
    participant User
    participant Activity
    participant Resolver as DebugResolver
    participant Logger as InternalLogger
    participant Adapter
    participant Provider
    User->>Activity: generateImage(..., debug)
    Activity->>Resolver: resolveDebugOption(debug)
    Resolver-->>Logger: InternalLogger
    Activity->>Logger: request("generateImage")
    Activity->>Adapter: generateImages({ logger, ... })
    Adapter->>Logger: request("provider/model")
    Adapter->>Provider: images.generate()
    alt Success
        Provider-->>Adapter: images
        Adapter-->>Activity: images
        Activity->>Logger: output("generated N images")
    else Error
        Provider-->>Adapter: error
        Adapter->>Logger: errors("provider.generateImage fatal", {error})
        Adapter-->>Activity: throw
        Activity->>Logger: errors("generateImage activity failed")
    end
```
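The resolution step shown at the top of both diagrams can be sketched as below. The category names come from the walkthrough above, but the class and function shapes here are illustrative assumptions, not the actual @tanstack/ai implementation:

```typescript
// Minimal sketch of a debug-option resolver, assuming `debug` may be a
// boolean (all categories on/off) or a per-category map. All names here
// are stand-ins for the library's real types.
type DebugCategory =
  | 'request' | 'provider' | 'output' | 'middleware'
  | 'tools' | 'agentLoop' | 'config' | 'errors'

type DebugOption = boolean | Partial<Record<DebugCategory, boolean>>

const ALL_CATEGORIES: DebugCategory[] = [
  'request', 'provider', 'output', 'middleware',
  'tools', 'agentLoop', 'config', 'errors',
]

class InternalLogger {
  constructor(private categories: Record<DebugCategory, boolean>) {}

  isEnabled(cat: DebugCategory): boolean {
    return this.categories[cat]
  }

  // One method per category in the real design; `errors` shown as an example.
  errors(msg: string, meta?: unknown): void {
    if (this.categories.errors) console.debug(`[errors] ${msg}`, meta)
  }
}

function resolveDebugOption(debug?: DebugOption): InternalLogger {
  const categories = Object.fromEntries(
    ALL_CATEGORIES.map((c) => [
      c,
      debug === true ||
        (typeof debug === 'object' && debug !== null && debug[c] === true),
    ]),
  ) as Record<DebugCategory, boolean>
  return new InternalLogger(categories)
}
```

With this shape, `debug: true` enables every category, omitting `debug` disables them all, and a partial map like `{ errors: true }` enables only what is listed.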
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
⚠️ Warning: Review ran into problems. 🔥 Problems: Timed out fetching pipeline failures after 30000ms.
🚀 Changeset Version Preview: 10 package(s) bumped directly, 23 bumped as dependents.

🟥 Major bumps
🟨 Minor bumps
🟩 Patch bumps
View your CI Pipeline Execution ↗ for commit 9463efa
☁️ Nx Cloud last updated this comment.
@tanstack/ai
@tanstack/ai-anthropic
@tanstack/ai-client
@tanstack/ai-code-mode
@tanstack/ai-code-mode-skills
@tanstack/ai-devtools-core
@tanstack/ai-elevenlabs
@tanstack/ai-event-client
@tanstack/ai-fal
@tanstack/ai-gemini
@tanstack/ai-grok
@tanstack/ai-groq
@tanstack/ai-isolate-cloudflare
@tanstack/ai-isolate-node
@tanstack/ai-isolate-quickjs
@tanstack/ai-ollama
@tanstack/ai-openai
@tanstack/ai-openrouter
@tanstack/ai-preact
@tanstack/ai-react
@tanstack/ai-react-ui
@tanstack/ai-solid
@tanstack/ai-solid-ui
@tanstack/ai-svelte
@tanstack/ai-vue
@tanstack/ai-vue-ui
@tanstack/preact-ai-devtools
@tanstack/react-ai-devtools
@tanstack/solid-ai-devtools
Actionable comments posted: 16
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
packages/typescript/ai-openai/src/adapters/tts.ts (1)
53-73: ⚠️ Potential issue | 🟠 Major

Move validators inside the `try` block for consistent error logging.

The validators should be wrapped in the `try` block to ensure validation errors are logged via `logger.errors()`, matching the pattern in the `ai-grok` image adapter in this PR. Currently, validators execute outside the `try` block, bypassing consistent error logging.

The concern about a missing `logger` is invalid: `TTSOptions` already requires `logger: InternalLogger` (no optional field), and the activity wrapper ensures it is always provided.

🛠️ Proposed fix

```diff
   const { logger } = options
   const { model, text, voice, format, speed, modelOptions } = options

   logger.request(`activity=generateSpeech provider=openai model=${model}`, {
     provider: 'openai',
     model,
   })

-  // Validate inputs using existing validators
-  const audioOptions = {
-    input: text,
-    model,
-    voice: voice as OpenAITTSVoice,
-    speed,
-    response_format: format as OpenAITTSFormat,
-    ...modelOptions,
-  }
-
-  validateAudioInput(audioOptions)
-  validateSpeed(audioOptions)
-  validateInstructions(audioOptions)
-
-  // Build request
-  const request: OpenAI_SDK.Audio.SpeechCreateParams = {
-    model,
-    input: text,
-    voice: voice || 'alloy',
-    response_format: format,
-    speed,
-    ...modelOptions,
-  }
-
   try {
+    // Validate inputs using existing validators
+    const audioOptions = {
+      input: text,
+      model,
+      voice: voice as OpenAITTSVoice,
+      speed,
+      response_format: format as OpenAITTSFormat,
+      ...modelOptions,
+    }
+
+    validateAudioInput(audioOptions)
+    validateSpeed(audioOptions)
+    validateInstructions(audioOptions)
+
+    // Build request
+    const request: OpenAI_SDK.Audio.SpeechCreateParams = {
+      model,
+      input: text,
+      voice: voice || 'alloy',
+      response_format: format,
+      speed,
+      ...modelOptions,
+    }
+
     const response = await this.client.audio.speech.create(request)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/src/adapters/tts.ts` around lines 53-73, move the validation calls so they execute inside the existing `try` block that handles the TTS generation so validation errors are caught and forwarded to `logger.errors()` like the other adapters; specifically, keep construction of the `audioOptions` object (input/text, model, voice, speed, response_format, modelOptions) where it is, then relocate the calls to `validateAudioInput(audioOptions)`, `validateSpeed(audioOptions)` and `validateInstructions(audioOptions)` into the `try` block that follows `logger.request(...)` so they run before the actual OpenAI call and any thrown validation errors are handled by the catch/`logger.errors` flow.

packages/typescript/ai/src/activities/generateTranscription/index.ts (1)
173-192: ⚠️ Potential issue | 🟡 Minor

Inconsistent provider identification between logger and event client.

`providerName` is derived with a `provider ?? name ?? 'unknown'` fallback for `logger.request/output/errors`, but the `aiEventClient.emit(...)` calls on lines 180 and 200 still use `adapter.name` directly. Events and logs for the same request can disagree on the provider string, and the event-client calls will emit `undefined` when the adapter only exposes `provider`. Consider using `providerName` consistently, or drop the fallback if `adapter.name` is always defined (in which case the cast trickery on lines 173-176 is unnecessary).

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/src/activities/generateTranscription/index.ts` around lines 173-192, the event emissions use `adapter.name` directly while the logger uses `providerName` (which falls back to provider/name/'unknown'); update the `aiEventClient.emit` calls (the provider field in both transcription:request:started and transcription:request:finished/error emits) to use the computed `providerName` instead of `adapter.name` so events and logs are consistent, and ensure any other uses of `adapter.name` in this activity are replaced with `providerName` (or else remove the provider/name fallback and related casts if you decide `adapter.name` is always present).
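The fallback chain discussed in this finding can be kept in one place with a small helper. The name `resolveLoggerAndProvider` and the adapter/logger shapes below are illustrative assumptions, not the library's actual API:

```typescript
// Hypothetical helper consolidating the provider/name fallback used by
// several activities. `AdapterLike` and `InternalLogger` are stand-ins.
type InternalLogger = { request: (msg: string, meta?: unknown) => void }

type AdapterLike = { name?: string; provider?: string }

function resolveLoggerAndProvider(
  adapter: AdapterLike,
  makeLogger: () => InternalLogger, // stands in for resolveDebugOption(debug)
): { logger: InternalLogger; providerName: string } {
  return {
    logger: makeLogger(),
    // Single place to evolve the fallback chain, so logger output and
    // event emissions always agree on the provider string.
    providerName: adapter.provider ?? adapter.name ?? 'unknown',
  }
}
```

Both the logger calls and `aiEventClient.emit(...)` would then read `providerName` from one source instead of deriving it independently.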
🧹 Nitpick comments (16)
packages/typescript/ai-openrouter/src/adapters/image.ts (1)
68-79: Logger field is required by the type system; defensive chaining is optional.

The `logger` field in `ImageGenerationOptions` is a required, non-optional property. The activity entry point `generateImage/index.ts` always creates and injects the logger before calling the adapter. If `generateImages` is invoked directly, TypeScript's type system will enforce that a valid `InternalLogger` is provided.

Adding optional chaining (`logger?.request()`) is defensible as a hardening measure against type-contract violations, but it is not necessary to prevent runtime errors under normal usage. Consider it only if the adapter needs to remain callable in untyped/raw contexts; otherwise, the current implementation is safe.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openrouter/src/adapters/image.ts` around lines 68-79, the logger on ImageGenerationOptions is required, so do not add optional chaining; ensure the adapter continues to call `logger.request(...)` directly (keep the current `logger.request` call), confirm `ImageGenerationOptions` declares logger as a non-optional `InternalLogger`, and avoid changing `generateImages` or `generateImage/index.ts` to accept a nullable logger; only add defensive `?.request()` if you intentionally want to support untyped/raw callers.

packages/typescript/ai/src/activities/chat/adapter.ts (1)
20-27: Doc placement: per-chunk guidance doesn't apply to structured output.

This JSDoc lives on `StructuredOutputOptions`, which is used by the non-streaming `structuredOutput` method. The instruction to call `logger.provider()` "for each chunk received" fits `chatStream` but not a single-response call; there are no chunks here. Consider moving the sequence guidance to the `TextAdapter.chatStream`/`BaseTextAdapter.chatStream` JSDoc, and leaving this block with only request/errors guidance (or `logger.provider()` for the single response if that's the intent).

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/src/activities/chat/adapter.ts` around lines 20-27, the JSDoc on StructuredOutputOptions incorrectly instructs per-chunk logging (`logger.provider()` "for each chunk received") even though StructuredOutputOptions is used by the non-streaming structuredOutput; update the docs to only require `logger.request()` and `logger.errors()` (or note `logger.provider()` may be used for the single response if intended), and move the per-chunk guidance (call `logger.provider()` for each chunk) into the streaming-specific docs for `TextAdapter.chatStream` / `BaseTextAdapter.chatStream` so chunked logging guidance applies only to chatStream implementations.

packages/typescript/ai-openai/src/adapters/video.ts (1)
112-125: Error logging catches validation errors too.

`validateVideoSize` and `validateVideoSeconds` were moved inside the `try` block, so their thrown errors now get logged as `openai.createVideoJob fatal` through `logger.errors`. If that's intentional (validation failures are user-visible errors worth surfacing), fine; but consider whether client-side validation throws should be tagged as "fatal" adapter errors vs. just rethrown. A small distinction (e.g., only logging after the SDK call) would avoid noisy logs on obvious input-shape mistakes.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/src/adapters/video.ts` around lines 112-125, the validation errors from `validateVideoSize` and `validateVideoSeconds` are being caught and logged as fatal by `logger.errors` inside the try/catch around the OpenAI SDK call; move these validation calls outside the `try` block (or at least before the SDK invocation) so client-side validation errors are thrown before the try/catch, or alternatively only call `logger.errors` after the SDK call fails (e.g., check the error origin and skip logging for validation errors). Update the `createVideoJob` flow to call `validateVideoSize` and `validateVideoSeconds` prior to entering the `try` that calls the OpenAI/Sora SDK and ensure `logger.errors('openai.createVideoJob fatal', ...)` is only used for SDK/runtime errors, not validation exceptions.

packages/typescript/ai-openai/src/adapters/image.ts (1)
73-94: Validation errors are now logged as fatal.

`validatePrompt`/`validateImageSize`/`validateNumberOfImages` sit inside the `try` block, so user-input validation failures flow through `logger.errors('openai.generateImage fatal', ...)`. That conflates caller-input errors with actual provider/transport failures in telemetry.

Minor; feel free to ignore if intentional, or move the validations above the `try` to scope fatal logging to the actual SDK call.

Proposed tweak

```diff
-  try {
-    // Validate inputs
-    validatePrompt({ prompt, model })
-    validateImageSize(model, size)
-    validateNumberOfImages(model, numberOfImages)
-
-    // Build request based on model type
-    const request = this.buildRequest(options)
+  // Validate inputs (caller errors, not fatal)
+  validatePrompt({ prompt, model })
+  validateImageSize(model, size)
+  validateNumberOfImages(model, numberOfImages)
+
+  const request = this.buildRequest(options)
+
+  try {
     const response = await this.client.images.generate({
       ...request,
       stream: false,
     })
-
     return this.transformResponse(model, response)
   } catch (error) {
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/src/adapters/image.ts` around lines 73-94, the validation helpers `validatePrompt`, `validateImageSize`, and `validateNumberOfImages` are inside the `try` block, causing user-input validation failures to be logged as fatal in `logger.errors`; move these three calls so they execute before the `try` (leave `buildRequest`, `this.buildRequest(options)` and the SDK call `this.client.images.generate({...})` inside the `try`), or narrow the catch to only wrap the provider call so that `logger.errors('openai.generateImage fatal', ...)` only records actual provider/transport errors and not caller validation errors.

packages/typescript/ai-gemini/src/adapters/image.ts (1)
94-120: Same validation-as-fatal note as the OpenAI image adapter.

`validatePrompt`/`validateImageSize`/`validateNumberOfImages` run inside the `try`, so caller-input errors get logged under `gemini.generateImage fatal`. Consider moving the synchronous validations above the `try` to keep "fatal" reserved for provider failures. Non-blocking.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-gemini/src/adapters/image.ts` around lines 94-120, move the synchronous input validations out of the `try` block so caller validation errors aren't logged as provider "fatal" failures: call `validatePrompt({ prompt, model })`, `validateImageSize(model, options.size)` and `validateNumberOfImages(model, options.numberOfImages)` before entering the `try` that contains the provider calls (the `this.isGeminiImageModel` / `generateWithGeminiApi` path, `buildImagenConfig`, `client.models.generateImages` and `transformImagenResponse`). Keep the try/catch solely around the async provider interactions and retain `logger.errors('gemini.generateImage fatal', ...)` for real runtime/provider errors.

packages/typescript/ai-openrouter/src/adapters/text.ts (1)
171-175: Consider not logging `RequestAbortedError` as a fatal error.

The catch unconditionally calls `logger.errors(...)` before the branch on line 187 recognizes `RequestAbortedError` as a normal user-initiated cancellation (mapped to a `RUN_ERROR` with `code: 'aborted'`). Users wiring `logger.error` to alerting will see spurious fatals whenever a stream is aborted. Worth distinguishing aborts from true failures here (and symmetrically at lines 261-264 for `structuredOutput`, where `RequestAbortedError` is also treated as a soft abort).

♻️ Proposed adjustment

```diff
   } catch (error) {
-    logger.errors('openrouter.chatStream fatal', {
-      error,
-      source: 'openrouter.chatStream',
-    })
+    if (!(error instanceof RequestAbortedError)) {
+      logger.errors('openrouter.chatStream fatal', {
+        error,
+        source: 'openrouter.chatStream',
+      })
+    }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openrouter/src/adapters/text.ts` around lines 171-175, the catch in `openrouter.chatStream` currently unconditionally logs fatal errors; change it to detect `RequestAbortedError` (e.g., `error.name === 'RequestAbortedError'` or `instanceof RequestAbortedError`) and avoid calling `logger.errors` for aborts (use a softer log or no log), only calling `logger.errors` for true failures; apply the same adjustment in the `structuredOutput` catch (the lines handling `RequestAbortedError` in the structuredOutput block) so aborts are treated as soft/user cancellations and not logged as fatal.

packages/typescript/ai/src/activities/generateSpeech/index.ts (1)
146-171: Inconsistent provider-name resolution vs. peer activities.

Two small nits worth consolidating:

- Lines 151-154: `TTSAdapter`/`BaseTTSAdapter` don't declare a `provider` field, only `name`. The fallback chain against `{ provider?: string }` is reaching for a property that doesn't exist on the type, and it makes the logger's `provider` label capable of diverging from the `aiEventClient.emit` event's `provider: adapter.name` (lines 158, 180). Peer activities (`generateImage`, `summarize`) just use `adapter.name`.
- Line 146 destructures `debug: _debug` (signaling unused) but line 150 then reads `options.debug`; pick one.

♻️ Proposed simplification

```diff
-  const { adapter, stream: _stream, debug: _debug, ...rest } = options
+  const { adapter, stream: _stream, debug, ...rest } = options
   const model = adapter.model
   const requestId = createId('speech')
   const startTime = Date.now()
-  const logger: InternalLogger = resolveDebugOption(options.debug)
-  const providerName =
-    (adapter as { name?: string; provider?: string }).provider ??
-    (adapter as { name?: string }).name ??
-    'unknown'
+  const logger: InternalLogger = resolveDebugOption(debug)
@@
-  logger.request(`activity=generateSpeech provider=${providerName}`, {
-    provider: providerName,
+  logger.request(`activity=generateSpeech provider=${adapter.name}`, {
+    provider: adapter.name,
     model,
   })
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/src/activities/generateSpeech/index.ts` around lines 146-171, the code inconsistently resolves provider and mixes use of `options.debug` vs the destructured `debug`; fix by (1) simplifying provider resolution to use `adapter.name` only (e.g., set `providerName = adapter.name ?? 'unknown'`) and updating the `aiEventClient.emit` call to use that same `providerName` so provider is consistent across events/logs, and (2) making the debug handling consistent by removing the unused `_debug` alias in the destructuring and using the destructured `debug` when calling `resolveDebugOption` (change `resolveDebugOption(options.debug)` to `resolveDebugOption(debug)`).

packages/typescript/ai/src/activities/generateVideo/index.ts (1)
253-307: Duplicate provider-name derivation; consider extracting a helper.

The exact same `logger + providerName` boilerplate is repeated in `runCreateVideoJob` (253-257) and `runStreamingVideoGeneration` (303-307), and the same pattern also appears in `generateTranscription` (and presumably other activities). A small helper like `resolveLoggerAndProvider(adapter, debug)` would remove the copy/paste and give a single place to evolve the fallback logic.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/src/activities/generateVideo/index.ts` around lines 253-307, the code duplicates logger and provider-name derivation in `runCreateVideoJob`, `runStreamingVideoGeneration`, and `generateTranscription`; extract a small helper (e.g. `resolveLoggerAndProvider(adapter, debug)`) that returns `{ logger: InternalLogger, providerName: string }` by encapsulating `resolveDebugOption(options.debug)` and the provider/name fallback logic (`(adapter as { provider?: string }).provider ?? (adapter as { name?: string }).name ?? 'unknown'`); then replace the duplicated blocks to call this helper and use the returned `logger` and `providerName` for logging and metadata.

packages/typescript/ai-openai/src/adapters/summarize.ts (1)
103-107: Nit: `stream=true` only present in meta, not the message string.

For consistency with `text.ts` (`…stream=true` embedded in the request message), and so that console output alone reveals whether the call is streaming, include it in the formatted message string.

Proposed tweak

```diff
-  logger.request(`activity=summarize provider=openai`, {
+  logger.request(`activity=summarize provider=openai stream=true`, {
     provider: 'openai',
     model: options.model,
     stream: true,
   })
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/src/adapters/summarize.ts` around lines 103-107, the `logger.request` call currently logs `stream: true` only in the metadata object but not in the formatted message string; update the invocation that logs "activity=summarize provider=openai" to include "stream=true" in the message string (mirroring the pattern used in `text.ts`) so the console message itself indicates streaming, while leaving the metadata object (provider, model, stream) intact.

packages/typescript/ai-grok/src/adapters/text.ts (1)
229-229: Per-chunk meta allocation on hot path.

``logger.provider(`provider=grok`, { chunk })`` runs for every streamed chunk. Even though `InternalLogger.emit` early-returns when the `provider` category is disabled, the `{ chunk }` meta object is allocated on every iteration regardless. For long streams this is measurable GC pressure in the default case (where `provider` is off).

Consider gating with `logger.isEnabled('provider')` on hot paths:

♻️ Suggested change

```diff
-  logger.provider(`provider=grok`, { chunk })
+  if (logger.isEnabled('provider')) {
+    logger.provider(`provider=grok`, { chunk })
+  }
```

The same concern applies to the equivalent line in `ai-groq/src/adapters/text.ts` at line 227.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-grok/src/adapters/text.ts` at line 229, the per-chunk logger call allocates a meta object for every streamed chunk even when the 'provider' category is disabled; wrap the call with a guard that checks `logger.isEnabled('provider')` before constructing the `{ chunk }` object and calling `logger.provider` so the meta allocation is avoided on the hot path; update the occurrence in `adapters/text.ts` and the analogous occurrence in `ai-groq/src/adapters/text.ts` to only call `logger.provider` when `logger.isEnabled('provider')` returns true.

testing/e2e/src/routes/api.debug-logging.ts (1)
28-29: Untyped cast of request body; consider minimal runtime validation.

`body?.debug as DebugOption` and `body?.userMessage ?? '...'` trust arbitrary JSON. Since this is a test-only route, risk is low, but a malformed `debug` (e.g., a string) would reach the library as-is and the failure mode would be opaque. A quick shape check keeps failures localized:

```ts
const rawDebug = body?.debug
const debug: DebugOption | undefined =
  typeof rawDebug === 'boolean' || (rawDebug && typeof rawDebug === 'object')
    ? (rawDebug as DebugOption)
    : undefined
const userMessage: string =
  typeof body?.userMessage === 'string'
    ? body.userMessage
    : '[debug-logging] hello'
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@testing/e2e/src/routes/api.debug-logging.ts` around lines 28-29, the code currently unsafely casts `body?.debug` to `DebugOption` and trusts `body?.userMessage`; replace the blind casts with runtime shape checks: read `rawDebug = body?.debug` and set `debug` to undefined unless `rawDebug` is a boolean or a non-null object (then cast to `DebugOption`), and set `userMessage` to `body?.userMessage` only if it is a string, otherwise falling back to '[debug-logging] hello'; update the `debug` and `userMessage` variables and remove the direct `as` cast so malformed JSON is handled locally.

packages/typescript/ai/src/activities/chat/middleware/compose.ts (1)
146-225: Hot-path chunk logging allocates per chunk × per middleware even when disabled.

The `onChunk` loop fires `this.logger.middleware(...)` up to twice per (middleware, chunk) pair, each time building a template-literal message and an object literal containing the chunk reference. `InternalLogger.emit` no-ops when the `middleware` category is off, but the message/meta construction happens unconditionally.

For a streaming chat with N middleware × M chunks this is O(N·M) wasted allocations in the default case. Consider guarding with `isEnabled`:

♻️ Suggested guard for the chunk-loop logs

```diff
-      for (const c of chunks) {
-        this.logger.middleware(
-          `hook=onChunk middleware=${mw.name ?? 'unnamed'} in=${c.type}`,
-          { middleware: mw.name ?? 'unnamed', hook: 'onChunk', in: c },
-        )
+      const logMw = this.logger.isEnabled('middleware')
+      for (const c of chunks) {
+        if (logMw) {
+          this.logger.middleware(
+            `hook=onChunk middleware=${mw.name ?? 'unnamed'} in=${c.type}`,
+            { middleware: mw.name ?? 'unnamed', hook: 'onChunk', in: c },
+          )
+        }
         const result = await mw.onChunk(ctx, c)
```

(and similarly wrap the three post-call `logger.middleware` branches)

A minor secondary note: the pre-call log fires even for pass-through results (`result === undefined`), while no post-call log fires there; the pre-log already records `in`, so this is asymmetric but harmless. Worth a short comment if you keep it.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/src/activities/chat/middleware/compose.ts` around lines 146-225, the current onChunk loop eagerly builds template-literal messages and meta objects for `this.logger.middleware` on every middleware×chunk even when the middleware logging category is disabled; wrap each logger call (the pre-call log and the three post-call logs inside the result branches) with a guard on the logger's enabled predicate (e.g. `this.logger.isEnabled('middleware')`) so the template strings and object literals are only constructed when logging is enabled; keep the existing `shouldSkipInstrumentation`/`aiEventClient.emit` logic unchanged.

packages/typescript/ai/tests/logger/resolve.test.ts (1)
2-2: Drop the unused `ConsoleLogger` import and its `void` escape hatch.

`ConsoleLogger` isn't referenced anywhere except the `void ConsoleLogger` statement at line 106, which is a workaround for the unused-import warning. The comment claims it's "covered indirectly by the last test", but the last test only asserts that `console.debug` is called; it doesn't verify an instance of `ConsoleLogger`. Removing the import is the cleanest fix; if you want an explicit check, consider exposing the underlying logger on `InternalLogger` (e.g., a getter) so a test can assert on it directly.

🧹 Proposed cleanup

```diff
 import { describe, expect, it, vi } from 'vitest'
-import { ConsoleLogger } from '../../src/logger/console-logger'
 import { InternalLogger } from '../../src/logger/internal-logger'
 import { resolveDebugOption } from '../../src/logger/resolve'
 import type { Logger } from '../../src/logger/types'
```

```diff
-// Keep ConsoleLogger import used: ensure the default is indeed a ConsoleLogger
-// (covered indirectly by the last test).
-void ConsoleLogger
```

Also applies to: 104-106

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/tests/logger/resolve.test.ts` at line 2, the import of `ConsoleLogger` is unused and only kept alive with a `void ConsoleLogger` escape at the end; remove the import and the `void` escape to clean up the test. Locate the import statement for `ConsoleLogger` and the trailing `void ConsoleLogger` reference (near the bottom of the file) and delete both; if you want an explicit assertion instead, expose the underlying logger from `InternalLogger` (e.g., a getter) and add a test that asserts on the concrete logger instance rather than keeping the unused import.

packages/typescript/ai/src/activities/chat/index.ts (2)
1481-1492: `as TextOptions<…>` cast weakens the required `logger` contract.

Per `packages/typescript/ai/src/types.ts:631-728`, `TextOptions.logger` is now required. The `as TextOptions<Record<string, any>, Record<string, any>>` casts on lines 1484 and 1536 will compile even if someone later forgets to include `logger` in the object literal. Since both sites already pass `logger` explicitly, this works today; but a satisfies-style check (e.g., explicitly typing the `params` binding) would surface a regression immediately rather than at runtime inside the adapter.

Also applies to: 1533-1544

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/src/activities/chat/index.ts` around lines 1481-1492, the inline cast on the params object weakens the required logger contract; instead of using `as TextOptions<...>` when constructing `TextEngine`, create an explicitly typed params binding (e.g., `const params: TextOptions<Record<string, any>, Record<string, any>> = { ...textOptions, model, logger }`) and pass that `params` variable into the `TextEngine` constructor (and do the same for the other site that builds params for `TextEngine`), which will ensure the compiler enforces the required `logger` property rather than relying on a cast.
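The difference between a cast and a checked binding can be sketched with a stripped-down `TextOptions` stand-in; the real type lives in `packages/typescript/ai/src/types.ts`, and the names below are illustrative:

```typescript
// Stand-in for the real TextOptions: `logger` is required.
type Logger = { request: (msg: string) => void }
type TextOptionsLike = { model: string; logger: Logger }

const logger: Logger = { request: () => {} }

// With an explicitly typed binding, omitting `logger` is a compile error,
// whereas an `as TextOptionsLike` cast would silently accept the omission.
const params: TextOptionsLike = { model: 'example-model', logger }

// `satisfies` checks the shape while keeping the inferred literal type.
const params2 = { model: 'example-model', logger } satisfies TextOptionsLike
```

Either form catches a dropped `logger` at compile time instead of at runtime inside the adapter.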
594-599: Hot-path allocation per chunk when `output` category is disabled.

`this.logger.output(...)` is invoked for every chunk yielded by middleware. Because `InternalLogger.emit` only checks `this.categories[category]` after entering the method, the template literal `` `type=${outputChunk.type}` `` and the `{ chunk: outputChunk }` meta object are built on every chunk even when the `output` category is off (which is the default when `debug` is `undefined`). For long streams this is a measurable per-chunk tax on the single hottest loop in the library.

Either gate the call with `this.logger.isEnabled('output')` here, or, better, have `InternalLogger.output/request/provider/...` short-circuit before the caller constructs the message. Wrapping callers with `isEnabled` is the cheapest fix; changing each category method to accept a `() => [string, meta?]` thunk or adding `isEnabled` gates at each call site are both viable.

♻️ Minimal fix at this call site

```diff
-    for (const outputChunk of outputChunks) {
-      this.logger.output(`type=${outputChunk.type}`, { chunk: outputChunk })
+    for (const outputChunk of outputChunks) {
+      if (this.logger.isEnabled('output')) {
+        this.logger.output(`type=${outputChunk.type}`, { chunk: outputChunk })
+      }
       yield outputChunk
       this.handleStreamChunk(outputChunk)
       this.middlewareCtx.chunkIndex++
     }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/src/activities/chat/index.ts` around lines 594-599, the hot path builds the log message and meta for every output chunk even when the 'output' logger category is disabled; wrap the logger call in a fast check such as `if (this.logger.isEnabled('output'))` before calling `this.logger.output(...)` inside the loop that iterates `outputChunks` (the block that yields `outputChunk`, calls `this.handleStreamChunk(outputChunk)` and increments `this.middlewareCtx.chunkIndex`) so the template literal and meta object are only constructed when logging is enabled.

packages/typescript/ai-openai/src/realtime/adapter.ts (1)
147-153: Consider distinct error messages per failure mode.

All four fatal paths in this file — data-channel error (Line 148), SDP establishment failure (Line 202), missing tool-call ids (Line 310), and autoplay failure (Line 394) — log the exact same `'openai.realtime fatal'` string. This makes log filtering/alerting harder since every cause collapses into one tag. Using distinct messages (e.g., `'openai.realtime datachannel error'`, `'openai.realtime sdp failed'`, `'openai.realtime tool_call missing ids'`) would improve observability at no runtime cost. Same concern applies to the ElevenLabs adapter.

Also applies to: 197-207, 310-316
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/src/realtime/adapter.ts` around lines 147 - 153, The fatal log string "openai.realtime fatal" is reused across multiple failure paths, making filtering hard; update each handler to use a distinct log/error tag: change the dataChannel.onerror block (dataChannel.onerror, emit('error')), the SDP establishment failure code path (where SDP negotiation fails), the tool-call IDs check (where missing tool-call ids are detected), and the autoplay failure path to use unique messages such as "openai.realtime datachannel error", "openai.realtime sdp failed", "openai.realtime tool_call missing ids", and "openai.realtime autoplay failed" respectively, and mirror the same change in the ElevenLabs adapter so logs and emitted errors include the new, specific strings while preserving existing error objects and emit('error') behavior.
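The `isEnabled` gate recommended for the chat hot path above can be sketched in isolation. The class below is a minimal stand-in for the PR's `InternalLogger` (the real one lives in `packages/typescript/ai/src/logger/internal-logger.ts` and differs in detail); the point is that the guard keeps the template literal and meta object from being allocated when the category is off:

```typescript
type DebugCategory = 'request' | 'provider' | 'output' | 'errors'

// Stand-in for the PR's InternalLogger, illustrative only.
class InternalLogger {
  constructor(private categories: Partial<Record<DebugCategory, boolean>>) {}

  isEnabled(category: DebugCategory): boolean {
    return this.categories[category] === true
  }

  output(message: string, meta?: unknown): void {
    if (!this.isEnabled('output')) return
    console.debug(`[ai:output] ${message}`, meta ?? '')
  }
}

// Hot path: only build the message and meta object when logging is enabled.
function* emitChunks(
  logger: InternalLogger,
  chunks: Array<{ type: string }>,
): Generator<{ type: string }> {
  for (const chunk of chunks) {
    if (logger.isEnabled('output')) {
      logger.output(`type=${chunk.type}`, { chunk })
    }
    yield chunk
  }
}
```

With the default (all categories off), the loop yields chunks without allocating any log payloads.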
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai-elevenlabs/src/realtime/types.ts`:
- Around line 28-34: The JSDoc for the debug option incorrectly mentions a
non-existent `response` category; update the comment above the `debug?:
DebugOption` declaration to list the actual DebugCategories (for example
`request`, `provider`, `output`, `errors`) or refer to
`DebugConfig`/`DebugCategories` for the canonical set so it matches the types
defined in packages/typescript/ai/src/logger/types.ts; ensure the comment text
mentions `DebugConfig` and `DebugOption` and uses the real category names
(`request`, `provider`, `output`, `middleware`, `tools`, `agentLoop`, `config`,
`errors`) or a short example subset (e.g., `request`, `provider`, `output`,
`errors`) for accuracy.
- Around line 20-21: The import for DebugOption is not at the top of the module;
move the line "import type { DebugOption } from '@tanstack/ai'" to the top of
the file before any other statements so it complies with the import/first ESLint
rule and project linting; ensure the moved import remains a type-only import and
that any references to DebugOption in this file (types.ts) still resolve.
In `@packages/typescript/ai-grok/src/adapters/image.ts`:
- Around line 59-64: The call unconditionally uses logger from
ImageGenerationOptions which can be undefined when
GrokImageAdapter.generateImages(...) is invoked directly; change the code to
ensure a fallback logger by calling resolveDebugOption(undefined) when logger is
missing (import resolveDebugOption from `@tanstack/ai/adapter-internals`) and use
that resolved logger for the logger.request call; update the destructuring/usage
in GrokImageAdapter.generateImages (and mirror the same pattern in other
provider adapters that destructure logger) so logger = logger ??
resolveDebugOption(undefined) before invoking logger.request.
In `@packages/typescript/ai-openai/src/adapters/text.ts`:
- Around line 283-285: The current logger.provider call forwards the entire raw
chunk (variable chunk) which may leak sensitive content and allocates an object
every iteration; update the code around the logger.provider call in
packages/typescript/ai-openai/src/adapters/text.ts to first check
logger.isEnabled('provider') before constructing heavy payloads, and when
logging by default pass a minimal safe meta object (e.g., { type: chunk.type,
output_index: chunk.output_index, item_id: chunk.item_id }) instead of { chunk
}, and only include the full chunk when an explicit opt-in flag is set or when
logger.isEnabled('provider_full') (or similar) returns true so you avoid
allocations and sensitive data exposure in hot paths; reference the
logger.provider call and the chunk variable when making these changes.
In `@packages/typescript/ai-openai/src/realtime/adapter.ts`:
- Around line 393-398: The audio autoplay failure is an expected, non-fatal
condition; replace the current logger.errors call in the
audioElement.play().catch(...) block with a warn-level call by adding a
warn-tier method to InternalLogger (in
packages/typescript/ai/src/logger/internal-logger.ts) and then calling
logger.warn in packages/typescript/ai-openai/src/realtime/adapter.ts where
logger.errors is used; ensure the new InternalLogger.warn maps to the
appropriate warn-level provider so this case does not trigger error-alert
pipelines.
In `@packages/typescript/ai-openai/src/realtime/types.ts`:
- Around line 67-73: Update the JSDoc on the debug?: DebugOption field to list
the actual supported categories (request, provider, output, middleware, tools,
agentLoop, config, errors) instead of the nonexistent "response" category, and
remove or correct the {`@link` DebugConfig} reference (either import the correct
symbol or change the link to the actual exported type used in this file, e.g.,
DebugOption) so the doc link resolves; ensure the JSDoc text and link match the
DebugOption/DebugConfig type names used in this module.
In `@packages/typescript/ai-openrouter/src/adapters/summarize.ts`:
- Around line 115-139: The catch in summarizeStream around yield*
this.textAdapter.chatStream(...) rarely fires because chatStream yields
RUN_ERROR chunks instead of throwing; update summarizeStream to inspect each
yielded chunk from this.textAdapter.chatStream (check for a RUN_ERROR sentinel)
and when encountered call logger.errors('openrouter.summarize fatal', { error:
chunk.error, source: 'openrouter.summarize' }) and rethrow or yield an error to
preserve parity with summarize; also update the logger.request call (before
calling chatStream) to include model=${options.model} in the request message
string so it matches the other adapters (e.g., "activity=summarize
provider=openrouter model=...") while keeping the existing meta object.
- Around line 62-107: The handler double-logs stream errors:
textAdapter.chatStream already logs RUN_ERROR events but this code rethrows
those chunks (chunk.type === 'RUN_ERROR') which is then caught and logged as
logger.errors('openrouter.summarize fatal'); change the RUN_ERROR branch in the
summarize loop to treat it as an in-band stream error (record chunk.error to
usage/log context and break/return) instead of throwing, and adjust the outer
catch to only log unexpected/transport exceptions; also update the initial
logger.request call to include the model (use model or options.model) as other
adapters do.
In `@packages/typescript/ai/src/activities/generateVideo/index.ts`:
- Around line 309-321: Swap the order so the request log is emitted before the
run-start yield in the streaming path: move the logger.request(...) call to
precede the yield { type: 'RUN_STARTED', runId, timestamp: Date.now() } in the
generateVideo streaming branch (look for the yield with type 'RUN_STARTED' and
the logger.request call that uses providerName and model) so that the "request"
log is the first observable signal to consumers.
In `@packages/typescript/ai/src/logger/internal-logger.ts`:
- Around line 20-85: InternalLogger currently only calls logger.debug or
logger.error, so warn/info are never used; update the logging surface to
actually route warn/info: change emit in InternalLogger to accept levels 'debug'
| 'info' | 'warn' | 'error' and call logger.info/logger.warn appropriately, add
convenience methods (e.g., warnings(...) and info(...)) or map specific
categories (e.g., a new 'warnings' category) to 'warn' and any semantic info
categories to 'info', and update ResolvedCategories usage so categories that
should be warn/info are enabled and routed; adjust callers (e.g., errors(),
provider(), and other category methods) to use the new levels where appropriate
so custom Logger implementations are exercised.
In `@packages/typescript/ai/src/types.ts`:
- Around line 716-722: The public option types (TextOptions,
SummarizationOptions, ImageGenerationOptions, VideoGenerationOptions,
TTSOptions, TranscriptionOptions) currently require logger: InternalLogger which
is a breaking change; change each declaration to logger?: InternalLogger and
update adapter call sites to use a fallback no-op logger (e.g., use
options.logger ?? <no-op InternalLogger>) before any SDK calls, and add or reuse
a single helper like getNoopInternalLogger()/NOOP_INTERNAL_LOGGER referenced by
the InternalLogger type to centralize the default; alternatively, if you
intentionally want a breaking change, add a major changeset documenting the
migration for adapter authors instead of making logger optional.
In `@packages/typescript/ai/tests/debug-logging-activities.test.ts`:
- Around line 212-238: Rename or split the test so it accurately covers both
branches of resolveDebugOption: keep this test (or rename it) to assert behavior
when debug is an object with all flags false (as exercised by the current call
to summarize with debug: { errors:false, provider:false, output:false,
request:false }) and add a separate test that passes debug: false (literal
boolean) to verify the boolean branch; reference the summarize call in the test
and the resolveDebugOption logic to ensure each case is covered and assertions
on logger.debug/logger.error remain appropriate for each branch.
In `@packages/typescript/ai/tests/debug-logging-chat.test.ts`:
- Around line 22-31: The async generator in createFailingMockAdapter
intentionally throws before yielding and triggers the linter rule require-yield;
either suppress the rule for that local function or replace the generator with
an iterator whose next() immediately throws. Concretely: in
createFailingMockAdapter, add an inline lint suppression (e.g., an
eslint-disable-next-line require-yield) immediately above the async function*
used for chatStreamFn, or implement chatStreamFn as an object with an async
next() that throws the Error(message) to preserve the throw-on-first-next
behavior; reference createFailingMockAdapter and the chatStreamFn generator to
locate the change.
- Around line 2-5: Reorder the import statements so value imports come before
type-only imports to satisfy ESLint import/order: move the "./test-utils" import
(collectChunks, createMockAdapter, ev) to appear before the type import of
"Logger" (from ../src/logger/types); ensure the imports for chat, StreamChunk,
and the reordered lines remain unchanged except for their order.
In `@packages/typescript/ai/tests/logger/console-logger.test.ts`:
- Around line 48-54: The test uses an inline import type in the variable
declaration which violates the consistent-type-imports rule; replace that inline
import with a top-level type import (e.g., add "import type { Logger } from
'../../src/logger/types'" at the top of the test file) and then change the
declaration to "const logger: Logger = new ConsoleLogger()" so the ConsoleLogger
and Logger references remain but the type import is a proper top-level type-only
import.
In `@testing/e2e/src/routes/api.debug-logging.ts`:
- Around line 41-50: The fallback branch currently sets resolvedDebug = {
logger: capturingLogger } which overrides the library's undefined → errors-only
default; change the fallback to leave debug undefined so the library can apply
its own defaults (e.g., set resolvedDebug = undefined or remove the else
branch), or if you intentionally want to force all categories, add a code
comment documenting that behavior; refer to the variables debug, resolvedDebug
and capturingLogger when making the change.
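The warn/info routing suggested for `internal-logger.ts` above might look like the following; the `Logger` interface and method names here are illustrative stand-ins rather than the library's actual surface:

```typescript
type Level = 'debug' | 'info' | 'warn' | 'error'

// Stand-in for the custom Logger sink interface.
interface Logger {
  debug(msg: string, meta?: unknown): void
  info(msg: string, meta?: unknown): void
  warn(msg: string, meta?: unknown): void
  error(msg: string, meta?: unknown): void
}

class LevelRoutingLogger {
  constructor(private sink: Logger) {}

  // Dispatch to the matching sink method instead of collapsing everything
  // to debug/error, so custom Logger implementations see all four levels.
  emit(level: Level, message: string, meta?: unknown): void {
    this.sink[level](message, meta)
  }

  warn(message: string, meta?: unknown): void {
    this.emit('warn', message, meta)
  }

  errors(message: string, meta?: unknown): void {
    this.emit('error', message, meta)
  }
}
```

An expected non-fatal condition like blocked autoplay would then call `warn` instead of `errors`, keeping it out of error-alert pipelines.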
---
Outside diff comments:
In `@packages/typescript/ai-openai/src/adapters/tts.ts`:
- Around line 53-73: Move the validation calls so they execute inside the
existing try block that handles the TTS generation so validation errors are
caught and forwarded to logger.errors() like the other adapters; specifically,
keep construction of the audioOptions object (input/text, model, voice, speed,
response_format, modelOptions) where it is, then relocate the calls to
validateAudioInput(audioOptions), validateSpeed(audioOptions) and
validateInstructions(audioOptions) into the try block that follows
logger.request(...) so they run before the actual OpenAI call and any thrown
validation errors are handled by the catch/logger.errors flow.
In `@packages/typescript/ai/src/activities/generateTranscription/index.ts`:
- Around line 173-192: The event emissions use adapter.name directly while the
logger uses providerName (which falls back to provider/name/'unknown'); update
the aiEventClient.emit calls (the provider field in both
transcription:request:started and transcription:request:finished/error emits) to
use the computed providerName instead of adapter.name so events and logs are
consistent, and ensure any other uses of adapter.name in this activity are
replaced with providerName (or else remove the provider/name fallback and
related casts if you decide adapter.name is always present).
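The tts.ts suggestion above (run validation inside the existing try so its failures are forwarded to `logger.errors()` like provider failures) reduces to a small pattern. `validateAudioInput` and both callbacks below are hypothetical stand-ins for the adapter's real helpers:

```typescript
// Hypothetical validator standing in for validateAudioInput/validateSpeed/etc.
function validateAudioInput(input: string): void {
  if (input.length === 0) throw new Error('input must not be empty')
}

function generateSpeech(
  input: string,
  callProvider: (text: string) => Uint8Array,
  logErrors: (msg: string, meta: unknown) => void,
): Uint8Array {
  try {
    // Validation now runs inside the try, so its errors hit the catch below
    // and are logged the same way provider failures are.
    validateAudioInput(input)
    return callProvider(input)
  } catch (error) {
    logErrors('openai.tts fatal', { error })
    throw error
  }
}
```

Note the nitpick comments below argue the opposite placement for the image/video adapters; either way, the placement of the validation calls relative to the try block decides which errors reach the fatal log.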
---
Nitpick comments:
In `@packages/typescript/ai-gemini/src/adapters/image.ts`:
- Around line 94-120: Move the synchronous input validations out of the try
block so caller validation errors aren't logged as provider "fatal" failures:
call validatePrompt({ prompt, model }), validateImageSize(model, options.size)
and validateNumberOfImages(model, options.numberOfImages) before entering the
try that contains the provider calls (this.isGeminiImageModel /
generateWithGeminiApi path, buildImagenConfig, client.models.generateImages and
transformImagenResponse). Keep the try/catch solely around the async provider
interactions and retain logger.errors('gemini.generateImage fatal', ...) for
real runtime/provider errors.
In `@packages/typescript/ai-grok/src/adapters/text.ts`:
- Line 229: The per-chunk logger call allocates a meta object for every streamed
chunk via logger.provider(`provider=grok`, { chunk }) even when the 'provider'
category is disabled; wrap the call with a guard that checks
logger.isEnabled('provider') (or equivalent) before constructing the { chunk }
object and calling logger.provider so the meta allocation is avoided on the hot
path — update the occurrence in adapters/text.ts (symbol: logger.provider) and
the analogous occurrence in ai-groq/src/adapters/text.ts to only call
logger.provider when logger.isEnabled('provider') returns true.
In `@packages/typescript/ai-openai/src/adapters/image.ts`:
- Around line 73-94: The validation helpers validatePrompt, validateImageSize,
and validateNumberOfImages are inside the try block, causing user-input
validation failures to be logged as fatal in logger.errors; move these three
calls so they execute before the try (leave buildRequest,
this.buildRequest(options) and the SDK call this.client.images.generate({...})
inside the try), or narrow the catch to only wrap the provider call so that
logger.errors('openai.generateImage fatal', ...) only records actual
provider/transport errors and not caller validation errors.
In `@packages/typescript/ai-openai/src/adapters/summarize.ts`:
- Around line 103-107: The logger.request call in summarize.ts currently logs
stream: true only in the metadata object but not in the formatted message
string; update the logger.request invocation (the call that logs
"activity=summarize provider=openai") to include "stream=true" in that message
string (mirroring the pattern used in text.ts) so the console message itself
indicates streaming, while leaving the metadata object (provider, model, stream)
intact.
In `@packages/typescript/ai-openai/src/adapters/video.ts`:
- Around line 112-125: The validation errors from validateVideoSize and
validateVideoSeconds are being caught and logged as fatal by logger.errors
inside the try/catch around the OpenAI SDK call; move these validation calls
outside the try block (or at least before the SDK invocation) so client-side
validation errors are thrown before the try/catch, or alternatively only call
logger.errors after the SDK call fails (e.g., check the error origin and skip
logging for validation errors). Update the createVideoJob flow to call
validateVideoSize and validateVideoSeconds prior to entering the try that calls
the OpenAI/Sora SDK and ensure logger.errors('openai.createVideoJob fatal', ...)
is only used for SDK/runtime errors, not validation exceptions.
In `@packages/typescript/ai-openai/src/realtime/adapter.ts`:
- Around line 147-153: The fatal log string "openai.realtime fatal" is reused
across multiple failure paths, making filtering hard; update each handler to use
a distinct log/error tag: change the dataChannel.onerror block
(dataChannel.onerror, emit('error')), the SDP establishment failure code path
(where SDP negotiation fails), the tool-call IDs check (where missing tool-call
ids are detected), and the autoplay failure path to use unique messages such as
"openai.realtime datachannel error", "openai.realtime sdp failed",
"openai.realtime tool_call missing ids", and "openai.realtime autoplay failed"
respectively, and mirror the same change in the ElevenLabs adapter so logs and
emitted errors include the new, specific strings while preserving existing error
objects and emit('error') behavior.
In `@packages/typescript/ai-openrouter/src/adapters/image.ts`:
- Around line 68-79: The logger on ImageGenerationOptions is required so do not
add optional chaining; ensure the adapter continues to call logger.request(...)
directly (keep the current logger.request call), confirm ImageGenerationOptions
declares logger as a non-optional InternalLogger, and avoid changing
generateImages or generateImage/index.ts to accept a nullable logger—only add
defensive ?.request() if you intentionally want to support untyped/raw callers.
In `@packages/typescript/ai-openrouter/src/adapters/text.ts`:
- Around line 171-175: The catch in openrouter.chatStream currently
unconditionally logs fatal errors; change it to detect RequestAbortedError
(e.g., error.name === 'RequestAbortedError' or instanceof RequestAbortedError)
and avoid calling logger.errors for aborts (use a softer log or no log), only
call logger.errors for true failures; apply the same adjustment in the
structuredOutput catch (lines handling RequestAbortedError at the
structuredOutput block) so aborts are treated as soft/user cancellations and not
logged as fatal.
In `@packages/typescript/ai/src/activities/chat/adapter.ts`:
- Around line 20-27: The JSDoc on StructuredOutputOptions incorrectly instructs
per-chunk logging (logger.provider() "for each chunk received") even though
StructuredOutputOptions is used by non-streaming structuredOutput; update the
docs to only require logger.request() and logger.errors() (or note
logger.provider() may be used for the single response if intended), and move the
per-chunk guidance (call logger.provider() for each chunk) into the
streaming-specific docs for TextAdapter.chatStream / BaseTextAdapter.chatStream
so chunked logging guidance applies only to chatStream implementations.
In `@packages/typescript/ai/src/activities/chat/index.ts`:
- Around line 1481-1492: The inline cast on the params object weakens the
required logger contract; instead of using "as TextOptions<...>" when
constructing TextEngine, create an explicitly typed params binding (e.g., const
params: TextOptions<Record<string, any>, Record<string, any>> = {
...textOptions, model, logger }) and pass that params variable into the
TextEngine constructor (and do the same for the other site around the code that
builds params for TextEngine), which will ensure the compiler enforces the
required logger property rather than relying on a cast.
- Around line 594-599: The hot-path builds the log message and meta for every
output chunk even when the 'output' logger category is disabled; wrap the logger
call in a fast check such as if (this.logger.isEnabled('output')) before calling
this.logger.output(...) inside the loop that iterates outputChunks (the block
that yields outputChunk, calls this.handleStreamChunk(outputChunk) and
increments this.middlewareCtx.chunkIndex) so the template literal and meta
object are only constructed when logging is enabled.
In `@packages/typescript/ai/src/activities/chat/middleware/compose.ts`:
- Around line 146-225: The current onChunk loop eagerly builds template-literal
messages and meta objects for this.logger.middleware on every middleware×chunk
even when the middleware logging category is disabled; wrap each logger call
(the pre-call log and the three post-call logs inside the result branches) with
a guard like if (this.logger.middleware?.isEnabled?.()) (or the appropriate
logger-isEnabled predicate) so the template strings and object literals are only
constructed when logging is enabled; keep the existing
shouldSkipInstrumentation/aiEventClient.emit logic unchanged but avoid
allocation by checking the logger-enabled predicate before creating message/meta
for this.logger.middleware calls in the onChunk handler for this.middlewares.
In `@packages/typescript/ai/src/activities/generateSpeech/index.ts`:
- Around line 146-171: The code inconsistently resolves provider and mixes use
of options.debug vs the destructured debug; fix by (1) simplifying provider
resolution to use adapter.name only (e.g., set providerName = adapter.name ??
'unknown') and update the aiEventClient.emit call to use that same providerName
so provider is consistent across events/logs (references: providerName,
adapter.name, aiEventClient.emit), and (2) make the debug handling consistent by
removing the unused debug alias (_debug) in the destructuring and using the
destructured debug when calling resolveDebugOption (references: the const
destructure line and resolveDebugOption(options.debug) — change to
resolveDebugOption(debug)).
In `@packages/typescript/ai/src/activities/generateVideo/index.ts`:
- Around line 253-307: The code duplicates logger and provider-name derivation
in runCreateVideoJob, runStreamingVideoGeneration, and generateTranscription;
extract a small helper (e.g. resolveLoggerAndProvider(adapter, debug)) that
returns { logger: InternalLogger, providerName: string } by encapsulating
resolveDebugOption(options.debug) and the provider/name fallback logic ((adapter
as { provider?: string }).provider ?? (adapter as { name?: string }).name ??
'unknown'); then replace the duplicated blocks in runCreateVideoJob,
runStreamingVideoGeneration, and generateTranscription to call this helper and
use the returned logger and providerName for logging and metadata.
In `@packages/typescript/ai/tests/logger/resolve.test.ts`:
- Line 2: The import ConsoleLogger in resolve.test.ts is unused and only kept
with a void ConsoleLogger escape at the end; remove the import and the void
escape to clean up the test. Locate the import statement for ConsoleLogger and
the trailing `void ConsoleLogger` reference (around the bottom of the file) and
delete both; if you want an explicit assertion instead of removing it, expose
the underlying logger from InternalLogger (e.g., a getter) and add a test that
asserts the concrete logger instance rather than keeping the unused import.
In `@testing/e2e/src/routes/api.debug-logging.ts`:
- Around line 28-29: The code currently unsafely casts body?.debug to
DebugOption and trusts body?.userMessage; replace the blind casts by runtime
shape checks: read rawDebug = body?.debug and set debug to undefined unless
rawDebug is a boolean or a non-null object (then cast to DebugOption), and set
userMessage to body?.userMessage only if typeof === 'string' otherwise fall back
to '[debug-logging] hello'; update the variables DebugOption, debug, userMessage
and remove the direct "as" cast so malformed JSON is handled locally.
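The runtime shape check suggested for the e2e route could be sketched like this, with `DebugOption` as a stand-in for the `@tanstack/ai` type:

```typescript
// Stand-in for the library's DebugOption type.
type DebugOption = boolean | Record<string, unknown>

function parseDebug(raw: unknown): DebugOption | undefined {
  if (typeof raw === 'boolean') return raw
  if (typeof raw === 'object' && raw !== null) {
    return raw as Record<string, unknown>
  }
  // Malformed input: return undefined so the library applies its own defaults.
  return undefined
}

function parseUserMessage(raw: unknown): string {
  return typeof raw === 'string' ? raw : '[debug-logging] hello'
}
```

This replaces the blind `as` casts so malformed JSON bodies degrade to the library's defaults instead of propagating bad shapes.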
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e7bc57ea-07c9-4b6b-8c75-589228428745
📒 Files selected for processing (71)
- docs/advanced/debug-logging.md
- docs/advanced/extend-adapter.md
- docs/advanced/middleware.md
- docs/advanced/multimodal-content.md
- docs/advanced/observability.md
- docs/advanced/per-model-type-safety.md
- docs/advanced/runtime-adapter-switching.md
- docs/advanced/tree-shaking.md
- docs/config.json
- packages/typescript/ai-anthropic/src/adapters/summarize.ts
- packages/typescript/ai-anthropic/src/adapters/text.ts
- packages/typescript/ai-elevenlabs/src/realtime/adapter.ts
- packages/typescript/ai-elevenlabs/src/realtime/types.ts
- packages/typescript/ai-fal/src/adapters/image.ts
- packages/typescript/ai-fal/src/adapters/video.ts
- packages/typescript/ai-gemini/src/adapters/image.ts
- packages/typescript/ai-gemini/src/adapters/summarize.ts
- packages/typescript/ai-gemini/src/adapters/text.ts
- packages/typescript/ai-gemini/src/adapters/tts.ts
- packages/typescript/ai-gemini/tests/image-adapter.test.ts
- packages/typescript/ai-grok/src/adapters/image.ts
- packages/typescript/ai-grok/src/adapters/summarize.ts
- packages/typescript/ai-grok/src/adapters/text.ts
- packages/typescript/ai-grok/tests/grok-adapter.test.ts
- packages/typescript/ai-groq/src/adapters/text.ts
- packages/typescript/ai-groq/tests/groq-adapter.test.ts
- packages/typescript/ai-ollama/src/adapters/summarize.ts
- packages/typescript/ai-ollama/src/adapters/text.ts
- packages/typescript/ai-openai/src/adapters/image.ts
- packages/typescript/ai-openai/src/adapters/summarize.ts
- packages/typescript/ai-openai/src/adapters/text.ts
- packages/typescript/ai-openai/src/adapters/transcription.ts
- packages/typescript/ai-openai/src/adapters/tts.ts
- packages/typescript/ai-openai/src/adapters/video.ts
- packages/typescript/ai-openai/src/realtime/adapter.ts
- packages/typescript/ai-openai/src/realtime/types.ts
- packages/typescript/ai-openai/tests/image-adapter.test.ts
- packages/typescript/ai-openrouter/src/adapters/image.ts
- packages/typescript/ai-openrouter/src/adapters/summarize.ts
- packages/typescript/ai-openrouter/src/adapters/text.ts
- packages/typescript/ai-openrouter/tests/image-adapter.test.ts
- packages/typescript/ai-openrouter/tests/openrouter-adapter.test.ts
- packages/typescript/ai/package.json
- packages/typescript/ai/src/activities/chat/adapter.ts
- packages/typescript/ai/src/activities/chat/index.ts
- packages/typescript/ai/src/activities/chat/middleware/compose.ts
- packages/typescript/ai/src/activities/generateImage/index.ts
- packages/typescript/ai/src/activities/generateSpeech/index.ts
- packages/typescript/ai/src/activities/generateTranscription/index.ts
- packages/typescript/ai/src/activities/generateVideo/index.ts
- packages/typescript/ai/src/activities/summarize/index.ts
- packages/typescript/ai/src/adapter-internals.ts
- packages/typescript/ai/src/index.ts
- packages/typescript/ai/src/logger/console-logger.ts
- packages/typescript/ai/src/logger/index.ts
- packages/typescript/ai/src/logger/internal-logger.ts
- packages/typescript/ai/src/logger/resolve.ts
- packages/typescript/ai/src/logger/types.ts
- packages/typescript/ai/src/types.ts
- packages/typescript/ai/tests/debug-logging-activities.test.ts
- packages/typescript/ai/tests/debug-logging-chat.test.ts
- packages/typescript/ai/tests/logger/console-logger.test.ts
- packages/typescript/ai/tests/logger/internal-logger.test.ts
- packages/typescript/ai/tests/logger/resolve.test.ts
- packages/typescript/ai/tests/logger/types.test.ts
- packages/typescript/ai/tests/stream-generation.test.ts
- packages/typescript/ai/vite.config.ts
- testing/e2e/fixtures/debug-logging/basic.json
- testing/e2e/src/routeTree.gen.ts
- testing/e2e/src/routes/api.debug-logging.ts
- testing/e2e/tests/debug-logging.spec.ts
```ts
/**
 * Enable debug logging for this adapter.
 *
 * - `true` enables all categories (`request`, `response`, `provider`, `errors`).
 * - A {@link DebugConfig} object selects categories and/or a custom sink.
 */
debug?: DebugOption
```
JSDoc lists a non-existent category.
Per DebugCategories in packages/typescript/ai/src/logger/types.ts, the categories are request, provider, output, middleware, tools, agentLoop, config, errors — there is no response category. Update the doc to reference the real set (e.g., request, provider, output, errors) for accuracy.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-elevenlabs/src/realtime/types.ts` around lines 28 -
34, The JSDoc for the debug option incorrectly mentions a non-existent
`response` category; update the comment above the `debug?: DebugOption`
declaration to list the actual DebugCategories (for example `request`,
`provider`, `output`, `errors`) or refer to `DebugConfig`/`DebugCategories` for
the canonical set so it matches the types defined in
packages/typescript/ai/src/logger/types.ts; ensure the comment text mentions
`DebugConfig` and `DebugOption` and uses the real category names (`request`,
`provider`, `output`, `middleware`, `tools`, `agentLoop`, `config`, `errors`) or
a short example subset (e.g., `request`, `provider`, `output`, `errors`) for
accuracy.
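Assuming the category union from `logger/types.ts` named in the comment, one possible corrected JSDoc (the types below are stand-ins for the library's exports) would be:

```typescript
// Stand-ins mirroring the categories listed in the review comment.
type DebugCategories =
  | 'request' | 'provider' | 'output' | 'middleware'
  | 'tools' | 'agentLoop' | 'config' | 'errors'
type DebugOption = boolean | Partial<Record<DebugCategories, boolean>>

interface RealtimeOptions {
  /**
   * Enable debug logging for this adapter.
   *
   * - `true` enables all categories (`request`, `provider`, `output`, `errors`, ...).
   * - A DebugConfig object selects individual categories and/or a custom sink.
   */
  debug?: DebugOption
}
```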
```ts
const { model, prompt, numberOfImages, size, logger } = options

// Validate inputs
validatePrompt({ prompt, model })
validateImageSize(model, size)
validateNumberOfImages(model, numberOfImages)
logger.request(`activity=generateImage provider=grok model=${this.model}`, {
  provider: 'grok',
  model: this.model,
})
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Check the optionality of `logger` in ImageGenerationOptions / TTSOptions / etc.
rg -nP -C3 'interface\s+(ImageGenerationOptions|TTSOptions|TranscriptionOptions|VideoGenerationOptions|SummarizeOptions)\b' packages/typescript/ai/src --type=ts
# Find every adapter method that destructures logger from options
rg -nP -C1 'const\s*\{[^}]*\blogger\b[^}]*\}\s*=\s*options' --type=ts
```

Repository: TanStack/ai
Length of output: 10284

🏁 Script executed:

```bash
# Get full ImageGenerationOptions interface definition with line numbers
sed -n '1033,1070p' packages/typescript/ai/src/types.ts
```

Repository: TanStack/ai
Length of output: 1159

🏁 Script executed:

```bash
# Get full TTSOptions interface definition
sed -n '1165,1200p' packages/typescript/ai/src/types.ts
```

Repository: TanStack/ai
Length of output: 1192

🏁 Script executed:

```bash
# Get full grok image.ts context around the logger usage
sed -n '55,75p' packages/typescript/ai-grok/src/adapters/image.ts
```

Repository: TanStack/ai
Length of output: 704

🏁 Script executed:

```bash
# Check how resolveDebugOption is implemented and used
rg -n 'resolveDebugOption' packages/typescript/ai/src --type=ts -A 3
```

Repository: TanStack/ai
Length of output: 7754

🏁 Script executed:

```bash
# Check if any adapter provides a fallback for logger
rg -n 'logger\s*\?\?|logger.*??.*resolveDebugOption' packages/typescript/ai-*/src/adapters --type=ts
```

Repository: TanStack/ai
Length of output: 37

🏁 Script executed:

```bash
# Check if GrokImageAdapter is exported publicly
rg -n 'export.*GrokImageAdapter' packages/typescript/ai-grok/src --type=ts
```

Repository: TanStack/ai
Length of output: 138

🏁 Script executed:

```bash
# Check the package exports
cat packages/typescript/ai-grok/package.json | jq '.exports'
```

Repository: TanStack/ai
Length of output: 145

🏁 Script executed:

```bash
# See if there's documentation about using adapters directly
find packages/typescript/ai-grok/src -name '*.ts' -exec grep -l 'directly\|instantiate' {} \;
```

Repository: TanStack/ai
Length of output: 108

🏁 Script executed:

```bash
# Check the main export file
cat packages/typescript/ai-grok/src/index.ts
```

Repository: TanStack/ai
Length of output: 1582

🏁 Script executed:

```bash
# Look for any exports or re-exports of adapters
rg -n 'export.*Adapter|from.*adapters' packages/typescript/ai-grok/src/index.ts
```

Repository: TanStack/ai
Length of output: 144
Guard against missing logger when adapter is called directly.
logger is declared as required in ImageGenerationOptions but pulled off options and invoked unconditionally at line 61. GrokImageAdapter is publicly exported, so if a consumer uses GrokImageAdapter.generateImages(...) directly rather than through the generateImage activity (which injects an InternalLogger via resolveDebugOption), logger will be undefined and throws a TypeError. Add a fallback to resolveDebugOption(undefined) here.
🛠️ Proposed fix
- const { model, prompt, numberOfImages, size, logger } = options
+ const { model, prompt, numberOfImages, size, logger: providedLogger } = options
+ const logger = providedLogger ?? resolveDebugOption(undefined)

(import `resolveDebugOption` from `@tanstack/ai/adapter-internals`)
Same concern applies symmetrically to all other provider adapters that destructure logger from options in this PR. A more centralized fix would default logger in the base adapter class or make it non-optional only in the activity entry points, not the options type.
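To make the suggested pattern concrete, here is a minimal self-contained sketch of the fallback. The `resolveDebugOption` stand-in below is hypothetical and only mimics the PR's described behavior (a default logger when no debug option is supplied); the real helper lives in `@tanstack/ai/adapter-internals`.

```typescript
// Hypothetical stand-in for resolveDebugOption(undefined); the real
// implementation returns an InternalLogger with errors-on defaults.
interface RequestLogger {
  request(message: string, meta?: Record<string, unknown>): void
}

function resolveDebugOption(_debug: undefined): RequestLogger {
  // Non-error categories are no-ops by default in this sketch.
  return { request: () => {} }
}

// Adapter-style method that tolerates a missing logger when called
// directly, outside an activity entry point.
function generateImages(options: {
  prompt: string
  logger?: RequestLogger
}): string {
  const { prompt, logger: providedLogger } = options
  const logger = providedLogger ?? resolveDebugOption(undefined)
  logger.request(`activity=generateImage promptLength=${prompt.length}`)
  return `generated:${prompt}`
}
```

With the fallback in place, `generateImages({ prompt: 'cat' })` no longer throws a `TypeError` when no activity has injected a logger.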
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

const { model, prompt, numberOfImages, size, logger } = options
// Validate inputs
validatePrompt({ prompt, model })
validateImageSize(model, size)
validateNumberOfImages(model, numberOfImages)
logger.request(`activity=generateImage provider=grok model=${this.model}`, {
  provider: 'grok',
  model: this.model,
})

After:

const { model, prompt, numberOfImages, size, logger: providedLogger } = options
const logger = providedLogger ?? resolveDebugOption(undefined)
logger.request(`activity=generateImage provider=grok model=${this.model}`, {
  provider: 'grok',
  model: this.model,
})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-grok/src/adapters/image.ts` around lines 59 - 64, The
call unconditionally uses logger from ImageGenerationOptions which can be
undefined when GrokImageAdapter.generateImages(...) is invoked directly; change
the code to ensure a fallback logger by calling resolveDebugOption(undefined)
when logger is missing (import resolveDebugOption from
`@tanstack/ai/adapter-internals`) and use that resolved logger for the
logger.request call; update the destructuring/usage in
GrokImageAdapter.generateImages (and mirror the same pattern in other provider
adapters that destructure logger) so logger = logger ??
resolveDebugOption(undefined) before invoking logger.request.
logger.provider(`provider=openai type=${chunk.type ?? '<unknown>'}`, {
  chunk,
})
Logging the full raw chunk may leak user content and is heavy on hot paths.
logger.provider(..., { chunk }) forwards the complete provider stream event on every iteration. For OpenAI Responses streams this includes prompt text, tool arguments, reasoning, etc. — potentially sensitive content that ends up in any custom Logger (pino, files, remote sinks). It also allocates a wrapper object per chunk even when the provider category is disabled, since the check happens inside InternalLogger.emit.
Consider either:
- Gating the meta construction with `logger.isEnabled('provider')` before building `{ chunk }`, and/or
- Narrowing the payload to non-sensitive fields (e.g., `type`, `output_index`, `item_id`) by default and offering a full-chunk dump as an opt-in.
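A hedged sketch of that gating pattern follows. The `isEnabled` method and category names are assumptions modeled on the review's suggestion, not the actual `InternalLogger` API:

```typescript
// Assumed shape: a logger that exposes an isEnabled check per category.
type Category = 'provider' | 'errors'

class RecordingLogger {
  calls: Array<{ message: string; meta?: Record<string, unknown> }> = []
  constructor(private enabled: ReadonlySet<Category>) {}
  isEnabled(category: Category): boolean {
    return this.enabled.has(category)
  }
  provider(message: string, meta?: Record<string, unknown>): void {
    if (this.isEnabled('provider')) this.calls.push({ message, meta })
  }
}

interface ProviderChunk {
  type: string
  output_index?: number
  item_id?: string
  text?: string // sensitive payload we do not log by default
}

function logChunk(logger: RecordingLogger, chunk: ProviderChunk): void {
  // Skip meta allocation entirely when the category is off (hot path).
  if (!logger.isEnabled('provider')) return
  // Narrowed, non-sensitive payload instead of the full raw chunk.
  logger.provider(`type=${chunk.type}`, {
    type: chunk.type,
    output_index: chunk.output_index,
    item_id: chunk.item_id,
  })
}
```

The early return means a disabled `provider` category costs one boolean check per chunk rather than an object allocation plus a discarded emit.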
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-openai/src/adapters/text.ts` around lines 283 - 285,
The current logger.provider call forwards the entire raw chunk (variable chunk)
which may leak sensitive content and allocates an object every iteration; update
the code around the logger.provider call in
packages/typescript/ai-openai/src/adapters/text.ts to first check
logger.isEnabled('provider') before constructing heavy payloads, and when
logging by default pass a minimal safe meta object (e.g., { type: chunk.type,
output_index: chunk.output_index, item_id: chunk.item_id }) instead of { chunk
}, and only include the full chunk when an explicit opt-in flag is set or when
logger.isEnabled('provider_full') (or similar) returns true so you avoid
allocations and sensitive data exposure in hot paths; reference the
logger.provider call and the chunk variable when making these changes.
audioElement.play().catch((e) => {
  console.warn('Audio autoplay failed:', e)
  logger.errors('openai.realtime audio autoplay failed', {
    error: e,
    source: 'openai.realtime',
  })
})
Audio autoplay failure shouldn't be logged at errors level.
Browser autoplay policies routinely block Audio.play() until a user gesture has occurred — this is an expected, recoverable condition (the surrounding .catch() even acknowledges this). Emitting it as logger.errors(...) will light up downstream error alerts and Sentry-style pipelines for something that's not a fault of the integration.
The root cause is that InternalLogger currently only exposes category methods that map to debug or error levels (see packages/typescript/ai/src/logger/internal-logger.ts), with no middle tier for warnings — even though the public Logger interface defines warn. Either add a warn-tier category method on InternalLogger or downgrade this call to use the provider category so it lands on debug instead.
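A minimal sketch of what such a warn tier could look like. The class and method names are illustrative assumptions; the real `InternalLogger` in this PR only maps categories to `debug` or `error`:

```typescript
// The public Logger shape per the PR description (subset shown).
interface Logger {
  debug(message: string, meta?: Record<string, unknown>): void
  warn(message: string, meta?: Record<string, unknown>): void
  error(message: string, meta?: Record<string, unknown>): void
}

// Assumed extension: a warnings() tier between debug and errors, for
// expected-but-noteworthy conditions such as browser autoplay blocks.
class InternalLoggerSketch {
  constructor(
    private sink: Logger,
    private errorsEnabled: boolean = true,
  ) {}
  errors(message: string, meta?: Record<string, unknown>): void {
    if (this.errorsEnabled) this.sink.error(message, meta)
  }
  warnings(message: string, meta?: Record<string, unknown>): void {
    // Routed to warn, not error, so alerting pipelines stay quiet.
    if (this.errorsEnabled) this.sink.warn(message, meta)
  }
}
```

The autoplay `.catch()` would then call `logger.warnings(...)` instead of `logger.errors(...)`.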
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-openai/src/realtime/adapter.ts` around lines 393 -
398, The audio autoplay failure is an expected, non-fatal condition; replace the
current logger.errors call in the audioElement.play().catch(...) block with a
warn-level call by adding a warn-tier method to InternalLogger (in
packages/typescript/ai/src/logger/internal-logger.ts) and then calling
logger.warn in packages/typescript/ai-openai/src/realtime/adapter.ts where
logger.errors is used; ensure the new InternalLogger.warn maps to the
appropriate warn-level provider so this case does not trigger error-alert
pipelines.
it('debug: false on non-chat activity silences errors too', async () => {
  const adapter = {
    kind: 'summarize' as const,
    name: 'mock',
    model: 'mock-model',
    summarize: vi.fn(async () => {
      throw new Error('silent boom')
    }),
  }

  await expect(
    summarize({
      adapter: adapter as any,
      text: 'x',
      debug: {
        logger: logger as unknown as Logger,
        errors: false,
        provider: false,
        output: false,
        request: false,
      },
    }),
  ).rejects.toThrow('silent boom')

  expect(logger.debug).not.toHaveBeenCalled()
  expect(logger.error).not.toHaveBeenCalled()
})
Test title doesn't match what's being exercised.
The test is named 'debug: false on non-chat activity silences errors too', but it actually passes a DebugConfig object with every category set to false (not debug: false). These are two distinct code paths in resolveDebugOption (boolean false vs an object with all flags false). Consider splitting into two tests, or renaming this one and adding a separate case that literally passes debug: false to cover the boolean branch.
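The distinction matters because the two inputs hit different branches. The following is an illustrative model of that resolution logic, with shapes inferred from the PR summary rather than copied from the real `resolveDebugOption`:

```typescript
// Assumed shapes: a boolean or per-category config, per the PR description.
interface DebugConfig {
  errors?: boolean
  request?: boolean
}
type DebugOption = boolean | DebugConfig | undefined

function resolveEnabled(debug: DebugOption): {
  errors: boolean
  request: boolean
} {
  if (debug === false) return { errors: false, request: false } // boolean branch
  if (debug === true) return { errors: true, request: true }
  // object/undefined branch: errors stay on unless explicitly silenced
  return { errors: debug?.errors ?? true, request: debug?.request ?? false }
}
```

`resolveEnabled(false)` and `resolveEnabled({ errors: false, request: false })` produce the same result through different code paths, which is exactly why the review wants one test per branch.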
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai/tests/debug-logging-activities.test.ts` around lines
212 - 238, Rename or split the test so it accurately covers both branches of
resolveDebugOption: keep this test (or rename it) to assert behavior when debug
is an object with all flags false (as exercised by the current call to summarize
with debug: { errors:false, provider:false, output:false, request:false }) and
add a separate test that passes debug: false (literal boolean) to verify the
boolean branch; reference the summarize call in the test and the
resolveDebugOption logic to ensure each case is covered and assertions on
logger.debug/logger.error remain appropriate for each branch.
import { chat } from '../src/activities/chat/index'
import type { Logger } from '../src/logger/types'
import type { StreamChunk } from '../src/types'
import { collectChunks, createMockAdapter, ev } from './test-utils'
Import order lint.
ESLint import/order wants ./test-utils (value import) before the type import of ../src/logger/types. Swap the order:
♻️ Suggested fix
import { describe, expect, it, vi } from 'vitest'
import { chat } from '../src/activities/chat/index'
-import type { Logger } from '../src/logger/types'
-import type { StreamChunk } from '../src/types'
import { collectChunks, createMockAdapter, ev } from './test-utils'
+import type { Logger } from '../src/logger/types'
+import type { StreamChunk } from '../src/types'

📝 Committable suggestion
Before:

import { chat } from '../src/activities/chat/index'
import type { Logger } from '../src/logger/types'
import type { StreamChunk } from '../src/types'
import { collectChunks, createMockAdapter, ev } from './test-utils'

After:

import { chat } from '../src/activities/chat/index'
import { collectChunks, createMockAdapter, ev } from './test-utils'
import type { Logger } from '../src/logger/types'
import type { StreamChunk } from '../src/types'
🧰 Tools
🪛 ESLint
[error] 5-5: ./test-utils import should occur before type import of ../src/logger/types
(import/order)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai/tests/debug-logging-chat.test.ts` around lines 2 - 5,
Reorder the import statements so value imports come before type-only imports to
satisfy ESLint import/order: move the "./test-utils" import (collectChunks,
createMockAdapter, ev) to appear before the type import of "Logger" (from
../src/logger/types); ensure the imports for chat, StreamChunk, and the
reordered lines remain unchanged except for their order.
function createFailingMockAdapter(
  message = 'mock adapter failure',
): ReturnType<typeof createMockAdapter> {
  return createMockAdapter({
    chatStreamFn: () =>
      (async function* (): AsyncIterable<StreamChunk> {
        throw new Error(message)
      })(),
  })
}
Silence require-yield for the intentional throw-only generator.
Static analysis flags this generator because it has no yield. The throw-before-yield shape is deliberate (exercises the error path when iteration begins), so suppress the lint locally or use an iterator that throws on first next():
♻️ Suggested fix
function createFailingMockAdapter(
message = 'mock adapter failure',
): ReturnType<typeof createMockAdapter> {
return createMockAdapter({
chatStreamFn: () =>
- (async function* (): AsyncIterable<StreamChunk> {
- throw new Error(message)
- })(),
+ // eslint-disable-next-line require-yield -- intentional: exercises error path at iteration start
+ (async function* (): AsyncIterable<StreamChunk> {
+ throw new Error(message)
+ })(),
})
}

🧰 Tools
🪛 ESLint
[error] 27-29: This generator function does not have 'yield'.
(require-yield)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai/tests/debug-logging-chat.test.ts` around lines 22 -
31, The async generator in createFailingMockAdapter intentionally throws before
yielding and triggers the linter rule require-yield; either suppress the rule
for that local function or replace the generator with an iterator whose next()
immediately throws. Concretely: in createFailingMockAdapter, add an inline lint
suppression (e.g., an eslint-disable-next-line require-yield) immediately above
the async function* used for chatStreamFn, or implement chatStreamFn as an
object with an async next() that throws the Error(message) to preserve the
throw-on-first-next behavior; reference createFailingMockAdapter and the
chatStreamFn generator to locate the change.
it('implements the Logger interface', () => {
  const logger: import('../../src/logger/types').Logger = new ConsoleLogger()
  expect(typeof logger.debug).toBe('function')
  expect(typeof logger.info).toBe('function')
  expect(typeof logger.warn).toBe('function')
  expect(typeof logger.error).toBe('function')
})
Replace inline import() type with a top-level import type.
ESLint is failing with @typescript-eslint/consistent-type-imports on line 49. Use a regular type import.
🔧 Proposed fix
import { afterEach, describe, expect, it, vi } from 'vitest'
import { ConsoleLogger } from '../../src/logger/console-logger'
+import type { Logger } from '../../src/logger/types'
@@
it('implements the Logger interface', () => {
- const logger: import('../../src/logger/types').Logger = new ConsoleLogger()
+ const logger: Logger = new ConsoleLogger()

🧰 Tools
🪛 ESLint
[error] 49-49: import() type annotations are forbidden.
(@typescript-eslint/consistent-type-imports)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai/tests/logger/console-logger.test.ts` around lines 48 -
54, The test uses an inline import type in the variable declaration which
violates the consistent-type-imports rule; replace that inline import with a
top-level type import (e.g., add "import type { Logger } from
'../../src/logger/types'" at the top of the test file) and then change the
declaration to "const logger: Logger = new ConsoleLogger()" so the ConsoleLogger
and Logger references remain but the type import is a proper top-level type-only
import.
- Remove unnecessary optional chains and nullish coalescing on required messages array across adapter log lines
- Remove unnecessary fallback on non-nullable chunk/event types
- Reorder imports in elevenlabs realtime types to satisfy import/first
- Delete unused packages/typescript/ai/src/logger/index.ts barrel (public surface is re-exported from src/index.ts)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/typescript/ai-ollama/src/adapters/text.ts (1)
144-178: ⚠️ Potential issue | 🟡 Minor

Include option mapping inside the logged failure path.

`mapCommonOptionsToOllama()` can still throw before the new `try` blocks, so those adapter failures bypass `logger.errors()` even though the PR intends errors to be logged unless silenced.

Proposed fix

  async *chatStream(options: TextOptions): AsyncIterable<StreamChunk> {
-   const mappedOptions = this.mapCommonOptionsToOllama(options)
    const { logger } = options
    try {
+     const mappedOptions = this.mapCommonOptionsToOllama(options)
      logger.request(
        `activity=chat provider=ollama model=${this.model} messages=${options.messages.length} tools=${options.tools?.length ?? 0} stream=true`,
        { provider: 'ollama', model: this.model },
@@
-   const mappedOptions = this.mapCommonOptionsToOllama(chatOptions)
    try {
+     const mappedOptions = this.mapCommonOptionsToOllama(chatOptions)
      logger.request(
        `activity=chat provider=ollama model=${this.model} messages=${chatOptions.messages.length} tools=${chatOptions.tools?.length ?? 0} stream=false`,
        { provider: 'ollama', model: this.model },

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-ollama/src/adapters/text.ts` around lines 144 - 178,
mapCommonOptionsToOllama() is called outside the try blocks so any throw
bypasses logger.errors; move the mapping call inside the corresponding try (or
wrap it in its own try/catch that calls logger.errors and rethrows) so failures
are logged. Specifically, for the streaming method that calls
mapCommonOptionsToOllama() before the try that logs provider errors and for
structuredOutput (which currently does const mappedOptions =
this.mapCommonOptionsToOllama(chatOptions) before its try), relocate those
mapCommonOptionsToOllama(...) calls into the start of the try block (or add an
immediate try/catch around each call) and ensure caught errors call
logger.errors with the error object and a clear source (e.g., source:
'mapCommonOptionsToOllama') before rethrowing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai-ollama/src/adapters/text.ts`:
- Around line 206-214: The catch block for structured output assumes the caught
value is an Error; update the handler in the catch of structuredOutput to safely
normalize unknown errors before logging and re-throwing: compute a safeMessage
(e.g., if error instanceof Error then error.message else String(error) or
JSON.stringify fallback), compute a safePayload for logger.errors (e.g., include
the original error when it's an Error, otherwise include a serialized value),
use those safe values in logger.errors('ollama.structuredOutput fatal', { error:
safePayload, source: 'ollama.structuredOutput' }), and throw new
Error(`Structured output generation failed: ${safeMessage}`) so no TypeError
occurs when non-Error values (null/undefined/primitives) are thrown.
---
Outside diff comments:
In `@packages/typescript/ai-ollama/src/adapters/text.ts`:
- Around line 144-178: mapCommonOptionsToOllama() is called outside the try
blocks so any throw bypasses logger.errors; move the mapping call inside the
corresponding try (or wrap it in its own try/catch that calls logger.errors and
rethrows) so failures are logged. Specifically, for the streaming method that
calls mapCommonOptionsToOllama() before the try that logs provider errors and
for structuredOutput (which currently does const mappedOptions =
this.mapCommonOptionsToOllama(chatOptions) before its try), relocate those
mapCommonOptionsToOllama(...) calls into the start of the try block (or add an
immediate try/catch around each call) and ensure caught errors call
logger.errors with the error object and a clear source (e.g., source:
'mapCommonOptionsToOllama') before rethrowing.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e46ad99d-aec9-4b65-ac3d-93cdad1798ae
📒 Files selected for processing (8)
packages/typescript/ai-anthropic/src/adapters/text.tspackages/typescript/ai-elevenlabs/src/realtime/types.tspackages/typescript/ai-gemini/src/adapters/text.tspackages/typescript/ai-grok/src/adapters/text.tspackages/typescript/ai-groq/src/adapters/text.tspackages/typescript/ai-ollama/src/adapters/text.tspackages/typescript/ai-openai/src/adapters/text.tspackages/typescript/ai-openrouter/src/adapters/text.ts
🚧 Files skipped from review as they are similar to previous changes (7)
- packages/typescript/ai-gemini/src/adapters/text.ts
- packages/typescript/ai-elevenlabs/src/realtime/types.ts
- packages/typescript/ai-openai/src/adapters/text.ts
- packages/typescript/ai-grok/src/adapters/text.ts
- packages/typescript/ai-anthropic/src/adapters/text.ts
- packages/typescript/ai-groq/src/adapters/text.ts
- packages/typescript/ai-openrouter/src/adapters/text.ts
} catch (error: unknown) {
  const err = error as Error
  logger.errors('ollama.structuredOutput fatal', {
    error,
    source: 'ollama.structuredOutput',
  })
  throw new Error(
    `Structured output generation failed: ${err.message || 'Unknown error occurred'}`,
  )
Avoid assuming caught values are Error instances.
error is unknown; if a custom Ollama client throws null, Line 213 will throw a TypeError while constructing the wrapper error.
Proposed fix
} catch (error: unknown) {
- const err = error as Error
logger.errors('ollama.structuredOutput fatal', {
error,
source: 'ollama.structuredOutput',
})
+ const message =
+ error instanceof Error && error.message
+ ? error.message
+ : typeof error === 'string' && error
+ ? error
+ : 'Unknown error occurred'
throw new Error(
- `Structured output generation failed: ${err.message || 'Unknown error occurred'}`,
+ `Structured output generation failed: ${message}`,
)
}

📝 Committable suggestion
Before:

} catch (error: unknown) {
  const err = error as Error
  logger.errors('ollama.structuredOutput fatal', {
    error,
    source: 'ollama.structuredOutput',
  })
  throw new Error(
    `Structured output generation failed: ${err.message || 'Unknown error occurred'}`,
  )

After:

} catch (error: unknown) {
  logger.errors('ollama.structuredOutput fatal', {
    error,
    source: 'ollama.structuredOutput',
  })
  const message =
    error instanceof Error && error.message
      ? error.message
      : typeof error === 'string' && error
        ? error
        : 'Unknown error occurred'
  throw new Error(
    `Structured output generation failed: ${message}`,
  )
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-ollama/src/adapters/text.ts` around lines 206 - 214,
The catch block for structured output assumes the caught value is an Error;
update the handler in the catch of structuredOutput to safely normalize unknown
errors before logging and re-throwing: compute a safeMessage (e.g., if error
instanceof Error then error.message else String(error) or JSON.stringify
fallback), compute a safePayload for logger.errors (e.g., include the original
error when it's an Error, otherwise include a serialized value), use those safe
values in logger.errors('ollama.structuredOutput fatal', { error: safePayload,
source: 'ollama.structuredOutput' }), and throw new Error(`Structured output
generation failed: ${safeMessage}`) so no TypeError occurs when non-Error values
(null/undefined/primitives) are thrown.
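A standalone sketch of the normalization the prompt describes. The helper names are illustrative, not part of the adapter:

```typescript
// Normalize an unknown caught value into a safe message string, so a
// thrown null, string, or plain object never causes a TypeError while
// constructing the wrapper error.
function toErrorMessage(error: unknown): string {
  if (error instanceof Error) return error.message || 'Unknown error occurred'
  if (typeof error === 'string' && error) return error
  try {
    // JSON.stringify returns undefined for undefined/functions/symbols.
    return JSON.stringify(error) ?? 'Unknown error occurred'
  } catch {
    return 'Unknown error occurred' // circular structures, BigInt, etc.
  }
}

function wrapFailure(error: unknown): Error {
  return new Error(
    `Structured output generation failed: ${toErrorMessage(error)}`,
  )
}
```

With this shape, `wrapFailure(null)` and `wrapFailure('timeout')` both produce a readable wrapper error instead of crashing inside the catch block.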
The devtools middleware is injected automatically by chat() and is already excluded from aiEventClient instrumentation via shouldSkipInstrumentation. Its per-hook logger.middleware / logger.config calls were still firing though, flooding the [tanstack-ai:middleware] category with internal plumbing. Move those calls inside the same skip gate so debug output only reflects user-provided middleware.
Debug logs surface raw provider chunks whose nested structures
(usage, output, reasoning, tools, response payloads) were being
truncated to [Object] / [Array] because Node's default console
formatting stops at depth 2. ConsoleLogger now lazily loads
node:util and runs meta through inspect({ depth: null }) on Node so
the entire structure renders. Browsers still get the raw object for
interactive DevTools inspection.
console.dir is the purpose-built native API for depth-unlimited object
inspection. It takes the same {depth, colors} options natively on Node
and is a no-op/interactive-tree in browsers, so we get the expanded
output in both environments without any dynamic import dance around
node:util.
♻️ Duplicate comments (1)
packages/typescript/ai/tests/logger/console-logger.test.ts (1)
1-3: ⚠️ Potential issue | 🟡 Minor

Move `Logger` to a top-level type import.

Line 58 still violates `@typescript-eslint/consistent-type-imports`; use a type-only import instead of the inline `import()` annotation.

🔧 Proposed fix

 import { afterEach, describe, expect, it, vi } from 'vitest'
 import { ConsoleLogger } from '../../src/logger/console-logger'
+import type { Logger } from '../../src/logger/types'
@@
   it('implements the Logger interface', () => {
-    const logger: import('../../src/logger/types').Logger = new ConsoleLogger()
+    const logger: Logger = new ConsoleLogger()

Also applies to: 57-58
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/tests/logger/console-logger.test.ts` around lines 1 - 3, The test imports ConsoleLogger but uses an inline import() annotation for the Logger type which violates consistent-type-imports; replace the inline type import with a top-level type-only import like "import type { Logger } from '...'" so that the Logger type is imported at the top (alongside the existing ConsoleLogger import) rather than using import() inside the test (affecting the Logger usage around the ConsoleLogger test cases).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@packages/typescript/ai/tests/logger/console-logger.test.ts`:
- Around line 1-3: The test imports ConsoleLogger but uses an inline import()
annotation for the Logger type which violates consistent-type-imports; replace
the inline type import with a top-level type-only import like "import type {
Logger } from '...'" so that the Logger type is imported at the top (alongside
the existing ConsoleLogger import) rather than using import() inside the test
(affecting the Logger usage around the ConsoleLogger test cases).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0119be30-104a-4337-b334-93923cf371ab
📒 Files selected for processing (2)
packages/typescript/ai/src/logger/console-logger.tspackages/typescript/ai/tests/logger/console-logger.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/typescript/ai/src/logger/console-logger.ts
Makes it trivial to visually scan dense streaming logs — each category tag is now bracketed by its own emoji on both sides, e.g. '📨 [tanstack-ai:output] 📨 ...'. Mapping: request=📤, provider=📥, output=📨, middleware=🧩, tools=🔧, agentLoop=🔁, config=⚙️, errors=❌. Tests that asserted on the raw tag via startsWith were switched to includes so they remain robust to prefix changes.
🧹 Nitpick comments (1)
packages/typescript/ai/tests/debug-logging-chat.test.ts (1)
252-273: `debugSpy` is created but never asserted.

Minor: `debugSpy` at line 254 suppresses console output but no assertion is made against it — unlike the prior test at lines 175-199. Either drop it or add a check that no `[tanstack-ai:` debug lines leaked through for the failing-adapter path to strengthen the case.
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/tests/debug-logging-chat.test.ts` around lines 252 - 273, The test creates debugSpy but never asserts on it; update the failing-adapter test that uses createFailingMockAdapter('boom-default') and collects the chat stream (chat, collectChunks, StreamChunk) to either remove debugSpy or add an assertion that no debug logs from the library leaked through (e.g., inspect debugSpy.mock.calls for absence of messages containing '[tanstack-ai:' similar to the existing errSpy assertion using logPrefixes). Restore/mockRestore for debugSpy as already present.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@packages/typescript/ai/tests/debug-logging-chat.test.ts`:
- Around line 252-273: The test creates debugSpy but never asserts on it; update
the failing-adapter test that uses createFailingMockAdapter('boom-default') and
collects the chat stream (chat, collectChunks, StreamChunk) to either remove
debugSpy or add an assertion that no debug logs from the library leaked through
(e.g., inspect debugSpy.mock.calls for absence of messages containing
'[tanstack-ai:' similar to the existing errSpy assertion using logPrefixes).
Restore/mockRestore for debugSpy as already present.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e472c024-1f9e-4322-9fb4-7aa3a255b460
📒 Files selected for processing (5)
packages/typescript/ai/src/logger/internal-logger.tspackages/typescript/ai/tests/debug-logging-activities.test.tspackages/typescript/ai/tests/debug-logging-chat.test.tspackages/typescript/ai/tests/logger/internal-logger.test.tspackages/typescript/ai/tests/logger/resolve.test.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/typescript/ai/tests/logger/resolve.test.ts
- packages/typescript/ai/tests/logger/internal-logger.test.ts
Library-level unit tests in the @tanstack/ai test suite already cover the debug logging behaviour (logger wiring, category resolution, console.dir formatting, emoji prefixing). An e2e round-trip added no independent coverage, so drop the spec, its API route, its fixture, and the now-stale routeTree entry.
…g-meadow

# Conflicts:
#	packages/typescript/ai-anthropic/src/adapters/summarize.ts
#	packages/typescript/ai-anthropic/src/adapters/text.ts
#	packages/typescript/ai-gemini/src/adapters/summarize.ts
#	packages/typescript/ai-gemini/src/adapters/text.ts
#	packages/typescript/ai-ollama/src/adapters/summarize.ts
#	packages/typescript/ai-ollama/src/adapters/text.ts
#	packages/typescript/ai-openai/src/adapters/text.ts
#	packages/typescript/ai-openrouter/src/adapters/summarize.ts
#	packages/typescript/ai/src/activities/chat/index.ts
#	packages/typescript/ai/src/activities/chat/middleware/compose.ts
#	packages/typescript/ai/src/types.ts
- `@tanstack/ai`: minor — new `debug` option on every activity, `Logger` / `ConsoleLogger` / `DebugOption` public surface, `@tanstack/ai/adapter-internals` subpath, emoji-prefixed category tags, `console.dir`-based meta formatting
- All provider adapters: patch — wire adapters through the `InternalLogger` so request/provider/errors flow through the structured logger; drop leftover `console.*` calls in adapter catch blocks
Summary
Adds a pluggable, category-toggleable debug logging system covering every activity (`chat`, `summarize`, `generateImage`, `generateSpeech`, `generateTranscription`, `generateVideo`) and all 25 provider adapters across 9 packages (OpenAI, Anthropic, Gemini, Grok, Groq, Ollama, OpenRouter, fal, ElevenLabs).

Public API
chat,summarize,generateImage,generateSpeech,generateTranscription,generateVideo) and all 25 provider adapters across 9 packages (OpenAI, Anthropic, Gemini, Grok, Groq, Ollama, OpenRouter, fal, ElevenLabs).Public API
Loggerinterface — four methods (debug/info/warn/error), each accepting(message: string, meta?: Record<string, unknown>).ConsoleLogger— default implementation routing to matchingconsole.*methods.DebugCategories/DebugConfig/DebugOptiontypes.debug?: DebugOption:Categories
request/provider/output/middleware/tools/agentLoop/config/errors. Each emits[tanstack-ai:<category>] ...with optional structured meta. Chat-only categories (middleware/tools/agentLoop/config) simply don't fire in non-chat activities. Errors log unconditionally unless explicitly silenced (debug: falseordebug: { errors: false }).Architecture
- `packages/typescript/ai/src/logger/` module housing public types, `ConsoleLogger`, and internal `InternalLogger` / `resolveDebugOption`.
- `@tanstack/ai/adapter-internals` subpath export gives provider packages access to `InternalLogger` and `resolveDebugOption` without leaking them publicly.
- `TextEngine`, `MiddlewareRunner`, and each activity entry point thread a resolved `InternalLogger` through the pipeline — no globals; concurrent calls are independent.
- `console.*` calls in adapter catch blocks (OpenAI text / realtime, ElevenLabs realtime) have been migrated to `logger.errors(...)`.
- `@tanstack/ai` unit tests: 683 passing (+17 new)
- `testing/e2e/tests/debug-logging.spec.ts` — 3 scenarios (`debug: true`, `debug: { middleware: false }`, `debug: false`)
- `docs/advanced/debug-logging.md` + cross-links from `observability.md` and `middleware.md` + nav entry
- `pnpm test:docs` clean (no broken links)
- No stray `console.*` calls remain in adapter source files
New Features
Documentation
Tests