feat: add Mistral provider adapter for TanStack AI #462

cstrnt wants to merge 2 commits into TanStack:main
Conversation
- Implemented MistralTextAdapter and related functions for chat completions.
- Defined Mistral-specific message types and metadata structures.
- Created model metadata for various Mistral models with pricing and capabilities.
- Added text provider options and validation for Mistral text models.
- Developed utility functions for Mistral client configuration and schema conversion.
- Implemented function tool conversion for Mistral-specific formats.
- Added tests for Mistral adapters, including event emissions and error handling.
- Configured TypeScript and Vite for the new package.
📝 Walkthrough

Adds a new package
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant Adapter as MistralTextAdapter
    participant SDK as @mistralai/mistralai
    participant Fetch as Fetch/SSE
    Client->>Adapter: generate()/chatStream(messages, options)
    Adapter->>SDK: prepare request (messages, model, responseFormat/tools)
    Adapter->>Fetch: open SSE stream (SDK stream URL / fetch)
    Fetch-->>Adapter: SSE chunks (partial text, tool-call fragments, usage)
    Adapter-->>Client: yield StreamChunk events (RUN_STARTED → TEXT_MESSAGE_* / TOOL_CALL_* → RUN_FINISHED/RUN_ERROR)
    Adapter->>SDK: (on completion) parse final message / usage / structured JSON
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 14
🧹 Nitpick comments (10)
packages/typescript/ai-mistral/src/utils/client.ts (1)
17-24: Treat `timeoutMs: 0` explicitly and forward all relevant SDK options.

`timeoutMs ? { timeoutMs }` drops the value when it's `0`. Although `0` is rarely a useful timeout, the check is truthiness-based and should be an explicit `!== undefined` to preserve user intent. The same nit applies to `serverURL` if callers pass an empty string (though that's less meaningful).

Proposed fix:
```diff
- const { apiKey, serverURL, timeoutMs } = config
- return new Mistral({
-   apiKey,
-   ...(serverURL ? { serverURL } : {}),
-   ...(timeoutMs ? { timeoutMs } : {}),
- })
+ const { apiKey, serverURL, timeoutMs } = config
+ return new Mistral({
+   apiKey,
+   ...(serverURL !== undefined ? { serverURL } : {}),
+   ...(timeoutMs !== undefined ? { timeoutMs } : {}),
+ })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/utils/client.ts` around lines 17 - 24: The createMistralClient function currently uses truthy checks which drop intentional values like timeoutMs: 0 and possibly empty serverURL; update createMistralClient to explicitly check for undefined (e.g., timeoutMs !== undefined and serverURL !== undefined) when spreading options so zero or empty-string values are preserved, and also ensure any other SDK options present on MistralClientConfig are forwarded into the new Mistral({...}) call (refer to createMistralClient and the Mistral constructor to add those properties).

packages/typescript/ai-mistral/src/utils/schema-converter.ts (1)
8-29: `transformNullsToUndefined` will mangle `Date`, `Map`, `Set`, class instances, etc.

Because `typeof` returns `'object'` for those, they'll be rebuilt as plain `Record<string, unknown>` via `Object.entries`. This is likely fine for JSON-parsed Mistral responses (which won't contain such values), but worth a brief code comment or a plain-object guard (`Object.getPrototypeOf(obj) === Object.prototype`) to avoid surprises if this util gets reused.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/utils/schema-converter.ts` around lines 8 - 29: transformNullsToUndefined currently treats any typeof === 'object' as a plain object and will rebuild Dates, Maps, Sets and class instances via Object.entries; add a plain-object guard (e.g. check Object.getPrototypeOf(obj) === Object.prototype or an isPlainObject helper) before iterating entries and only recurse/construct a Record for true plain objects, otherwise return the original object unchanged, and add a short comment explaining the guard to prevent mangling Date/Map/Set/class instances; reference the transformNullsToUndefined function and the block that uses Object.entries(obj as Record<string, unknown>) for where to apply the change.

packages/typescript/ai-mistral/src/model-meta.ts (2)
33-207: Minor: consider a single data-driven table instead of per-model constants.

Eleven nearly identical `const X = { ... } as const satisfies ModelMeta<…>` blocks, plus three parallel lists/maps (`MISTRAL_CHAT_MODELS`, `MistralModelInputModalitiesByName`, `MistralChatModelProviderOptionsByName`), make adding or removing a model a multi-place edit that's easy to get out of sync. Consider defining a single `MISTRAL_MODELS = { 'mistral-large-latest': {...}, ... } as const satisfies Record<string, ModelMeta<MistralTextProviderOptions>>` and deriving the union / modality map from it. Not blocking.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/model-meta.ts` around lines 33 - 207, Replace the many per-model constants (e.g., MISTRAL_LARGE_LATEST, MISTRAL_MEDIUM_LATEST, ... OPEN_MISTRAL_NEMO) and the parallel collections (MISTRAL_CHAT_MODELS, MistralModelInputModalitiesByName, MistralChatModelProviderOptionsByName) with a single data-driven map like MISTRAL_MODELS typed as const satisfies Record<string, ModelMeta<MistralTextProviderOptions>>; then derive the chat model list and the modality/provider-option maps from that single MISTRAL_MODELS object so adding/removing a model only requires updating one place and keeps types (and unions) inferred from the map.
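The single-map refactor suggested above can be sketched as follows. This is a hedged illustration, not the package's actual code: the meta shape (`contextWindow`, `inputModalities`) and the two-model subset are stand-ins for the real `ModelMeta<MistralTextProviderOptions>`.

```typescript
// One source-of-truth map; the model-name union and per-model lookups are
// derived from it instead of being maintained as three parallel structures.
interface ModelMeta {
  contextWindow: number
  inputModalities: ReadonlyArray<'text' | 'image'>
}

const MISTRAL_MODELS = {
  'mistral-large-latest': { contextWindow: 128_000, inputModalities: ['text'] },
  'pixtral-large-latest': { contextWindow: 128_000, inputModalities: ['text', 'image'] },
} as const satisfies Record<string, ModelMeta>

// Derived union: adding or removing a key above updates this automatically.
type MistralChatModel = keyof typeof MISTRAL_MODELS

// Derived list and lookup, replacing the hand-maintained parallel maps.
const MISTRAL_CHAT_MODELS = Object.keys(MISTRAL_MODELS) as Array<MistralChatModel>

function getInputModalities(model: MistralChatModel): ReadonlyArray<'text' | 'image'> {
  return MISTRAL_MODELS[model].inputModalities
}
```

With this shape, `as const satisfies` keeps literal inference (so the union type stays exact) while still checking every entry against `ModelMeta`.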
251-261: Per-model provider options don't actually differ.

`MistralChatModelProviderOptionsByName` maps every model to the same `MistralTextProviderOptions`, so `ResolveProviderOptions<TModel>` is effectively a constant. This means the model-specific provider-options machinery adds indirection without providing any per-model type safety. If Mistral truly has homogeneous request params today, consider collapsing `ResolveProviderOptions` down to just `MistralTextProviderOptions` (and dropping the map); you can reintroduce per-model specialization later if a model gains unique options.

As per coding guidelines: "Provide type safety per model by using model-specific provider options in adapter packages".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/model-meta.ts` around lines 251 - 261: The current MistralChatModelProviderOptionsByName maps every entry of MISTRAL_CHAT_MODELS to the same MistralTextProviderOptions, making ResolveProviderOptions<TModel> redundant; replace the indirection by changing ResolveProviderOptions to simply be MistralTextProviderOptions (and remove or deprecate MistralChatModelProviderOptionsByName), i.e., eliminate the map and export ResolveProviderOptions = MistralTextProviderOptions so the code uses a single concrete provider options type while preserving the ability to reintroduce per-model mappings later if needed (refer to the symbols MistralChatModelProviderOptionsByName, ResolveProviderOptions, MistralTextProviderOptions, and MISTRAL_CHAT_MODELS to locate and update the types).

packages/typescript/ai-mistral/src/text/text-provider-options.ts (1)
110-114: No-op validator.

`validateTextProviderOptions` is a stub, with the comment acknowledging Mistral handles validation server-side. That's fine, but consider at least validating obvious client-catchable issues to fail fast (e.g., `n && n < 1`, `temperature` range, `messages.length > 0`); these avoid a round-trip to the API for trivial mistakes. Deferring entirely to Mistral means users get slower, less actionable errors.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/text/text-provider-options.ts` around lines 110 - 114: The validator validateTextProviderOptions currently does nothing; add fast client-side checks inside that function (referencing validateTextProviderOptions and type InternalTextProviderOptions) to throw clear errors for obvious misconfigurations: ensure numeric counts like n are integers >= 1, temperature is within the allowed range (e.g. 0–1 or your provider's accepted bounds), messages (or prompt) is present and messages.length > 0, and any other obvious bounds (e.g. max_tokens/top_p if present) are within valid ranges; throw descriptive Error instances so callers fail fast instead of waiting for the Mistral API round-trip.

packages/typescript/ai-mistral/src/tools/function-tool.ts (1)
27-42: Remove the redundant `additionalProperties: false` assignment at line 32.

`makeMistralStructuredOutputCompatible` already sets `additionalProperties = false` inside the `if (result.type === 'object')` block (schema-converter.ts line 91). The assignment here is redundant for objects and incorrectly applied to non-object types, where it has no semantic meaning.

The type compatibility is fine: `FunctionParameters` is `{ [key: string]: unknown }`, which accepts the `Record<string, any>` return type of `makeMistralStructuredOutputCompatible` without requiring a cast.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/tools/function-tool.ts` around lines 27 - 42: Remove the redundant assignment to jsonSchema.additionalProperties = false in the function-tool return path: the helper makeMistralStructuredOutputCompatible already sets additionalProperties = false for object schemas (see makeMistralStructuredOutputCompatible), and setting it here can incorrectly apply to non-object types; simply delete the line that assigns additionalProperties and leave the returned object (type: 'function', function: { name: tool.name, description: tool.description, parameters: jsonSchema, strict: true }) as-is (it remains compatible with FunctionTool/FunctionParameters).

packages/typescript/ai-mistral/tests/mistral-adapter.test.ts (1)
378-406: Chunk shape is correct, but test setup is inconsistent.

The pre-wrapped `{ data: {...} }` shape at lines 378-406 is correct: the Mistral SDK wraps stream events with a `.data` property. However, this test manually wraps chunks while most others (lines 119, 222, 428) pass raw chunks to `setupMockStream`, which does the wrapping. Both approaches produce the correct shape, but for consistency, use `setupMockStream` here as well instead of manually creating `errorIterable` with pre-wrapped chunks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/tests/mistral-adapter.test.ts` around lines 378 - 406: The test manually constructs an async iterable named errorIterable with pre-wrapped streamChunks, which is inconsistent with other tests; replace the manual async iterable with a call to setupMockStream(streamChunks) so the helper performs the required .data wrapping and error behavior, updating references to errorIterable and reusing the existing setupMockStream helper used elsewhere in this test suite.

packages/typescript/ai-mistral/src/adapters/text.ts (2)
93-101: Redundant `kind` override and dropped config.

`readonly kind = 'text' as const` is already declared on `BaseTextAdapter` (see packages/typescript/ai-adapters/.../adapter.ts line 116 in the provided snippet), so the override at Line 93 is dead code. More importantly, `super({}, model)` at Line 99 discards the caller's `config`: any future base-class behavior that depends on `TextAdapterConfig` (timeouts, headers, etc.) will silently ignore user configuration.

```diff
- readonly kind = 'text' as const
  readonly name = 'mistral' as const
  private client: Mistral

  constructor(config: MistralTextConfig, model: TModel) {
-   super({}, model)
+   super(config, model)
    this.client = createMistralClient(config)
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 93 - 101, The constructor in MistralTextAdapter wrongly overrides an already-declared kind and discards the passed config by calling super({}, model); update the class to remove the redundant readonly kind declaration (since BaseTextAdapter provides it) and pass the incoming config into the base constructor (call super(config, model)) so TextAdapterConfig (timeouts/headers/etc.) is preserved; keep the client initialization via createMistralClient(config) and ensure the private client: Mistral and readonly name = 'mistral' remain unchanged.
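A minimal, runnable sketch of the constructor fix described above; the config shape is reduced to two illustrative fields, and the class bodies are stand-ins for the real adapter hierarchy:

```typescript
// Illustrative config shape; the real TextAdapterConfig has more fields.
interface TextAdapterConfig {
  apiKey: string
  timeoutMs?: number
}

class BaseTextAdapter {
  // Declared once on the base class; subclasses need not repeat it.
  readonly kind = 'text' as const
  constructor(
    readonly config: TextAdapterConfig,
    readonly model: string,
  ) {}
}

class MistralTextAdapter extends BaseTextAdapter {
  readonly name = 'mistral' as const
  constructor(config: TextAdapterConfig, model: string) {
    // Forward the caller's config instead of super({}, model),
    // so base-class behavior sees the real timeouts/headers/etc.
    super(config, model)
  }
}
```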
282-338: Tool-call args are serialized twice per delta.

`JSON.stringify(toolCallDelta.function.arguments)` runs at Line 308 to accumulate, and again at Line 328 to emit `TOOL_CALL_ARGS`. For object-valued delta args this is wasted work on the hot streaming path. (On closer inspection, the first chunk's args are emitted correctly: `started` is set at Line 313 before the `toolCall.started` check at Line 324 is evaluated. So only the double stringify needs consolidating.)

♻️ Compute `argsDelta` once per delta:

```diff
-        if (toolCallDelta.function.arguments !== undefined) {
-          const argsDelta =
-            typeof toolCallDelta.function.arguments === 'string'
-              ? toolCallDelta.function.arguments
-              : JSON.stringify(toolCallDelta.function.arguments)
-          toolCall.arguments += argsDelta
-        }
+        const argsDelta =
+          toolCallDelta.function.arguments === undefined
+            ? undefined
+            : typeof toolCallDelta.function.arguments === 'string'
+              ? toolCallDelta.function.arguments
+              : JSON.stringify(toolCallDelta.function.arguments)
+        if (argsDelta !== undefined) {
+          toolCall.arguments += argsDelta
+        }

         if (toolCall.id && toolCall.name && !toolCall.started) {
           toolCall.started = true
           yield { /* TOOL_CALL_START */ ... }
         }

-        if (toolCallDelta.function.arguments !== undefined && toolCall.started) {
-          const argsDelta =
-            typeof toolCallDelta.function.arguments === 'string'
-              ? toolCallDelta.function.arguments
-              : JSON.stringify(toolCallDelta.function.arguments)
+        if (argsDelta !== undefined && toolCall.started) {
           yield {
             type: 'TOOL_CALL_ARGS',
             ...
             delta: argsDelta,
           }
         }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 282 - 338: The loop over deltaToolCalls double-serializes object args; compute the argument delta once per iteration and reuse it for both accumulating into toolCallsInProgress and for emitting the TOOL_CALL_ARGS event. Inside the for-loop handling toolCallDelta (referencing toolCallDelta, toolCallsInProgress, toolCall, TOOL_CALL_START and TOOL_CALL_ARGS), when toolCallDelta.function.arguments !== undefined, create a single const argsDelta = typeof ... ? ... : JSON.stringify(...), use argsDelta to append to toolCall.arguments, and use the same argsDelta in the yield for the TOOL_CALL_ARGS event instead of re-running JSON.stringify.

packages/typescript/ai-mistral/src/message-types.ts (1)
141-149: `schemaDefinition` should be required for `ResponseFormatJsonSchema`.

The adapter's `structuredOutput` always sets `schemaDefinition` and `strict` (text.ts Lines 173-176), and a JSON-schema response format without a schema is meaningless. Making `schemaDefinition` required reflects actual usage and catches call sites that forget it.

```diff
 export interface ResponseFormatJsonSchema {
   type: 'json_schema'
   jsonSchema: {
     name: string
     description?: string
-    schemaDefinition?: { [key: string]: unknown }
+    schemaDefinition: { [key: string]: unknown }
     strict?: boolean
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/message-types.ts` around lines 141 - 149, The ResponseFormatJsonSchema interface defines schemaDefinition as optional but the adapter (structuredOutput in text.ts) always provides schemaDefinition and strict; make schemaDefinition required to match actual usage by changing ResponseFormatJsonSchema so jsonSchema.schemaDefinition is non-optional, update any types/usages that expect it optional, and run typechecks to fix call sites that now must pass schemaDefinition explicitly (look for ResponseFormatJsonSchema, structuredOutput, and any callers constructing json_schema responses).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai-mistral/package.json`:
- Around line 15-20: Add explicit /adapters subpath exports so consumers can
import tree-shakeable adapters instead of the whole package; update the
"exports" object to include entries like "./adapters/text" that map "types" to
"./dist/esm/adapters/text.d.ts" and "import" to "./dist/esm/adapters/text.js"
(and add any other planned adapter entrypoints similarly), ensuring the entry
names match the built files under dist/esm and follow the existing "./": {
"types", "import" } shape used for ".".
In `@packages/typescript/ai-mistral/src/adapters/text.ts`:
- Around line 146-150: Remove the leftover console.* debug logging in the
adapter (e.g., the console.error calls inside the chatStream error handler and
the other similar spots around lines shown); instead rely on emitting the
existing RUN_ERROR event with the error payload (error.message/code) so errors
are propagated without writing to stdout/stderr. Locate the error handlers in
the chatStream function and the other two instances and delete the
console.error/console.log statements, ensuring only the RUN_ERROR emission (or
existing telemetry call) remains.
- Around line 116-151: The catch in chatStream currently yields a RUN_ERROR and
then swallows the exception; update chatStream (the try/catch around
this.client.chat.stream(...) and the catch block) to rethrow the original error
after emitting RUN_ERROR so callers receive a rejection; specifically keep the
hasEmittedRunStarted logic and the RUN_STARTED/RUN_ERROR yields as-is, then add
a `throw err` (rethrow the same Error & { code?: string } captured as `err`) at
the end of that catch block so consumers iterating chatStream observe the
failure; alternatively, if you prefer a single error surface, remove the
internal try/catch inside processMistralStreamChunks so errors bubble to
chatStream instead (choose one approach and implement consistently).
- Around line 340-403: The adapter can misreport computedFinishReason as
'tool_calls' based solely on toolCallsInProgress.size; add a boolean flag (e.g.,
hasEmittedToolCall) initialized false and set it true whenever you yield a
TOOL_CALL_END (inside the loop that inspects toolCallsInProgress), then compute
computedFinishReason using (choice.finishReason === 'tool_calls' ||
hasEmittedToolCall) instead of checking toolCallsInProgress.size > 0 so that
RUN_FINISHED only reports 'tool_calls' when a TOOL_CALL_END was actually
emitted.
- Around line 440-489: mapTextOptionsToMistral currently validates modelOptions
via validateTextProviderOptions but never includes them in the returned payload,
dropping all Mistral-specific fields; modify mapTextOptionsToMistral to merge
the validated modelOptions into the returned object (after converting camelCase
keys to the snake_case names expected by the Mistral SDK) so fields like
response_format, frequency_penalty, presence_penalty, stop, random_seed,
safe_prompt, tool_choice, parallel_tool_calls, n, prediction, top_p and
max_tokens are forwarded; keep existing behavior for tools
(convertToolsToProviderFormat) and messages, and ensure the merged object still
matches the expected return shape of mapTextOptionsToMistral and is used by
chatStream/structuredOutput callers.
- Around line 117-119: Replace the unsafe "as any" casts on the request
parameter objects passed to this.client.chat.stream and
this.client.chat.complete with proper inferred parameter types so SDK signature
changes surface at compile time; specifically, type the stream request as
Parameters<typeof this.client.chat.stream>[0] and the complete request as
Parameters<typeof this.client.chat.complete>[0] (remove the casts at the stream
and complete call sites), while still using your custom message-types where
appropriate and mapping them into these strongly-typed parameter objects.
In `@packages/typescript/ai-mistral/src/message-types.ts`:
- Around line 151-183: Replace the empty interface declarations that currently
make `{}` overly permissive — specifically MistralTextMetadata,
MistralAudioMetadata, MistralVideoMetadata, and MistralDocumentMetadata — with a
truly empty-object type such as Record<string, never> (or add a private
placeholder field) so consumers of MistralMessageMetadataByModality get proper
type safety; update the four interface declarations to use Record<string, never>
(or include a clearly named placeholder property) to prevent values like 42 or
string from type-checking as valid metadata.
In `@packages/typescript/ai-mistral/src/model-meta.ts`:
- Around line 33-47: The MISTRAL_LARGE_LATEST model meta has outdated context
window and pricing; update the constant MISTRAL_LARGE_LATEST (the object
satisfying ModelMeta<MistralTextProviderOptions>) so context_window is 256_000
and set pricing.input.normal to 0.5 and pricing.output.normal to 1.5 to match
current Mistral specs; keep the other fields (name, max_completion_tokens,
supports) unchanged and ensure the object still uses "as const satisfies
ModelMeta<MistralTextProviderOptions>".
In `@packages/typescript/ai-mistral/src/tools/function-tool.ts`:
- Around line 17-32: The code mutates caller-provided tool.inputSchema via
inputSchema.properties = {}, which can leak state across uses; instead, create a
non-mutating copy before modifying: build a new schema object (e.g., copy
inputSchema and set properties to inputSchema.properties ?? {}) and pass that
copy into makeMistralStructuredOutputCompatible, then set
jsonSchema.additionalProperties = false; reference symbols: tool.inputSchema,
inputSchema, makeMistralStructuredOutputCompatible, jsonSchema.
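The non-mutating copy described above can be sketched like this; `JsonSchema` and the helper name are simplified stand-ins for the package's actual types:

```typescript
// Simplified JSON-schema shape for illustration.
interface JsonSchema {
  type: string
  properties?: Record<string, unknown>
  [key: string]: unknown
}

// Copy the caller's schema before defaulting `properties`, instead of
// assigning inputSchema.properties = {} on the original object. Repeated
// conversions then can't leak state back into tool.inputSchema.
function toMistralParameters(inputSchema: JsonSchema): JsonSchema {
  return { ...inputSchema, properties: inputSchema.properties ?? {} }
}
```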
In `@packages/typescript/ai-mistral/src/utils/client.ts`:
- Around line 51-53: The generateId function produces a short, variable-length,
collision-prone suffix by using Math.random().toString(36).substring(7); update
generateId to produce a stable, high-entropy ID: either replace the random tail
with crypto.randomUUID() (available in Node 14.17+/browsers) and format as
`${prefix}-${crypto.randomUUID()}`, or if you must keep Math.random(), use a
fixed-length slice such as Math.random().toString(36).substring(2,10) (or
repeat/concat to reach desired length) so the suffix is deterministic in length;
locate and update the generateId function in client.ts accordingly.
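A sketch of the `crypto.randomUUID()` variant the comment recommends; the `prefix` parameter is assumed from the described `generateId` call shape:

```typescript
import { randomUUID } from 'node:crypto'

// Fixed-length, high-entropy suffix instead of
// Math.random().toString(36).substring(7), whose length varies
// and whose entropy is low enough to risk collisions.
function generateId(prefix: string): string {
  return `${prefix}-${randomUUID()}`
}
```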
In `@packages/typescript/ai-mistral/src/utils/schema-converter.ts`:
- Around line 53-83: The loop in makeMistralStructuredOutputCompatible recurses
into object/array properties but skips adding 'null' for optional nested fields;
update the branches handling prop.type === 'object' and prop.type === 'array'
(and their assignments to properties[propName]) so that after you recurse you
also apply the same nullability logic used in the else branch when wasOptional
is true: if the resulting property has a single string type, convert it to a
union [type, 'null']; if it already has an array type and doesn't include
'null', append 'null'. Ensure this is applied for the object branch (after
calling makeMistralStructuredOutputCompatible) and the array branch (after
wrapping items) while preserving existing type arrays.
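The nullability rule described above, extracted into a standalone sketch. The real code applies this inline after recursing into object/array branches; `Schema` here is a simplified stand-in for the converter's schema type:

```typescript
// Minimal schema shape: `type` is either a single name or a union array.
type Schema = { type: string | Array<string>; [key: string]: unknown }

// Widen an optional property's type to include 'null':
// - a single string type becomes a [type, 'null'] union,
// - an array type gets 'null' appended unless it's already present.
function addNullability(schema: Schema): Schema {
  if (typeof schema.type === 'string') {
    return { ...schema, type: [schema.type, 'null'] }
  }
  return schema.type.includes('null')
    ? schema
    : { ...schema, type: [...schema.type, 'null'] }
}
```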
In `@packages/typescript/ai-mistral/tests/mistral-adapter.test.ts`:
- Around line 371-374: The test currently narrows runFinishedChunk with
chunks.find((c) => c.type === 'RUN_FINISHED') then checks
runFinishedChunk.finishReason, which can silently skip assertions if
runFinishedChunk is undefined; add an explicit assertion
expect(runFinishedChunk).toBeDefined() immediately after computing
runFinishedChunk (before the if / property access) to ensure the test fails when
no RUN_FINISHED chunk is emitted and then assert
expect(runFinishedChunk!.finishReason).toBe('tool_calls').
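The same defined-before-access pattern in plain assertions (the actual test would use Vitest's `expect(runFinishedChunk).toBeDefined()`; the chunk type below is a reduced stand-in):

```typescript
type StreamChunk = { type: string; finishReason?: string }

// Fail loudly when the chunk is missing, instead of an if-guard
// that silently skips the follow-up assertion.
function requireRunFinished(chunks: Array<StreamChunk>): StreamChunk {
  const runFinished = chunks.find((c) => c.type === 'RUN_FINISHED')
  if (runFinished === undefined) {
    throw new Error('no RUN_FINISHED chunk emitted')
  }
  return runFinished
}
```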
In `@packages/typescript/ai-mistral/vite.config.ts`:
- Around line 5-27: The Vitest test block inside the defineConfig object in
vite.config.ts is redundant with the dedicated vitest.config.ts and should be
removed: delete the entire test: { ... } block (the object assigned under config
in the defineConfig call) so vite.config.ts only contains non-test Vite config.
Also update any test include patterns to follow project guidelines by using
src/**/*.test.ts (adjust references that previously used tests/**/*.test.ts) and
move existing test files from tests/ into the corresponding src/ locations;
ensure vitest.config.ts remains the single source of truth for test
configuration.
In `@packages/typescript/ai-mistral/vitest.config.ts`:
- Around line 1-22: Remove the redundant vitest.config.ts file (which exports
defineConfig / default config) and consolidate its test block into the existing
vite.config.ts by keeping the test configuration (globals, environment, include,
coverage) only in vite.config.ts; delete the vitest.config.ts file so Vitest
will use the configuration from vite.config.ts like other adapter packages.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4902c3d3-c076-442e-b548-0ec58eee37bf
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (18)
- .changeset/add-ai-mistral.md
- packages/typescript/ai-mistral/README.md
- packages/typescript/ai-mistral/package.json
- packages/typescript/ai-mistral/src/adapters/text.ts
- packages/typescript/ai-mistral/src/index.ts
- packages/typescript/ai-mistral/src/message-types.ts
- packages/typescript/ai-mistral/src/model-meta.ts
- packages/typescript/ai-mistral/src/text/text-provider-options.ts
- packages/typescript/ai-mistral/src/tools/function-tool.ts
- packages/typescript/ai-mistral/src/tools/index.ts
- packages/typescript/ai-mistral/src/tools/tool-converter.ts
- packages/typescript/ai-mistral/src/utils/client.ts
- packages/typescript/ai-mistral/src/utils/index.ts
- packages/typescript/ai-mistral/src/utils/schema-converter.ts
- packages/typescript/ai-mistral/tests/mistral-adapter.test.ts
- packages/typescript/ai-mistral/tsconfig.json
- packages/typescript/ai-mistral/vite.config.ts
- packages/typescript/ai-mistral/vitest.config.ts
```ts
const config = defineConfig({
  test: {
    name: packageJson.name,
    dir: './',
    watch: false,
    globals: true,
    environment: 'node',
    include: ['tests/**/*.test.ts'],
    coverage: {
      provider: 'v8',
      reporter: ['text', 'json', 'html', 'lcov'],
      exclude: [
        'node_modules/',
        'dist/',
        'tests/',
        '**/*.test.ts',
        '**/*.config.ts',
        '**/types.ts',
      ],
      include: ['src/**/*.ts'],
    },
  },
})
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
fd -t f 'vitest.config.ts|vite.config.ts' packages/typescript/ai-mistral
cat packages/typescript/ai-mistral/vitest.config.ts 2>/dev/null || true
```

Repository: TanStack/ai
Length of output: 609
Remove duplicate Vitest configuration and align test location with project guidelines.
Both vite.config.ts and vitest.config.ts contain overlapping test configuration. The dedicated vitest.config.ts is the proper entry point for Vitest; the test block in vite.config.ts is redundant and should be removed. Additionally, the test include pattern uses tests/**/*.test.ts, but project guidelines require unit tests to be placed alongside source code at src/**/*.test.ts. Update the include pattern and move existing test files to match this structure.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-mistral/vite.config.ts` around lines 5 - 27, The
Vitest test block inside the defineConfig object in vite.config.ts is redundant
with the dedicated vitest.config.ts and should be removed: delete the entire
test: { ... } block (the object assigned under config in the defineConfig call)
so vite.config.ts only contains non-test Vite config. Also update any test
include patterns to follow project guidelines by using src/**/*.test.ts (adjust
references that previously used tests/**/*.test.ts) and move existing test files
from tests/ into the corresponding src/ locations; ensure vitest.config.ts
remains the single source of truth for test configuration.
```ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    globals: true,
    environment: 'node',
    include: ['tests/**/*.test.ts'],
    coverage: {
      provider: 'v8',
      reporter: ['text', 'json', 'html', 'lcov'],
      exclude: [
        'node_modules/',
        'dist/',
        'tests/',
        '**/*.test.ts',
        '**/*.config.ts',
        '**/types.ts',
      ],
      include: ['src/**/*.ts'],
    },
  },
})
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
fd -t f 'vite(st)?\.config\.ts' packages/typescript | sort
for f in $(fd -t f 'vite(st)?\.config\.ts' packages/typescript); do
  echo "=== $f ==="
  cat "$f"
done
```

Repository: TanStack/ai
Length of output: 32275
Remove the redundant vitest.config.ts file.
The vitest.config.ts duplicates the test configuration already present in vite.config.ts. The test block in vite.config.ts (globals, environment, include, coverage) is identical to the standalone vitest.config.ts. Since Vitest prefers vitest.config.* when present, the simpler approach is to consolidate test configuration into vite.config.ts only and remove the redundant file, matching the pattern used by other adapter packages (ai-code-mode, ai-gemini, ai-groq, ai-openai, etc.).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-mistral/vitest.config.ts` around lines 1 - 22, Remove
the redundant vitest.config.ts file (which exports defineConfig / default
config) and consolidate its test block into the existing vite.config.ts by
keeping the test configuration (globals, environment, include, coverage) only in
vite.config.ts; delete the vitest.config.ts file so Vitest will use the
configuration from vite.config.ts like other adapter packages.
Would you mind adding this to the e2e test suite as an additional provider? I don't feel comfortable merging new adapters if they don't pass our e2e suite!
Good shout! Will do :)
Actionable comments posted: 2
♻️ Duplicate comments (1)
packages/typescript/ai-mistral/src/adapters/text.ts (1)
226-255: ⚠️ Potential issue | 🟡 Minor
Error-handling asymmetry between `chatStream`'s outer catch and `processMistralStreamChunks`.
Pre-stream failures (e.g., `fetchRawMistralStream` throwing before yielding on a bad request/401) flow through this outer catch, emit `RUN_ERROR`, and then `throw err` at line 253 — consumers must handle rejection. Mid-stream failures, however, are swallowed by the inner `try/catch` in `processMistralStreamChunks` (lines 502-515), which yields `RUN_ERROR` without rethrowing, so the iterator completes normally. Consumers therefore cannot rely on `for await … of chatStream(...)` rejecting or not rejecting consistently on failure.
Pick one policy and apply it uniformly: either remove the inner catch and let all errors bubble to this outer handler, or have the inner catch also rethrow after yielding `RUN_ERROR`.
🤖 Prompt for AI Agents
RUN_ERROR.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 226 - 255, The error handling is inconsistent: pre-stream errors are caught by the outer catch in chatStream and rethrown, while processMistralStreamChunks currently catches mid-stream errors and only yields RUN_ERROR without rethrowing; update processMistralStreamChunks so that its inner catch (the try/catch inside processMistralStreamChunks) still yields the RUN_ERROR event like it does now but then rethrows the error (throw err) after yielding, so all errors uniformly propagate to the outer handler (and consumers of chatStream) — locate processMistralStreamChunks and add a rethrow in its catch block after yielding the RUN_ERROR.
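The "yield `RUN_ERROR`, then rethrow" policy from the prompt above reduces to a small control-flow pattern. The real adapter uses an async generator; the synchronous sketch below shows the same flow so it can be run directly, with event names mirroring the AG-UI chunk types.

```typescript
// Sketch of the uniform policy: surface the failure as an event for
// event-stream consumers, then rethrow so iteration also rejects.
type StreamChunk =
  | { type: 'TEXT_MESSAGE_CONTENT'; text: string }
  | { type: 'RUN_ERROR'; message: string }

function* processChunks(source: Iterable<string>): Generator<StreamChunk> {
  try {
    for (const text of source) {
      if (text === 'boom') throw new Error('mid-stream failure')
      yield { type: 'TEXT_MESSAGE_CONTENT', text }
    }
  } catch (err) {
    // Emit the terminal event so UI consumers can render the error...
    yield { type: 'RUN_ERROR', message: (err as Error).message }
    // ...and still rethrow so `for...of` callers see a rejection too.
    throw err
  }
}

const seen: Array<string> = []
let thrown: Error | undefined
try {
  for (const chunk of processChunks(['hi', 'boom'])) {
    seen.push(chunk.type)
  }
} catch (err) {
  thrown = err as Error
}
```

Consumers now get both signals consistently: a `RUN_ERROR` chunk in the event stream and a thrown error from the loop.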
🧹 Nitpick comments (6)
packages/typescript/ai-mistral/tests/mistral-adapter.test.ts (1)
90-136: Minor: no `mockComplete`/`fetch` reset between tests in this block.
Tests in `describe('Text adapter')` don't invoke streaming, so the missing `beforeEach` reset is harmless today — but if someone later adds a streaming assertion here without noticing the outer block has no reset, prior `vi.stubGlobal('fetch', …)` state could leak. Adding a `beforeEach(() => { vi.clearAllMocks() })` here mirrors the `Mistral AG-UI event emission` block and is defensive.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/tests/mistral-adapter.test.ts` around lines 90 - 136, Add a beforeEach to the "Text adapter" describe block to clear any mocked globals or fetch state to prevent test leakage: inside the describe('Text adapter') block add beforeEach(() => { vi.clearAllMocks() }) (or vi.resetAllMocks()/vi.restoreAllMocks() if you prefer) to mirror the Mistral AG-UI event emission block and ensure prior vi.stubGlobal('fetch', ...) or mockComplete state cannot leak into these tests.
packages/typescript/ai-mistral/src/adapters/text.ts (2)
36-36: Array-type style violations flagged by ESLint.
`ChatCompletionMessageParam[]`/`unknown[]` should use the `Array<T>` form used throughout this file. Static analysis flags both on lines 36 and 126.
🧹 Proposed change

```diff
-function messagesToSnakeCase(messages: ChatCompletionMessageParam[]): unknown[] {
+function messagesToSnakeCase(
+  messages: Array<ChatCompletionMessageParam>,
+): Array<unknown> {
@@
-  messages: ChatCompletionMessageParam[]
+  messages: Array<ChatCompletionMessageParam>
```

Also applies to: 126-126
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/adapters/text.ts` at line 36, ESLint flags array-type style violations: replace the bracket-style types with the Array<T> form in this file; specifically change the function signature of messagesToSnakeCase from using ChatCompletionMessageParam[] and unknown[] to use Array<ChatCompletionMessageParam> and Array<unknown>, and likewise update the other occurrence around the code referenced at line 126 to use Array<...> instead of the bracket notation so the file consistently uses Array<T>.
148-148: Empty interface that only `extends`.
`export interface MistralTextConfig extends MistralClientConfig {}` adds no members and will trip `@typescript-eslint/no-empty-object-type`. Prefer a `type` alias so the intent (public re-export under a more specific name) is explicit.
🧹 Proposed change

```diff
-/**
- * Configuration for Mistral text adapter.
- */
-export interface MistralTextConfig extends MistralClientConfig {}
+/**
+ * Configuration for Mistral text adapter.
+ */
+export type MistralTextConfig = MistralClientConfig
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/adapters/text.ts` at line 148, The exported empty interface MistralTextConfig that only extends MistralClientConfig should be replaced with a type alias to avoid the `@typescript-eslint/no-empty-object-type` lint rule; change the declaration for MistralTextConfig to a type alias (e.g., export type MistralTextConfig = MistralClientConfig) so the file still publicly re-exports the same shape without an empty interface.
packages/typescript/ai-mistral/src/utils/client.ts (1)
46-62: Browser `window.env` fallback uses `as any` and reads a non-standard global.
`(globalThis as any).window?.env` is not a standard browser surface — it only works if a build step injects `window.env` (e.g., some custom Vite/CRA setups). In a plain browser, `window.env` is `undefined` and you fall through to `process.env`, which doesn't exist, so the error message ("Please set it in your environment variables…") is thrown in a context where "environment variables" isn't meaningful. Consider either documenting this convention or narrowing the fallback to Node-only, and typing the env lookup without `as any` (e.g., via `(globalThis as { window?: { env?: Record<string, string> } }).window?.env`).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/utils/client.ts` around lines 46 - 62, The getMistralApiKeyFromEnv function currently uses an untyped (globalThis as any).window?.env fallback and then tries process.env, which misbehaves in plain browsers; change the lookup to be Node-only by checking typeof process !== 'undefined' && typeof process.env !== 'undefined' first and read process.env.MISTRAL_API_KEY there, or if you must support injected window.env explicitly cast globalThis to a safe type like (globalThis as { window?: { env?: Record<string,string> } }) and read window.env.MISTRAL_API_KEY; also update the thrown message around MISTRAL_API_KEY to not instruct browser users to set "environment variables" (clarify Node vs injected window.env) so the error is accurate for the execution environment.
testing/e2e/tests/test-matrix.ts (1)
14-140: Drift risk: this support matrix fully duplicates `testing/e2e/src/lib/feature-support.ts`.
Not introduced by this PR, but now that two files need `'mistral'` added in lockstep (and 11 individual features each), the duplication is a concrete maintenance hazard — a future provider or feature change will silently skew between them. Consider importing/reusing the matrix from `feature-support.ts` in a follow-up so there is a single source of truth.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/tests/test-matrix.ts` around lines 14 - 140, The file defines a local supportMatrix that duplicates the canonical matrix in feature-support.ts; remove the local supportMatrix and instead import and use the single exported support matrix from feature-support.ts (the module that currently holds the source-of-truth matrix), keeping the same type Record<Feature, Set<Provider>> and replacing all references to the local supportMatrix with the imported symbol (e.g., supportMatrix or getFeatureSupport) so there is one source of truth for provider/feature support.
packages/typescript/ai-mistral/src/model-meta.ts (1)
248-269: Per-model provider options map is effectively uniform — intentional?
`MistralChatModelProviderOptionsByName` maps every model to the same `MistralTextProviderOptions`, so `ResolveProviderOptions<TModel>` always yields the same type. That still satisfies the tree-shakeable per-model typing mechanism, but if you plan to surface capability differences later (e.g., vision-only fields on `pixtral-*`, reasoning-specific options on `magistral-*`, or prohibiting `tools` on models that lack it), consider branching this map now so it actually narrows per model.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/model-meta.ts` around lines 248 - 269, MistralChatModelProviderOptionsByName currently maps every model key to the same MistralTextProviderOptions so ResolveProviderOptions<TModel> never narrows per-model; update the map to assign model-specific option types (e.g., map pixtral-* to a MistralVisionProviderOptions, magistral-* to a MistralReasoningProviderOptions, or specific entries for each (typeof MISTRAL_CHAT_MODELS)[number]) so ResolveProviderOptions will resolve to the correct per-model type; adjust or add the new provider option types and ensure ResolveProviderOptions and any uses still fall back to MistralTextProviderOptions for unknown models.
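Branching the map so the conditional type actually narrows might look like the sketch below. The option interfaces and model names are illustrative stand-ins; the real package would key this off `MISTRAL_CHAT_MODELS` and its existing `ResolveProviderOptions` helper.

```typescript
// Hypothetical per-model narrowing: each model key maps to its own
// options shape instead of one uniform MistralTextProviderOptions.
interface BaseOptions {
  temperature?: number
}
interface VisionOptions extends BaseOptions {
  imageDetail?: 'low' | 'high'
}
interface ReasoningOptions extends BaseOptions {
  reasoningEffort?: 'low' | 'medium' | 'high'
}

type ProviderOptionsByName = {
  'mistral-large-latest': BaseOptions
  'pixtral-large-latest': VisionOptions
  'magistral-medium-latest': ReasoningOptions
}

type ResolveProviderOptions<TModel> =
  TModel extends keyof ProviderOptionsByName
    ? ProviderOptionsByName[TModel]
    : BaseOptions

// Compile-time check: the pixtral model resolves to the vision shape,
// so vision-only fields are accepted here and rejected for other models.
const pixtralOpts: ResolveProviderOptions<'pixtral-large-latest'> = {
  imageDetail: 'high',
}
```

With the uniform map, the same conditional type would accept identical options for every model, which is exactly the narrowing the comment above says is being left on the table.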
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai-mistral/src/adapters/text.ts`:
- Around line 527-531: The current assembly of serverURL in the adapter
unconditionally appends "/v1/chat/completions" and will produce double "/v1/v1"
if a user passes a base that already includes a version segment; update the
logic around config.serverURL (used to build serverURL and url) to normalize and
strip any trailing "/v1" or "/v1/" (in addition to trailing slash) before
appending "/v1/chat/completions", so both bare bases and bases that already
include the version produce a correct single "/v1" in the final url.
- Around line 523-616: Add a brief inline comment next to the Mistral SDK client
initialization (the client created around line 209) stating that the SDK client
is intentionally retained because it's required by the structuredOutput logic
(see structuredOutput function around line 272) and is not used for streaming
paths; mention that e2e tests route Mistral through llmock via providers.ts
(serverURL: base) so the custom SSE streaming path remains covered, to prevent
future contributors from removing the client when refactoring streaming code.
---
Duplicate comments:
In `@packages/typescript/ai-mistral/src/adapters/text.ts`:
- Around line 226-255: The error handling is inconsistent: pre-stream errors are
caught by the outer catch in chatStream and rethrown, while
processMistralStreamChunks currently catches mid-stream errors and only yields
RUN_ERROR without rethrowing; update processMistralStreamChunks so that its
inner catch (the try/catch inside processMistralStreamChunks) still yields the
RUN_ERROR event like it does now but then rethrows the error (throw err) after
yielding, so all errors uniformly propagate to the outer handler (and consumers
of chatStream) — locate processMistralStreamChunks and add a rethrow in its
catch block after yielding the RUN_ERROR.
---
Nitpick comments:
In `@packages/typescript/ai-mistral/src/adapters/text.ts`:
- Line 36: ESLint flags array-type style violations: replace the bracket-style
types with the Array<T> form in this file; specifically change the function
signature of messagesToSnakeCase from using ChatCompletionMessageParam[] and
unknown[] to use Array<ChatCompletionMessageParam> and Array<unknown>, and
likewise update the other occurrence around the code referenced at line 126 to
use Array<...> instead of the bracket notation so the file consistently uses
Array<T>.
- Line 148: The exported empty interface MistralTextConfig that only extends
MistralClientConfig should be replaced with a type alias to avoid the
`@typescript-eslint/no-empty-object-type` lint rule; change the declaration for
MistralTextConfig to a type alias (e.g., export type MistralTextConfig =
MistralClientConfig) so the file still publicly re-exports the same shape
without an empty interface.
In `@packages/typescript/ai-mistral/src/model-meta.ts`:
- Around line 248-269: MistralChatModelProviderOptionsByName currently maps
every model key to the same MistralTextProviderOptions so
ResolveProviderOptions<TModel> never narrows per-model; update the map to assign
model-specific option types (e.g., map pixtral-* to a
MistralVisionProviderOptions, magistral-* to a MistralReasoningProviderOptions,
or specific entries for each (typeof MISTRAL_CHAT_MODELS)[number]) so
ResolveProviderOptions will resolve to the correct per-model type; adjust or add
the new provider option types and ensure ResolveProviderOptions and any uses
still fall back to MistralTextProviderOptions for unknown models.
In `@packages/typescript/ai-mistral/src/utils/client.ts`:
- Around line 46-62: The getMistralApiKeyFromEnv function currently uses an
untyped (globalThis as any).window?.env fallback and then tries process.env,
which misbehaves in plain browsers; change the lookup to be Node-only by
checking typeof process !== 'undefined' && typeof process.env !== 'undefined'
first and read process.env.MISTRAL_API_KEY there, or if you must support
injected window.env explicitly cast globalThis to a safe type like (globalThis
as { window?: { env?: Record<string,string> } }) and read
window.env.MISTRAL_API_KEY; also update the thrown message around
MISTRAL_API_KEY to not instruct browser users to set "environment variables"
(clarify Node vs injected window.env) so the error is accurate for the execution
environment.
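The Node-first, typed lookup described above can be sketched like this. The `window.env` convention is an assumption about the consumer's bundler (it only exists if a build step injects it), and the function name mirrors the adapter's but the body is illustrative.

```typescript
// Sketch: read Node's process.env first, then an explicitly typed,
// optional injected window.env — no `as any` anywhere.
type Env = Record<string, string | undefined>
const nodeEnv = (globalThis as { process?: { env: Env } }).process?.env

function getMistralApiKeyFromEnv(): string {
  const fromNode = nodeEnv?.MISTRAL_API_KEY
  if (fromNode) return fromNode
  const injected = (
    globalThis as { window?: { env?: Record<string, string> } }
  ).window?.env?.MISTRAL_API_KEY
  if (injected) return injected
  throw new Error(
    'MISTRAL_API_KEY is not set. In Node, export it as an environment ' +
      'variable; in the browser, inject it via window.env at build time.',
  )
}

// Usage under Node: the key comes from process.env.
if (nodeEnv) nodeEnv.MISTRAL_API_KEY = 'test-key'
const key = getMistralApiKeyFromEnv()
```

The error message now says something accurate for each environment instead of telling browser users to set environment variables.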
In `@packages/typescript/ai-mistral/tests/mistral-adapter.test.ts`:
- Around line 90-136: Add a beforeEach to the "Text adapter" describe block to
clear any mocked globals or fetch state to prevent test leakage: inside the
describe('Text adapter') block add beforeEach(() => { vi.clearAllMocks() }) (or
vi.resetAllMocks()/vi.restoreAllMocks() if you prefer) to mirror the Mistral
AG-UI event emission block and ensure prior vi.stubGlobal('fetch', ...) or
mockComplete state cannot leak into these tests.
In `@testing/e2e/tests/test-matrix.ts`:
- Around line 14-140: The file defines a local supportMatrix that duplicates the
canonical matrix in feature-support.ts; remove the local supportMatrix and
instead import and use the single exported support matrix from
feature-support.ts (the module that currently holds the source-of-truth matrix),
keeping the same type Record<Feature, Set<Provider>> and replacing all
references to the local supportMatrix with the imported symbol (e.g.,
supportMatrix or getFeatureSupport) so there is one source of truth for
provider/feature support.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3653946e-37c6-4081-89cf-360a99c9e041
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (13)
- packages/typescript/ai-mistral/package.json
- packages/typescript/ai-mistral/src/adapters/text.ts
- packages/typescript/ai-mistral/src/message-types.ts
- packages/typescript/ai-mistral/src/model-meta.ts
- packages/typescript/ai-mistral/src/tools/function-tool.ts
- packages/typescript/ai-mistral/src/utils/client.ts
- packages/typescript/ai-mistral/src/utils/schema-converter.ts
- packages/typescript/ai-mistral/tests/mistral-adapter.test.ts
- testing/e2e/package.json
- testing/e2e/src/lib/feature-support.ts
- testing/e2e/src/lib/providers.ts
- testing/e2e/src/lib/types.ts
- testing/e2e/tests/test-matrix.ts
✅ Files skipped from review due to trivial changes (3)
- testing/e2e/package.json
- testing/e2e/src/lib/types.ts
- packages/typescript/ai-mistral/package.json
🚧 Files skipped from review as they are similar to previous changes (3)
- packages/typescript/ai-mistral/src/tools/function-tool.ts
- packages/typescript/ai-mistral/src/utils/schema-converter.ts
- packages/typescript/ai-mistral/src/message-types.ts
```ts
private async *fetchRawMistralStream(
  params: RawStreamParams,
  config: MistralClientConfig,
): AsyncGenerator<MistralStreamEvent> {
  const serverURL = (config.serverURL ?? 'https://api.mistral.ai').replace(
    /\/$/,
    '',
  )
  const url = `${serverURL}/v1/chat/completions`

  const {
    stream: _stream,
    messages,
    maxTokens,
    topP,
    randomSeed,
    responseFormat,
    toolChoice,
    parallelToolCalls,
    frequencyPenalty,
    presencePenalty,
    safePrompt,
    ...rest
  } = params

  const body: Record<string, unknown> = {
    ...rest,
    messages: messagesToSnakeCase(messages),
    stream: true,
    ...(maxTokens != null && { max_tokens: maxTokens }),
    ...(topP != null && { top_p: topP }),
    ...(randomSeed != null && { random_seed: randomSeed }),
    ...(responseFormat != null && { response_format: responseFormat }),
    ...(toolChoice != null && { tool_choice: toolChoice }),
    ...(parallelToolCalls != null && {
      parallel_tool_calls: parallelToolCalls,
    }),
    ...(frequencyPenalty != null && {
      frequency_penalty: frequencyPenalty,
    }),
    ...(presencePenalty != null && {
      presence_penalty: presencePenalty,
    }),
    ...(safePrompt != null && { safe_prompt: safePrompt }),
  }

  const headers: Record<string, string> = {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${config.apiKey}`,
    ...config.defaultHeaders,
  }

  const response = await fetch(url, {
    method: 'POST',
    headers,
    body: JSON.stringify(body),
  })

  if (!response.ok || !response.body) {
    const errorText = await response.text()
    throw new Error(`Mistral API error ${response.status}: ${errorText}`)
  }

  const reader = response.body.getReader()
  const decoder = new TextDecoder()
  let buffer = ''

  try {
    while (true) {
      const { done, value } = await reader.read()
      if (done) break

      buffer += decoder.decode(value, { stream: true })
      const lines = buffer.split('\n')
      buffer = lines.pop()!

      for (const line of lines) {
        const trimmed = line.trim()
        if (!trimmed.startsWith('data:')) continue
        const data = trimmed.slice(5).trimStart()
        if (data === '[DONE]') return

        try {
          const raw = JSON.parse(data) as Record<string, unknown>
          yield { data: rawChunkToCamelCase(raw) }
        } catch {
          // skip malformed chunks
        }
      }
    }
  } finally {
    reader.releaseLock()
  }
}
```
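The reader loop above reduces to a pure buffering step: append the new network chunk, split on newlines, keep the trailing partial line for next time, and emit only complete `data:` payloads. A hypothetical extraction of that step, useful for unit-testing the SSE parsing without a real stream:

```typescript
// Illustrative pure function mirroring the adapter's buffer handling;
// not an actual package export.
function drainSSEBuffer(
  buffer: string,
  chunk: string,
): { events: Array<string>; rest: string; done: boolean } {
  const combined = buffer + chunk
  const lines = combined.split('\n')
  const rest = lines.pop()! // partial last line stays buffered
  const events: Array<string> = []
  for (const line of lines) {
    const trimmed = line.trim()
    if (!trimmed.startsWith('data:')) continue
    const data = trimmed.slice(5).trimStart()
    if (data === '[DONE]') return { events, rest: '', done: true }
    events.push(data)
  }
  return { events, rest, done: false }
}

// A payload split across two network chunks is only emitted once complete.
const first = drainSSEBuffer('', 'data: {"a":')
const second = drainSSEBuffer(first.rest, '1}\ndata: [DONE]\n')
```

This makes the mid-chunk-boundary behavior (the `buffer = lines.pop()!` trick) directly assertable.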
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Confirm e2e tests reference mistral and exercise streaming/tool paths
rg -nP --type=ts -C2 '\bmistral\b' testing/e2e/tests
rg -nP --type=ts 'chatStream|TOOL_CALL_' packages/typescript/ai-mistral/src
```

Repository: TanStack/ai
Length of output: 3781

🏁 Script executed:

```shell
# Check the actual e2e test file structure and implementation
find testing/e2e -name "*.ts" -type f | head -20
```

Repository: TanStack/ai
Length of output: 763

🏁 Script executed:

```shell
# Look at the specific e2e test file structure
ls -la testing/e2e/tests/
```

Repository: TanStack/ai
Length of output: 2091

🏁 Script executed:

```shell
# Check if there are comments in the test.ts file around mistral streaming
rg -nP --type=ts -A5 -B5 'mistral.*stream|streaming.*mistral' testing/e2e/tests
```

Repository: TanStack/ai
Length of output: 37

🏁 Script executed:

```shell
# Check lines around 209 and 523-616 in the text.ts file to see existing comments
sed -n '205,230p' packages/typescript/ai-mistral/src/adapters/text.ts
sed -n '515,530p' packages/typescript/ai-mistral/src/adapters/text.ts
```

Repository: TanStack/ai
Length of output: 1379

🏁 Script executed:

```shell
# Search for llmock or mock server references in e2e tests
rg -nP --type=ts 'llmock|mock.*server|mockServer' testing/e2e
```

Repository: TanStack/ai
Length of output: 2077

🏁 Script executed:

```shell
# Check the e2e test implementations for tool-calling and parallel-tool-calls
cat testing/e2e/tests/tool-calling.spec.ts
cat testing/e2e/tests/parallel-tool-calls.spec.ts
cat testing/e2e/tests/summarize-stream.spec.ts
```

Repository: TanStack/ai
Length of output: 2803

🏁 Script executed:

```shell
# Check how the e2e framework calls these tests and whether it configures mistral
head -50 testing/e2e/tests/helpers.ts
```

Repository: TanStack/ai
Length of output: 1649

🏁 Script executed:

```shell
# Check server-functions.ts to see how API calls are made
sed -n '1,50p' testing/e2e/src/lib/server-functions.ts
```

Repository: TanStack/ai
Length of output: 1321

🏁 Script executed:

```shell
# Check llmock-server configuration to see if mistral is explicitly configured
cat testing/e2e/src/lib/llmock-server.ts
```

Repository: TanStack/ai
Length of output: 942

🏁 Script executed:

```shell
# Check if there are any comments around line 209 in text.ts explaining SDK client usage
sed -n '205,220p' packages/typescript/ai-mistral/src/adapters/text.ts
```

Repository: TanStack/ai
Length of output: 529

🏁 Script executed:

```shell
# Check for any comments in the custom SSE method and beyond
sed -n '520,545p' packages/typescript/ai-mistral/src/adapters/text.ts
```

Repository: TanStack/ai
Length of output: 760

🏁 Script executed:

```shell
# Check how providers.ts configures mistral serverURL when aimockPort is passed
cat testing/e2e/src/lib/providers.ts
```

Repository: TanStack/ai
Length of output: 3571

🏁 Script executed:

```shell
# Check if there's mistral-specific configuration in llmock-server or anywhere else
rg -nP --type=ts 'mistral.*serverURL|mistral.*api|upstreamProviders' testing/e2e/src
```

Repository: TanStack/ai
Length of output: 121

🏁 Script executed:

```shell
# Check if mistral is configured in feature-support.ts
cat testing/e2e/src/lib/feature-support.ts
```

Repository: TanStack/ai
Length of output: 2876

🏁 Script executed:

```shell
# Check if structuredOutput uses the SDK client
rg -nP --type=ts -B5 -A10 'structuredOutput' packages/typescript/ai-mistral/src/adapters/text.ts | head -50
```

Repository: TanStack/ai
Length of output: 599

🏁 Script executed:

```shell
# Check if mistral fixtures exist (recorded responses)
find testing/e2e/fixtures -name "*mistral*" -o -name "*mistral*" 2>/dev/null | head -20
```

Repository: TanStack/ai
Length of output: 37

🏁 Script executed:

```shell
# Check the structure of fixtures directory
ls -la testing/e2e/fixtures/recorded/ 2>/dev/null | head -30
```

Repository: TanStack/ai
Length of output: 223

🏁 Script executed:

```shell
# Check how structuredOutput uses the SDK client more thoroughly
sed -n '255,290p' packages/typescript/ai-mistral/src/adapters/text.ts
```

Repository: TanStack/ai
Length of output: 1192

🏁 Script executed:

```shell
# Search for chat.complete usage to understand SDK client usage
rg -nP --type=ts 'chat\.complete|this\.client' packages/typescript/ai-mistral/src/adapters/text.ts
```

Repository: TanStack/ai
Length of output: 232
Add comment at line 209 explaining why the SDK client is retained.
The e2e test matrix confirms mistral is exercised across streaming scenarios (tool-calling, parallel-tool-calls, summarize-stream), and providers.ts correctly configures mistral with serverURL: base to route through llmock — so the custom SSE path is tested. The SDK client (line 209) is correctly used only in structuredOutput (line 272), not in streaming. However, add a brief comment near the client initialization explaining that it's needed for structuredOutput to prevent future contributors from removing it when refactoring streaming logic.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 523 - 616,
Add a brief inline comment next to the Mistral SDK client initialization (the
client created around line 209) stating that the SDK client is intentionally
retained because it's required by the structuredOutput logic (see
structuredOutput function around line 272) and is not used for streaming paths;
mention that e2e tests route Mistral through llmock via providers.ts (serverURL:
base) so the custom SSE streaming path remains covered, to prevent future
contributors from removing the client when refactoring streaming code.
```ts
const serverURL = (config.serverURL ?? 'https://api.mistral.ai').replace(
  /\/$/,
  '',
)
const url = `${serverURL}/v1/chat/completions`
```
serverURL path assembly will double-prefix /v1 if a user passes a base with a version segment.
const serverURL = (config.serverURL ?? 'https://api.mistral.ai').replace(/\/$/, '') strips only a trailing slash, then unconditionally appends /v1/chat/completions. If a user follows the pattern they likely copy from OpenAI-style adapters and passes e.g. https://my.proxy/mistral/v1 (or a gateway that already routes on /v1), the resulting URL becomes …/v1/v1/chat/completions and requests 404 silently at runtime. The e2e call site (testing/e2e/src/lib/providers.ts line 100) happens to pass a bare base so tests won't catch this.
Consider stripping a trailing /v1 as well, or documenting explicitly that serverURL must not include a version segment.
🛠 Proposed fix
```diff
-  const serverURL = (config.serverURL ?? 'https://api.mistral.ai').replace(
-    /\/$/,
-    '',
-  )
-  const url = `${serverURL}/v1/chat/completions`
+  const serverURL = (config.serverURL ?? 'https://api.mistral.ai')
+    .replace(/\/$/, '')
+    .replace(/\/v1$/, '')
+  const url = `${serverURL}/v1/chat/completions`
```
+ const url = `${serverURL}/v1/chat/completions`🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 527 - 531,
The current assembly of serverURL in the adapter unconditionally appends
"/v1/chat/completions" and will produce double "/v1/v1" if a user passes a base
that already includes a version segment; update the logic around
config.serverURL (used to build serverURL and url) to normalize and strip any
trailing "/v1" or "/v1/" (in addition to trailing slash) before appending
"/v1/chat/completions", so both bare bases and bases that already include the
version produce a correct single "/v1" in the final url.
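Pulled out as a standalone helper, the normalization behaves as described above. The helper name is illustrative; it simply strips a trailing slash and a trailing `/v1` before appending the API path.

```typescript
// Runnable sketch of the proposed fix: both bare bases and bases that
// already include `/v1` yield exactly one `/v1` in the final URL.
function buildChatCompletionsURL(serverURL?: string): string {
  const base = (serverURL ?? 'https://api.mistral.ai')
    .replace(/\/$/, '')
    .replace(/\/v1$/, '')
  return `${base}/v1/chat/completions`
}

const fromBare = buildChatCompletionsURL('https://my.proxy/mistral')
const fromVersioned = buildChatCompletionsURL('https://my.proxy/mistral/v1')
const fromDefault = buildChatCompletionsURL()
```

This is exactly the proxy/gateway case the comment describes: without the second `replace`, `fromVersioned` would contain `/v1/v1/chat/completions` and 404 at runtime.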
🎯 Changes
We're working with Mistral's tooling and love the ai package, so I added an adapter for Mistral's API using their client package.
✅ Checklist
the contributing.md file doesn't exist anymore :(
pnpm run test:pr.
🚀 Release Impact
Summary by CodeRabbit