feature/extend AWS Bedrock node with full model catalog and custom model support #6309
Conversation
Code Review
This pull request significantly enhances the AWS Bedrock integration by consolidating support for built-in, imported, fine-tuned, and provisioned-throughput models into a single node. Key improvements include runtime inference profile discovery, automated request format detection for imported models, and robust error normalization. Feedback identifies several areas for refinement: correcting typos in the model catalog, adjusting validation logic to allow geo-prefixed inference profile IDs, ensuring that input values of zero for temperature and tokens are not incorrectly defaulted, and adding safety checks for nullish response bodies from the AWS SDK.
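The zero-value issue called out above comes from using `||` for defaults: `0` is falsy in JavaScript, so an expression like `temperature || defaultTemp` silently discards an explicit `0`. A minimal sketch of the fix using nullish coalescing (`??`), which only falls back on `null`/`undefined` (the `0.7` default here is illustrative, not the node's actual value):

```typescript
// Hypothetical helper illustrating the `||` vs `??` distinction.
// `??` preserves an explicit 0; `||` would replace it with the default.
function resolveTemperature(input: number | undefined): number {
    return input ?? 0.7 // assumed default for illustration only
}
```

The same pattern applies to max-token inputs and to guarding nullish response bodies, where an explicit emptiness check is safer than truthiness.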
```typescript
const credentialConfig = await getAWSCredentialConfig(nodeData, options, iRegion)

const effectiveModel = endpointMigratedArn || customModel
if (effectiveModel && !effectiveModel.startsWith('arn:aws:bedrock:')) {
```
This check prevents the use of geo-prefixed inference profile IDs (e.g., us.anthropic...), which are explicitly handled in the resolveBedrockModel utility (utils.ts:310). This makes that logic unreachable for the customModel input. If geo-prefixed IDs are intended to be supported, the validation should be updated to allow them.
The ARN-only validation on customModel is intentional — the field is labeled "Custom Model ARN" and only accepts arn:aws:bedrock:... values (imported models, provisioned throughput, custom model deployments, inference profile ARNs).
Geo-prefixed inference profile IDs (e.g., us.anthropic.claude-sonnet-4-6) are not user-facing inputs — they're auto-applied internally by resolveBedrockModel() based on the model's inference_profile_geos in models.json and the selected region. The GEO_PREFIX_RE branch at utils.ts:310 is reachable through that auto-application path, not through customModel input.
For built-in models, users use the "Model Name" dropdown + "Region" dropdown and the correct profile is applied automatically.
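The auto-application path described above can be sketched as follows. This is an illustrative reconstruction under assumed shapes, not the actual `resolveBedrockModel()` implementation: the `ModelEntry` interface, `regionToGeo()` helper, and the exact regex are assumptions; only the names `resolveBedrockModel`, `GEO_PREFIX_RE`, and `inference_profile_geos` come from the discussion.

```typescript
// Sketch of geo-prefix auto-application (assumed shapes; the real
// logic lives in utils.ts of the PR).
const GEO_PREFIX_RE = /^(us|eu|apac)\./

interface ModelEntry {
    modelId: string
    inference_profile_geos?: string[] // geos served via inference profiles, per models.json
}

// Assumed mapping from an AWS region to its inference-profile geo
function regionToGeo(region: string): string {
    if (region.startsWith('us-')) return 'us'
    if (region.startsWith('eu-')) return 'eu'
    return 'apac'
}

function resolveBedrockModel(entry: ModelEntry, region: string): string {
    // Already geo-prefixed: pass through unchanged
    if (GEO_PREFIX_RE.test(entry.modelId)) return entry.modelId
    const geo = regionToGeo(region)
    // Prefix only when the catalog lists this model's inference
    // profile for the selected geo
    if (entry.inference_profile_geos?.includes(geo)) {
        return `${geo}.${entry.modelId}`
    }
    return entry.modelId
}
```

This keeps the geo prefix an internal concern: the user never types `us.anthropic...`, so the `customModel` field can stay ARN-only.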
- fixed typo (Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>)
- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
- …BedrockImported.ts (Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>)
thanks @Ankit5467! With these changes, will this still work?
…m-models

# Conflicts:
#	packages/components/models.json
#	packages/components/nodes/chatmodels/AWSBedrock/AWSChatBedrock.ts
#	pnpm-lock.yaml
Yes @HenryHengZJ. #6309 is built on top of #5731's merged state. We import getAWSCredentialConfig() from awsToolsUtils.ts (added by #5731) and pass the resolved credentials through all new code paths: imported model info lookup, inference profile discovery, and the BedrockImportedChat constructor. AssumeRole works end-to-end for both built-in and custom model invocations.
Expand the awsChatBedrock node from limited Claude support to all 63 active
Bedrock models across 16 AWS regions, with automatic inference profile routing.
Add support for imported, fine-tuned, and provisioned-throughput models via a
single "Custom Model ARN" field — the node auto-detects the model type and
routes to the correct API (Converse or InvokeModel) without user configuration.
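The auto-detection described above can be illustrated by classifying the ARN's resource type, which AWS encodes in the sixth colon-separated segment. This is a hypothetical sketch: the function name `classifyBedrockArn` and the kind labels are assumptions, though the ARN resource types (`imported-model`, `provisioned-model`, `custom-model`, `inference-profile`) are real Bedrock resource types.

```typescript
// Illustrative sketch (assumed helper name): classify a Bedrock ARN so
// the node can route to Converse or InvokeModel accordingly.
type BedrockModelKind = 'imported' | 'provisioned' | 'custom' | 'inference-profile' | 'foundation'

function classifyBedrockArn(arn: string): BedrockModelKind {
    // ARN shape: arn:aws:bedrock:<region>:<account>:<resource-type>/<id>
    const resource = arn.split(':')[5] ?? ''
    if (resource.startsWith('imported-model/')) return 'imported'
    if (resource.startsWith('provisioned-model/')) return 'provisioned'
    if (resource.startsWith('custom-model/')) return 'custom'
    if (resource.startsWith('inference-profile/') || resource.startsWith('application-inference-profile/')) {
        return 'inference-profile'
    }
    return 'foundation'
}
```

Imported models generally require the raw InvokeModel API (with a request format matching the base architecture), while foundation and profile-routed models can use the unified Converse API.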
Built-in models:
Custom/imported models:
Error handling: