feature/extend AWS Bedrock node with full model catalog and custom model support#6309

Open
Ankit5467 wants to merge 7 commits into FlowiseAI:main from Ankit5467:feature/bedrock-full-model-catalog-and-custom-models

Conversation

@Ankit5467

Expand the awsChatBedrock node from limited Claude support to all 63 active
Bedrock models across 16 AWS regions, with automatic inference profile routing.
Add support for imported, fine-tuned, and provisioned-throughput models via a
single "Custom Model ARN" field — the node auto-detects the model type and
routes to the correct API (Converse or InvokeModel) without user configuration.

Built-in models:

  • Updated models.json with 63 active models plus 9 legacy models labeled (Legacy)
  • Runtime inference profile discovery auto-applies correct geo prefix per region
  • StopSequences stripping for incompatible models (DeepSeek, GPT-OSS)
  • Temperature auto-retry for models that reject the temperature parameter
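
The geo-prefix routing described above can be sketched roughly as follows. This is an illustrative TypeScript sketch, not the PR's actual resolveBedrockModel() code: the function names (inferGeoPrefix, applyInferenceProfile) and the static geo mapping are assumptions, based on Bedrock's documented cross-region inference profile prefixes (us., eu., apac.).

```typescript
// Illustrative sketch: map an AWS region to its Bedrock cross-region
// inference profile geo prefix, then apply it only when the model's
// catalog entry says a profile exists for that geo.
function inferGeoPrefix(region: string): string | undefined {
    // Order matters: check the more specific 'us-gov-' before 'us-'
    if (region.startsWith('us-gov-')) return 'us-gov'
    if (region.startsWith('us-')) return 'us'
    if (region.startsWith('eu-')) return 'eu'
    if (region.startsWith('ap-')) return 'apac'
    return undefined
}

function applyInferenceProfile(modelId: string, region: string, supportedGeos: string[]): string {
    const geo = inferGeoPrefix(region)
    // Only prefix when the catalog lists this geo for the model;
    // otherwise fall back to the bare model ID.
    if (geo !== undefined && supportedGeos.includes(geo)) {
        return `${geo}.${modelId}`
    }
    return modelId
}
```

The PR's real implementation additionally discovers available profiles at runtime via the Bedrock API; the static mapping here only illustrates how the prefix would be selected per region.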

Custom/imported models:

  • Format auto-detection via InvokeModel probe at init time
  • Supports imported, fine-tuned (custom-model-deployment), and provisioned-throughput ARNs
  • ARN validation with actionable error messages
  • Tested with 6 imported models across 4 architectures
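
The ARN validation and type detection for the "Custom Model ARN" field might look roughly like the sketch below. detectArnKind and its return labels are hypothetical names, not the PR's identifiers; the resource keywords mirror the ARN types named in the PR description (imported-model, provisioned-model, custom-model-deployment, inference-profile).

```typescript
// Illustrative sketch of ARN-type detection for the Custom Model ARN field.
type CustomModelKind = 'imported' | 'provisioned' | 'custom-deployment' | 'inference-profile' | 'unknown'

function detectArnKind(arn: string): CustomModelKind {
    if (!arn.startsWith('arn:aws:bedrock:')) {
        // Actionable error rather than a raw SDK failure later on
        throw new Error('Custom Model ARN must start with "arn:aws:bedrock:". For built-in models, use the Model Name dropdown instead.')
    }
    if (arn.includes(':imported-model/')) return 'imported'
    if (arn.includes(':provisioned-model/')) return 'provisioned'
    if (arn.includes(':custom-model-deployment/')) return 'custom-deployment'
    if (arn.includes('inference-profile/')) return 'inference-profile'
    return 'unknown'
}
```

In the PR, the detected kind would then decide the API path (Converse vs InvokeModel), with an InvokeModel probe at init time settling the request format for imported models.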

Error handling:

  • normalizeBedrockError() rewrites raw Bedrock errors into user-facing guidance
  • Errors propagate to UI instead of being swallowed silently
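
An error normalizer along these lines rewrites raw SDK errors into guidance before they reach the UI. This is a minimal sketch assuming a few common Bedrock exception names; the PR's actual normalizeBedrockError() mapping is likely more extensive.

```typescript
// Illustrative sketch: translate raw Bedrock SDK errors into
// user-facing guidance instead of surfacing (or swallowing) them raw.
function normalizeBedrockError(err: { name?: string; message?: string }): string {
    const msg = err.message ?? ''
    switch (err.name) {
        case 'AccessDeniedException':
            return `Access denied by Bedrock. Check that your IAM role allows bedrock:InvokeModel for this model. (${msg})`
        case 'ResourceNotFoundException':
            return `Model not found in this region. Verify the model ID/ARN and the selected region. (${msg})`
        case 'ValidationException':
            return `Bedrock rejected the request parameters: ${msg}`
        default:
            // Fall back to the raw message so nothing is silently lost
            return msg || 'Unknown Bedrock error'
    }
}
```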

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request significantly enhances the AWS Bedrock integration by consolidating support for built-in, imported, fine-tuned, and provisioned-throughput models into a single node. Key improvements include runtime inference profile discovery, automated request format detection for imported models, and robust error normalization. Feedback identifies several areas for refinement: correcting typos in the model catalog, adjusting validation logic to allow geo-prefixed inference profile IDs, ensuring that input values of zero for temperature and tokens are not incorrectly defaulted, and adding safety checks for nullish response bodies from the AWS SDK.

Comment thread packages/components/models.json Outdated
const credentialConfig = await getAWSCredentialConfig(nodeData, options, iRegion)

const effectiveModel = endpointMigratedArn || customModel
if (effectiveModel && !effectiveModel.startsWith('arn:aws:bedrock:')) {
Contributor


Severity: medium

This check prevents the use of geo-prefixed inference profile IDs (e.g., us.anthropic...), which are explicitly handled in the resolveBedrockModel utility (utils.ts:310). This makes that logic unreachable for the customModel input. If geo-prefixed IDs are intended to be supported, the validation should be updated to allow them.

Author


The ARN-only validation on customModel is intentional — the field is labeled "Custom Model ARN" and only accepts arn:aws:bedrock:... values (imported models, provisioned throughput, custom model deployments, inference profile ARNs).

Geo-prefixed inference profile IDs (e.g., us.anthropic.claude-sonnet-4-6) are not user-facing inputs — they're auto-applied internally by resolveBedrockModel() based on the model's inference_profile_geos in models.json and the selected region. The GEO_PREFIX_RE branch at utils.ts:310 is reachable through that auto-application path, not through customModel input.

For built-in models, users use the "Model Name" dropdown + "Region" dropdown and the correct profile is applied automatically.
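
The two resolution paths described in this reply can be sketched as follows. Apart from GEO_PREFIX_RE, which the reply names, every identifier here is illustrative rather than the PR's code.

```typescript
// Illustrative sketch of the two paths: a user-supplied Custom Model ARN
// is used verbatim (and must be a full ARN), while geo prefixes are only
// ever applied internally to built-in model IDs.
const GEO_PREFIX_RE = /^(us|eu|apac|us-gov)\./

function resolveModelId(customModel: string | undefined, builtInId: string, geoPrefix?: string): string {
    if (customModel) {
        if (!customModel.startsWith('arn:aws:bedrock:')) {
            const hint = GEO_PREFIX_RE.test(customModel)
                ? 'Geo-prefixed profile IDs are applied automatically for built-in models; this field only accepts full ARNs.'
                : 'Expected a value starting with "arn:aws:bedrock:".'
            throw new Error(`Invalid Custom Model ARN: ${hint}`)
        }
        return customModel // ARN path: used as-is
    }
    // Built-in path: geo prefix chosen from the catalog + region, never typed by the user
    return geoPrefix ? `${geoPrefix}.${builtInId}` : builtInId
}
```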

Comment thread packages/components/nodes/chatmodels/AWSBedrock/AWSChatBedrock.ts Outdated
Ankit5467 and others added 4 commits April 28, 2026 16:47
fixed typo

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
…BedrockImported.ts

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@Ankit5467
Author

@HenryHengZJ

@HenryHengZJ
Contributor

Thanks @Ankit5467! With these changes, will this still work?

…m-models

# Conflicts:
#	packages/components/models.json
#	packages/components/nodes/chatmodels/AWSBedrock/AWSChatBedrock.ts
#	pnpm-lock.yaml
@Ankit5467
Author

Thanks @Ankit5467! With these changes, will this still work?

Yes @HenryHengZJ. #6309 is built on top of #5731's merged state. We import getAWSCredentialConfig() from awsToolsUtils.ts (added by #5731) and pass the resolved credentials through all new code paths: imported model info lookup, inference profile discovery, and the BedrockImportedChat constructor. AssumeRole works end-to-end for both built-in and custom model invocations.
