I have a script that extracts tables from Excel files and sends them, along with a prompt, to Claude 3.5 through AWS Bedrock for classification. Recently I moved the script to AWS, and now I'm getting different results: one table classifies correctly when I run the script locally, but it always misclassifies on AWS, even though the code and inputs should be identical. The strange part is that the misclassification is consistent across multiple runs in the cloud, so it doesn't look like random sampling noise. What could be causing this discrepancy? Are prompts read differently on AWS than locally? Could the way I'm processing the table on AWS affect the outcome? I'm passing the tables as string representations. Any insights would be really helpful!
3 Answers
Are you using the InvokeModel or Converse API for your calls? That might affect how the model interprets and responds to your input.
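For context, the two APIs shape requests quite differently, which is why it matters. Here's a rough sketch of both call shapes; the model ID, token limit, and prompt are illustrative assumptions, not details from this thread:

```python
import json

# Hypothetical model ID and prompt, for illustration only.
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"
prompt = "Classify the following table: ..."

# InvokeModel takes a model-specific JSON body; for Claude that means
# the Anthropic Messages format, serialized by the caller.
invoke_model_body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [{"role": "user", "content": prompt}],
})

# Converse uses one model-agnostic request schema for every supported
# model, passed as keyword arguments rather than a serialized body.
converse_kwargs = {
    "modelId": MODEL_ID,
    "messages": [{"role": "user", "content": [{"text": prompt}]}],
    "inferenceConfig": {"maxTokens": 512},
}

# With a boto3 bedrock-runtime client the actual calls would look like:
#   client.invoke_model(modelId=MODEL_ID, body=invoke_model_body)
#   client.converse(**converse_kwargs)
```

Either way, if the request bytes differ between environments, the model's answer can differ too.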
I figured out my issue! I was using special symbols in my prompt like "✓", "✗", and "→". They were encoded correctly in Lambda but garbled locally, so my local script was actually sending a corrupted prompt, even though it happened to return the correct classification.
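A minimal sketch of the kind of mismatch I mean: a prompt saved as UTF-8 but read back with a legacy default encoding turns each symbol into multi-character mojibake, so the model silently sees different text. The specific encodings here are illustrative:

```python
# Prompt containing non-ASCII symbols.
prompt = "✓ include  ✗ exclude  → maps to"

utf8_bytes = prompt.encode("utf-8")

# Correct round trip (Lambda's Python runtime defaults to UTF-8):
assert utf8_bytes.decode("utf-8") == prompt

# Incorrect round trip, e.g. a local machine whose default file
# encoding is Latin-1: every symbol becomes mojibake.
garbled = utf8_bytes.decode("latin-1")
print(garbled != prompt)  # → True: "✓" becomes "â\x9c\x93", etc.
```

The fix is to pin the encoding explicitly when reading prompt files, e.g. `open(path, encoding="utf-8")`, instead of relying on the platform default.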
It sounds like you might be using different environments in AWS Lambda versus your local setup. Are you absolutely sure the input and processing are identical? Sometimes small differences in how data is read or transformed can lead to unexpected results. Also, check whether you're using different API endpoints or inference settings in AWS, since those could affect the classification.
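One way to rule out input differences is to log a digest of the exact bytes you send to the model in both environments and compare them. A sketch, where the table text is a stand-in for however you serialize your tables:

```python
import hashlib

def prompt_digest(prompt: str) -> str:
    """Return a hex SHA-256 of the exact bytes sent to the model."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

# Stand-in for your real table serialization.
table_text = "col_a | col_b\n1 | 2"
prompt = f"Classify this table:\n{table_text}"

# Log this locally and in Lambda; if the digests differ, the inputs
# are not identical, regardless of how similar they look when printed.
print(prompt_digest(prompt))
```

If the digests match but the classifications still differ, the discrepancy is on the request/settings side rather than the input side.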
Yeah, that’s a good point. Double-check the execution environment and any dependencies you have. Even minor discrepancies can cause variance in classification results.

I'm using InvokeModel. Can you tell me the biggest differences between that and Converse?