I'm curious whether it's feasible to set up Azure API Management (APIM) in front of an Anthropic model hosted on Foundry. I've had success using APIM with OpenAI models, but I'm running into errors like "not supported" or "resource not found" when attempting to connect it to a Claude model. What parameters or settings do I need to adjust to make this work? Thanks in advance for any help!
3 Answers
Looks like someone has some deep pockets!
I encountered similar issues when routing Anthropic through APIM. The problem boils down to Claude's API using a different path and auth header than OpenAI's. The default OpenAI policy in APIM rewrites the path to `/openai/deployments/...`, which doesn't exist for a Foundry-hosted Claude model. What worked for us was creating a new API definition in APIM that targets the AI Foundry endpoint directly: set the backend URL to the full inference endpoint (e.g. `https://your-foundry.services.ai.azure.com/models/chat/completions`), use a `set-header` policy to send `api-key` instead of a bearer token, and remove any OpenAI-specific path rewrites. The "not supported" error may mean APIM is still appending the OpenAI `api-version` query parameter, so a `set-query-parameter` policy that deletes it can help.
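To sanity-check what APIM should be forwarding, it can help to call the Foundry endpoint directly and confirm the request shape works outside APIM first. Here's a minimal sketch assuming the chat-completions endpoint above; the resource name, deployment name, and `FOUNDRY_API_KEY` environment variable are placeholders for your own values:

```python
import os

import requests

# Placeholder endpoint -- substitute your own Foundry resource.
# This mirrors what APIM should forward after the policy changes:
# the Foundry inference path, an api-key header, and no OpenAI-style
# api-version query parameter.
ENDPOINT = "https://your-foundry.services.ai.azure.com/models/chat/completions"
API_KEY = os.environ["FOUNDRY_API_KEY"]  # assumed env var holding your key

response = requests.post(
    ENDPOINT,
    headers={
        "api-key": API_KEY,  # api-key header, not "Authorization: Bearer ..."
        "Content-Type": "application/json",
    },
    json={
        "model": "claude-deployment-name",  # hypothetical deployment name
        "messages": [{"role": "user", "content": "Say hello."}],
        "max_tokens": 128,
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

If this direct call succeeds but the same request through APIM fails, the difference is almost always a leftover rewrite in the policy.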
I appreciate your input! I'll definitely check those settings.
Getting it to work is definitely possible, but there are key differences between the Anthropic and OpenAI endpoints. An API definition imported for OpenAI rewrites requests into URL paths and headers that the Foundry backend doesn't serve, which is often the reason for the "resource not found" error. Also make sure the model is actually deployed on your Foundry instance, as that's easily overlooked. Some users even opt to manage routing outside of APIM, using a simple proxy or serverless function instead of trying to make the OpenAI setup fit.
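For the proxy route, here's a minimal sketch of what that could look like: a tiny standard-library HTTP server that forwards request bodies to the Foundry endpoint and injects the key server-side. The upstream URL and `FOUNDRY_API_KEY` variable are placeholders, and a real deployment would add TLS, timeouts, and streaming support:

```python
import os
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder upstream -- substitute your own Foundry inference endpoint.
UPSTREAM = "https://your-foundry.services.ai.azure.com/models/chat/completions"
API_KEY = os.environ["FOUNDRY_API_KEY"]  # assumed env var holding your key

class ClaudeProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the client's JSON body and forward it unchanged,
        # attaching the api-key header so clients never hold the key.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        upstream_req = urllib.request.Request(
            UPSTREAM,
            data=body,
            headers={"api-key": API_KEY, "Content-Type": "application/json"},
            method="POST",
        )
        try:
            with urllib.request.urlopen(upstream_req) as upstream_resp:
                status, payload = upstream_resp.status, upstream_resp.read()
        except urllib.error.HTTPError as err:
            # Pass upstream errors straight back to the caller.
            status, payload = err.code, err.read()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ClaudeProxy).serve_forever()
```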
Thanks for the insight! I'll dive into that.

Thanks a lot! I tried it but haven't had success yet; maybe I messed up somewhere. Did you set the API definition base to HTTP or Foundry? 🙂