Portkey supports the AWS Bedrock CountTokens API, which returns the token count for a given inference request without invoking the model. The count matches the token count that would be charged if the same input were sent to the model in an InvokeModel or Converse request, so you can estimate costs before sending requests, optimize prompts to fit within token limits, and plan for token usage in your applications.

Using Count Tokens with Portkey

Portkey supports Bedrock’s CountTokens endpoint through the Anthropic-compatible format, allowing you to use the same API signature across Bedrock, Vertex AI, and Anthropic providers.
import anthropic

client = anthropic.Anthropic(
    api_key="dummy",  # not used; Portkey authenticates via the header below, and the provider is selected by the slug in the model name
    default_headers={"x-portkey-api-key": "YOUR_PORTKEY_API_KEY"},
    base_url="https://api.portkey.ai/v1"
)

response = client.messages.count_tokens(
    model="@your-bedrock-provider-slug/your-model-name",
    system="You are a scientist",
    messages=[{
        "role": "user",
        "content": "Hello, Claude"
    }],
)

print(response.input_tokens)  # number of input tokens the request would consume

Use Cases

  • Estimate costs before sending inference requests
  • Optimize prompts to fit within token limits
  • Plan for token usage in your applications
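
Once you have a count, you can apply it before sending the real request. A minimal sketch of the first two use cases, where `estimate_input_cost` and `fits_budget` are hypothetical helpers and the per-1K-token price is illustrative, not a real Bedrock price:

```python
def estimate_input_cost(input_tokens: int, price_per_1k_tokens: float) -> float:
    """Estimate input cost in USD from a token count and a per-1K-token price."""
    return input_tokens / 1000 * price_per_1k_tokens

def fits_budget(input_tokens: int, max_input_tokens: int) -> bool:
    """Check whether a counted prompt fits within a token limit."""
    return input_tokens <= max_input_tokens

# e.g. count = response.input_tokens from the count_tokens call above
count = 1200
cost = estimate_input_cost(count, 0.003)  # 0.003 is an assumed, illustrative price
if fits_budget(count, 2000):
    print(f"Estimated input cost: ${cost:.4f}")
else:
    print("Prompt exceeds the token budget; trim it before sending.")
```

Because the returned count matches what InvokeModel or Converse would charge, a check like this can gate requests before any model invocation occurs.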