Amazon Bedrock

Coralogix's AI Observability integrations enable organizations to gain deep insight into their AI applications, helping them monitor, analyze, and optimize performance across the stack. Through integrations with Amazon Bedrock, Coralogix delivers end-to-end visibility into AI workloads, supporting proactive issue detection and efficient performance tuning.

Overview

This library offers customized OpenTelemetry instrumentation for the Amazon Bedrock SDK, optimized to support large language model (LLM) application development with streamlined integration, detailed production tracing, and effective debugging capabilities.

Requirements

  • Python 3.8 or above.
  • A Coralogix API key.

Installation

Run the following command:

pip install llm-tracekit[bedrock]

Authentication

Authentication details are passed when defining the OTel span exporter:

  1. Select the endpoint associated with your Coralogix domain.
  2. Use your customized API key in the authorization request header.
  3. Provide the application and subsystem names.

from llm_tracekit import setup_export_to_coralogix

setup_export_to_coralogix(
    coralogix_token=<your_coralogix_token>,
    coralogix_endpoint=<your_coralogix_endpoint>,
    service_name="ai-service",
    application_name="ai-application",
    subsystem_name="ai-subsystem",
    capture_content=True,
)

Note

All of the authentication parameters can also be provided through environment variables (CX_TOKEN, CX_ENDPOINT, etc.).

Usage

This section describes how to set up instrumentation for Amazon Bedrock.

Set up tracing

Automatic

Use the setup_export_to_coralogix function to set up tracing and export traces to Coralogix. See the code snippet in the Authentication section.

Manual

Alternatively, you can set up tracing manually.

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

tracer_provider = TracerProvider(
    resource=Resource.create({SERVICE_NAME: "ai-service"}),
)
exporter = OTLPSpanExporter()
span_processor = SimpleSpanProcessor(exporter)
tracer_provider.add_span_processor(span_processor)
trace.set_tracer_provider(tracer_provider)
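
If you export directly to Coralogix with this manual setup, the exporter needs the endpoint and authorization header described in the Authentication section. A minimal sketch, assuming a gRPC OTLP endpoint and a Bearer token in the authorization header (confirm the exact values for your Coralogix domain):

exporter = OTLPSpanExporter(
    # Placeholder: the OTLP endpoint for your Coralogix domain
    endpoint="<your_coralogix_endpoint>",
    # Assumption: the API key is sent as a Bearer token in the
    # authorization request header
    headers={"authorization": "Bearer <your_coralogix_token>"},
)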

Instrument

To instrument all clients, call the instrument method.

from llm_tracekit import BedrockInstrumentor

BedrockInstrumentor().instrument()

Uninstrument

To uninstrument clients, call the uninstrument method.

BedrockInstrumentor().uninstrument() 

Full example

from llm_tracekit import BedrockInstrumentor, setup_export_to_coralogix
import boto3

# Optional: Configure sending spans to Coralogix
# Reads Coralogix connection details from the following environment variables:
# - CX_TOKEN
# - CX_ENDPOINT
setup_export_to_coralogix(
    service_name="ai-service",
    application_name="ai-application",
    subsystem_name="ai-subsystem",
    capture_content=True,
)

# Activate instrumentation
BedrockInstrumentor().instrument()

# Example Bedrock Usage
client = boto3.client("bedrock-runtime")
result = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    system=[{"text": "you are a helpful assistant"}],
    messages=[{"role": "user", "content": [{"text": "Write a short poem on open telemetry."}]}],
    inferenceConfig={
        "maxTokens": 300,
        "temperature": 0,
        "topP": 1,
    }
)
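
The converse response carries the generated message under the output key (standard boto3 converse response shape), so a quick way to check the result:

# Print the generated text from the model response
print(result["output"]["message"]["content"][0]["text"])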

Enable message content capture

By default, message content, such as the contents of prompts, completions, function arguments, and return values, is not captured. To capture message content as span attributes, you can either:

  • Pass capture_content=True when calling setup_export_to_coralogix, or

  • Set the environment variable OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true.

Most Coralogix AI evaluations require message contents to function properly, so enabling message capture is strongly recommended.
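
For example, a minimal sketch of the environment variable approach (assumption: the variable must be set before the instrumentor is activated so the instrumentation picks it up):

import os

# Assumption: set before BedrockInstrumentor().instrument() is called
os.environ["OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT"] = "true"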

Key difference from OpenTelemetry

User prompts and model responses are captured as span attributes instead of log events, as detailed below.

Semantic conventions

Prompt and completion attributes

| Attribute | Type | Description | Example |
| --- | --- | --- | --- |
| gen_ai.prompt.<message_number>.role | string | Role of message author for user message | system, user, assistant, tool |
| gen_ai.prompt.<message_number>.content | string | Contents of user message | What's the weather in Paris? |
| gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.id | string | ID of tool call in user message | call_O8NOz8VlxosSASEsOY7LDUcP |
| gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.type | string | Type of tool call in user message | function |
| gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.function.name | string | The name of the function used in tool call within user message | get_current_weather |
| gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.function.arguments | string | Arguments passed to the function used in tool call within user message | {"location": "Seattle, WA"} |
| gen_ai.prompt.<message_number>.tool_call_id | string | Tool call ID in user message | call_mszuSIzqtI65i1wAUOE8w5H4 |
| gen_ai.completion.<choice_number>.role | string | Role of message author for choice in model response | assistant |
| gen_ai.completion.<choice_number>.finish_reason | string | Finish reason for choice in model response | stop, tool_calls, error |
| gen_ai.completion.<choice_number>.content | string | Contents of choice in model response | The weather in Paris is rainy and overcast, with temperatures around 57°F |
| gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.id | string | ID of tool call in choice | call_O8NOz8VlxosSASEsOY7LDUcP |
| gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.type | string | Type of tool call in choice | function |
| gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.function.name | string | The name of the function used in tool call within choice | get_current_weather |
| gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.function.arguments | string | Arguments passed to the function used in tool call within choice | {"location": "Seattle, WA"} |

Bedrock-specific attributes

| Attribute | Type | Description | Example |
| --- | --- | --- | --- |
| gen_ai.bedrock.agent_alias.id | string | The ID of the agent alias in an invoke_agent call | user@company.com |
| gen_ai.bedrock.request.tools.<tool_number>.function.name | string | The name of the function to use in tool calls | get_current_weather |
| gen_ai.bedrock.request.tools.<tool_number>.function.description | string | Description of the function | Get the current weather in a given location |
| gen_ai.bedrock.request.tools.<tool_number>.function.parameters | string | JSON describing the schema of the function parameters | {"type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}, "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}}, "required": ["location"]} |
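
As a rough illustration (hypothetical values; the exact attributes depend on the model response), the converse call in the full example above would yield prompt and completion attributes along these lines:

gen_ai.prompt.0.role: system
gen_ai.prompt.0.content: you are a helpful assistant
gen_ai.prompt.1.role: user
gen_ai.prompt.1.content: Write a short poem on open telemetry.
gen_ai.completion.0.role: assistant
gen_ai.completion.0.finish_reason: stop
gen_ai.completion.0.content: <the generated poem>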

Supported endpoints

The library supports the following endpoints:

  • invoke_model – Compatible with LLaMA and Anthropic Claude for standard model invocation (see the sketch after this list).

  • invoke_model_with_response_stream – Enables streaming responses from LLaMA and Anthropic Claude.

  • converse – Facilitates stateful interactions with LLMs.

  • converse_stream – Supports streaming for stateful conversation sessions.

  • invoke_agent – Executes agent-based workflows.
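
As an example of the first endpoint, here is a hedged sketch of an instrumented invoke_model call with an Anthropic Claude payload (the body format follows the Anthropic Messages API on Bedrock; other model families use different request bodies):

import json

import boto3

client = boto3.client("bedrock-runtime")

# Once BedrockInstrumentor().instrument() has been called, this call
# is traced the same way as the converse example above.
response = client.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 300,
        "messages": [
            {"role": "user", "content": "Write a short poem on open telemetry."}
        ],
    }),
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])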