An open-source, self-hostable AI observability platform for tracing, evaluating, and improving LLM applications at any scale.
Phoenix is open source and built on OpenTelemetry and OpenInference. It captures detailed, vendor-agnostic traces across LLM calls, retrieval, tool usage, and agent workflows, and includes LLM-based evaluators, code-based checks, human annotation, and a prompt engineering environment. It is fully self-hostable on Docker or Kubernetes with no feature gates, and is also available as a managed offering, Phoenix Cloud.
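To illustrate the self-hosting path mentioned above, a minimal single-container deployment might look like the following sketch. The image name and the UI port are assumptions based on Phoenix's public Docker distribution; check the current Phoenix documentation before relying on them.

```shell
# Minimal self-hosted Phoenix sketch (image name and default UI port 6006
# are assumptions; verify against the current Phoenix docs).
docker run -d \
  --name phoenix \
  -p 6006:6006 \
  arizephoenix/phoenix:latest

# The Phoenix UI should then be reachable at http://localhost:6006,
# and applications can export OpenTelemetry traces to this instance.
```

A Kubernetes deployment would follow the same pattern, exposing the same container image behind a Service instead of a published host port.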
Self-hosted LLM monitoring for data residency requirements
Evaluation of RAG pipelines
Cost tracking across LLM providers
Collaborative prompt engineering
Reduced cost through LLM usage visibility
Faster quality regression identification
Improved reliability via prompt testing