The Model Performance module, located under the AI Platform section in Axoma, is designed to help users evaluate, track, and manage the performance of AI models. It provides insights into how different models perform across various evaluation parameters, enabling better decision-making and optimization.
This module serves as a centralized dashboard to:
    1. Create and manage model evaluation suites (test configurations).
    2. Compare model outputs against defined metrics.
    3. Track evaluation results and performance trends.
    4. Maintain historical performance records for analysis and improvement.
The + Create Suit button (top-right corner) enables users to initiate a new model performance evaluation setup.
Upon clicking this button, users can:
    1. Define evaluation parameters and configurations.
    2. Select models for testing.
    3. Assign evaluators or data sources.
    4. Save the suite as a draft or execute it directly for results (see the configuration sketch after this list).
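Suite setup happens entirely in the Axoma UI, but it can help to see what a suite conceptually bundles together. The following is a minimal sketch only; every field name is an assumption for illustration, not Axoma's actual schema:

```python
# Illustrative sketch of an evaluation suite configuration.
# All field names and values are assumptions, not Axoma's actual schema.
evaluation_suite = {
    "name": "language-model-comparison",
    "parameters": {                       # evaluation parameters and configurations
        "metrics": ["accuracy", "latency"],
        "max_samples": 500,
    },
    "models": ["gpt-4", "claude-3", "mistral-large"],  # models selected for testing
    "data_source": "qa-benchmark-v2",     # assigned evaluator or dataset
    "status": "draft",                    # saved as a draft, or "run" to execute directly
}
```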

Functional Flow

    1. Access the Module: Navigate to Global Settings → AI Platform → Model Performance.
    2. View Evaluation Suites: Browse or search existing evaluation suites.
    3. Create New Suite: Click + Create Suit to define and configure a new evaluation setup.
    4. Run Evaluations: Execute the configured suites to assess model performance.
    5. Review Results: Analyze the displayed outcomes in the Result column (the full sequence is sketched below).
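The flow above is carried out through the UI. Purely as an orientation aid, the same sequence can be expressed against a hypothetical client; Axoma does not document a public API for this module, so none of the function names below are real:

```python
# Hypothetical client sketch: Axoma exposes this flow through its UI, and
# none of the names below come from a documented Axoma API.
def run_model_performance_flow(client, suite_config):
    """Mirror the module's functional flow: view, create, run, review."""
    print(f"Existing suites: {len(client.list_suites())}")  # step 2: browse suites
    suite = client.create_suite(suite_config)                # step 3: create new suite
    run = client.execute_suite(suite.id)                     # step 4: run evaluations
    return client.get_results(run.id)                        # step 5: review results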

Example Scenario

A data scientist creates a new evaluation suite to compare multiple language models (e.g., GPT, Claude, Mistral) against a predefined dataset. After execution, the Model Performance module displays accuracy and latency results, enabling the team to select the best-performing model for production deployment.
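To make the example concrete, here is a minimal sketch of how accuracy and latency could be computed for such a comparison. It assumes a `query_model(model, prompt)` helper that calls each model and returns its answer as a string; the helper, like everything else here, is illustrative and not part of Axoma:

```python
import time

def evaluate(models, dataset, query_model):
    """Compare models on a labelled dataset, recording accuracy and mean latency.

    `query_model(model, prompt)` is an assumed helper that returns the
    model's answer as a string; it is not part of Axoma.
    """
    results = {}
    for model in models:
        correct, latencies = 0, []
        for prompt, expected in dataset:
            start = time.perf_counter()
            answer = query_model(model, prompt)
            latencies.append(time.perf_counter() - start)
            correct += int(answer.strip() == expected)
        results[model] = {
            "accuracy": correct / len(dataset),
            "mean_latency_s": sum(latencies) / len(latencies),
        }
    return results
```

Sorting `results` by accuracy, breaking ties on latency, then yields the production candidate, mirroring how the team in the example would read the module's Result column.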