| Participation considerations from adversely impacted groups, including protected classes, in model design and testing: |
None |
| Bias Metric (If Measured): |
BBQ Accuracy Scores in Ambiguous Contexts |
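As context for this metric, the following is an illustrative sketch (not the card's actual evaluation pipeline) of how accuracy in BBQ's ambiguous contexts can be computed: in the ambiguous setting the correct answer is the "unknown" option, so accuracy is the fraction of ambiguous items where the model selects the labeled answer. The record field names (`context_condition`, `label`, `prediction`) are assumptions for illustration.

```python
def bbq_ambiguous_accuracy(examples):
    """Accuracy over ambiguous-context BBQ items only.

    Each example is assumed to be a dict with:
      - "context_condition": "ambig" or "disambig"
      - "label": index of the correct answer option
      - "prediction": index of the model's chosen option
    """
    ambig = [ex for ex in examples if ex["context_condition"] == "ambig"]
    if not ambig:
        return 0.0
    correct = sum(ex["prediction"] == ex["label"] for ex in ambig)
    return correct / len(ambig)

# Toy usage with hypothetical records: one correct and one incorrect
# ambiguous item; the disambiguated item is excluded from this score.
examples = [
    {"context_condition": "ambig", "label": 2, "prediction": 2},
    {"context_condition": "ambig", "label": 2, "prediction": 0},
    {"context_condition": "disambig", "label": 1, "prediction": 1},
]
print(bbq_ambiguous_accuracy(examples))  # 0.5
```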
| Which characteristic (feature) show(s) the greatest difference in performance?: |
The model shows high variance across characteristics when run at a high temperature.
| Which feature(s) have the worst performance overall? |
Physical Appearance |
| Measures taken to mitigate against unwanted bias: |
Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF) were employed to calibrate the model's reasoning capabilities, maintaining logical consistency and appropriate complexity when interacting with or interpreting data from diverse age demographics.
| If using internal data, description of methods implemented in data acquisition or processing, if any, to address the prevalence of identifiable biases in the training, testing, and validation data: |
The training datasets contain a large amount of synthetic data generated by LLMs; the prompts used for generation were manually curated.
| Tools used to assess statistical imbalances and highlight patterns that may introduce bias into AI models: |
BBQ |
These datasets, such as web-scraped finance reasoning data, do not represent all demographic groups, either exhaustively or proportionally. For instance, approximately 97% to 99% of samples contain no explicit mention of age, gender, or ethnicity. Finance reasoning data scraped from SEC EDGAR shows a notable representational skew: ethnicity mentions are dominated by Middle Eastern contexts (found in finance documents), while gender is explicitly mentioned in only 0.9% of samples (including Male-only, Female-only, and Both). To mitigate these imbalances, we recommend evaluation techniques such as bias audits and fine-tuning with demographically balanced datasets, as well as mitigation strategies such as counterfactual data augmentation, to align with the desired model behavior. This evaluation used a 3,000-sample subset per dataset, identified as the optimal threshold for maximizing embedder accuracy.
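To illustrate how a mention-prevalence figure like the 0.9% gender rate above might be measured, here is a minimal keyword-matching sketch. The term list and the approach (token-set overlap) are illustrative assumptions, not the card's actual methodology, which may use more sophisticated classifiers.

```python
import re

# Hypothetical, deliberately small list of explicit gender terms;
# a real audit would use a much broader, validated lexicon.
GENDER_TERMS = {"he", "she", "male", "female", "man", "woman"}

def gender_mention_rate(samples):
    """Fraction of text samples containing any explicit gender term."""
    hits = 0
    for text in samples:
        # Tokenize to lowercase alphabetic words to avoid substring
        # false positives (e.g. "she" inside "shelter").
        tokens = set(re.findall(r"[a-z]+", text.lower()))
        if tokens & GENDER_TERMS:
            hits += 1
    return hits / len(samples) if samples else 0.0

# Toy usage on two hypothetical finance-style samples.
samples = [
    "The company reported quarterly revenue growth.",
    "She serves as the CFO of the firm.",
]
print(gender_mention_rate(samples))  # 0.5
```

The same pattern extends to age or ethnicity terms by swapping the lexicon, and running it over a fixed-size subset (e.g. 3,000 samples per dataset) yields per-dataset prevalence estimates.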
| Unwanted Bias Testing: |
Testing was constrained to English-language inputs; multilingual parity is not currently claimed or guaranteed.