AI Safety Research

Building Culturally-Aware AI Safety Frameworks

While most AI safety research focuses on universal principles, we're tackling the critical gap: How can AI be truly safe when deployed across diverse cultural contexts?

Publications

Our research contributions to culturally-aware AI

Position Paper October 2025

Beyond Language: Reframing LLMs in Africa Through Contextual Grounding

Authors: Gilbert Kiplangat Korir, Caroline Heta, Alison Okatch, Ibrahim Fadhili, Irene Jebet Korir, Moses Muiruri Njau, Asbel Rotich Kibet

This position paper reframes how AI development in Africa is approached. Current efforts emphasize language-inclusion training for Large Language Models (LLMs) to speak African languages such as Swahili, Yoruba, and Zulu. However, linguistic inclusion alone does not ensure contextual understanding. We propose a multidimensional Framework of African Contextual Dimensions—cultural-linguistic, socioeconomic, historical-political, and epistemic—to guide the design of contextually grounded AI systems.

African NLP Contextual AI Cultural Alignment Ubuntu Ethics

More Publications Coming Soon

We're actively working on multiple research tracks and will publish our findings as they mature.

Research Focus

We're pioneering Constitutional AI for Cultural Safety—building AI systems that understand cultural context as a first-class requirement.

Cultural-Grounded Constitutions

Developing the AkiliX Constitution—a safety framework for East African contexts with context-aware refusal mechanisms and culturally-appropriate harm prevention.

Alignment for Regional Values

Exploring collective vs individual safety paradigms, researching community-harm prevention, and developing cultural value embeddings based on Ubuntu principles.

Contextual Safety Benchmarks

Building East African safety evaluation datasets, developing cultural bias detection tools, and creating context-aware red teaming frameworks.

Why This Research Matters

Safety is contextual. A model that's safe in San Francisco may be dangerously naive in Nairobi.

The Problem

Well-funded research for Western contexts

Extensive publications & frameworks

Almost zero work on African cultural safety

No context-aware alignment research

Missing cultural harm prevention

AI safety frameworks that don't understand context

Our Opportunity

1

First-Mover Advantage

in cultural AI safety research

2

Real-World Impact

immediately deployable solutions

3

Global Relevance

solving problems affecting 1.4 billion people

Our Research Approach

How we build culturally-aware AI safety

Empirical First

Build, test, iterate—then publish

Context-Grounded

Research driven by real African use cases

Open Development

Sharing learnings as we build

Join Our Research Community

We're building in the open and seeking research partners to embed African context into every stage of the AI pipeline—from dataset creation to deployment.

Join the Movement