Building AI That Works for Everyone: How Startups Are Tackling Bias and Accessibility Early On

By Yagmur Es | Accessibility

AI is everywhere, but it doesn’t work equally well for everyone.

With AI ethics research still largely dominated by Western authors (Stanford AI Index, 2023) and more than 1 billion people worldwide living with disabilities (WHO, 2023), innovative AI startups, like those in our Silicon Allee portfolio, are helping to shift tech’s center of gravity toward more inclusive, usable, and global solutions.

From language models that misunderstand accents to legal tools that miss cultural nuance, the choices made in AI development can reinforce existing inequalities. And while Big Tech often dominates conversations around ethics and responsibility, some of the most thoughtful approaches to inclusive AI are emerging from a different place: the startup world.

In this short interview, we asked two AI-powered startups in the current Silicon Allee portfolio at Fraunhofer HHI how they’re addressing two critical challenges in AI development: bias and accessibility.

Their answers reveal something hopeful: with the right questions, tools, and collaborators, ethical innovation isn’t a blocker; it’s a foundation.

🎻 NepTune: Rethinking Music Education Through Inclusive Design

NepTune is an AI-powered music education platform designed to bring joyful, personalized learning to kids. By combining AI with motion-detection technology to correct posture and give feedback, they’ve turned music learning into an interactive game personalized for each student’s experience! But their commitment to accessibility runs far deeper than sound or visuals: it’s embedded in how the system responds to different bodies, abilities, and learning styles.

“Accessibility means equal opportunity to engage, enjoy, and grow through music education, regardless of age, ability, neurotype, or socioeconomic background.”

Aleyna Tunca, Co-Founder & CPO @NepTune

Initially, they assumed kids and educators would prefer a mobile app. But through real-world testing in classrooms, they realized that browser access was critical (especially on aging school hardware). That pivot required a technical overhaul: a lightweight, WebGL-based interface that could run on low-end devices without downloads.

One of their biggest lessons came from watching how children physically engage with instruments.

“Young violin learners with limited right-arm mobility kept triggering error messages. That forced us to rethink our model’s rigidity. We taught it to recognize a range of safe, correct variations—adding empathy into our algorithm.”

Aleyna Tunca, Co-Founder & CPO @NepTune

This kind of responsive design thinking is central to their approach. To refine how their product responds to users in real time, they teamed up with the Motion Capture team at Fraunhofer HHI.

Motor learning algorithms work best when they can differentiate between acceptable movement variations and actual errors. A review of current approaches shows that successful systems explicitly model these variations rather than treating them as mistakes (Caramiaux et al., 2020).
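
To make that idea concrete, here is a minimal sketch of the approach, using hypothetical joint names and thresholds rather than NepTune’s actual model: instead of enforcing a single “correct” pose, each joint gets a band of safe variation, and feedback triggers only when a measurement leaves that band.

```python
from dataclasses import dataclass

@dataclass
class JointTolerance:
    name: str             # tracked joint from a pose-estimation skeleton
    target_deg: float     # nominal "textbook" angle
    tolerance_deg: float  # accepted deviation before feedback triggers

# Widened bands encode the range of safe, correct variations
# (all values here are illustrative, not NepTune's calibration).
BOW_ARM_PROFILE = [
    JointTolerance("right_elbow", target_deg=95.0, tolerance_deg=20.0),
    JointTolerance("right_wrist", target_deg=10.0, tolerance_deg=15.0),
]

def assess_pose(measured: dict[str, float], profile=BOW_ARM_PROFILE) -> list[str]:
    """Return feedback only for joints that leave their accepted band."""
    feedback = []
    for joint in profile:
        angle = measured.get(joint.name)
        if angle is None:
            continue  # joint occluded or untracked: don't punish the learner
        if abs(angle - joint.target_deg) > joint.tolerance_deg:
            feedback.append(f"{joint.name}: adjust toward ~{joint.target_deg:.0f} degrees")
    return feedback

# A learner whose elbow sits 15 degrees off the nominal angle stays inside
# the band, so no error message fires:
print(assess_pose({"right_elbow": 110.0, "right_wrist": 5.0}))  # -> []
```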

Instead of relying on assumptions or commercial motion data, they’ve been able to test and adjust their technology using precise, research-grade tools. That means faster iteration, better calibration, and ultimately, a product that feels more intuitive and responsive. We took a closer look at this collaboration in this article.

NepTune regularly runs playtests with diverse learners and interviews educators who teach in inclusive classrooms. Feedback around accessibility or fairness is not only welcomed; it’s structurally prioritized.

“We’re doing our job when a child smiles because they realize they’re actually getting better at playing. That’s what tech for good really looks like to us.”

Aleyna Tunca, Co-Founder & CPO @NepTune

⚖️ Anita: Making AI in Legal Tech Understandable and Trustworthy

Anita is building explainable AI tools that help legal professionals understand and trust algorithmic decisions. Their machine learning–powered tools streamline time-consuming legal tasks like document review and research, giving lawyers more time to focus on high-impact, strategic work. The mission is ambitious: make the law more accessible, transparent, and reliable. At its core is the belief that legal professionals, and the people they serve, deserve tools they can trust.

“Tech is a tool, and good is a representation of our values,” the team explains. “For us, that means giving trustworthy answers: no fake citations, no hallucinated court decisions.”

Til Martin Bußmann-Welsch, CXO & Co-Founder of Anita

That’s where their collaboration with Fraunhofer HHI’s Explainable AI group comes in. Anita uses explainable AI techniques to show exactly how the system arrives at its conclusions: referencing original documents and enabling users to trace the logic. This is particularly crucial in law, where lawyers remain liable for what they submit.

In a landscape where 43% of AI leaders cite trust and transparency concerns as a key barrier to adoption (IBM Global AI Adoption Index, Enterprise Report), Anita’s work on explainable, accountable legal tech, supported by Fraunhofer HHI’s AI researchers, stands out as both timely and essential.

By connecting directly with researchers who specialize in making AI decisions interpretable, Anita is grounding its product development in the latest thinking around transparency, bias mitigation, and model understanding. For a tool that could shape legal outcomes, that’s not a nice-to-have — it’s essential.

They also go beyond traditional accuracy metrics. Their team discovered that large language models often miss long, relevant passages unless the text is carefully chunked. That insight shaped Anita’s core architecture: ensuring long legal documents are split in a way the model can actually reason through.
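
As a concrete illustration of that idea, here is a minimal chunking sketch (our example, not Anita’s actual pipeline): it splits on paragraph boundaries so no passage is cut mid-argument, and carries the last paragraph of each chunk into the next so text that straddles a boundary is still seen whole.

```python
def chunk_document(text: str, max_chars: int = 2000, overlap_paras: int = 1) -> list[str]:
    """Split a long document on paragraph boundaries with overlap.

    Chunks may exceed max_chars by the carried overlap; that slack is
    fine for a sketch and keeps the logic simple.
    """
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[list[str]] = []
    current: list[str] = []
    size = 0
    for para in paragraphs:
        if current and size + len(para) > max_chars:
            chunks.append(current)
            current = current[-overlap_paras:]  # carry context forward
            size = sum(len(p) for p in current)
        current.append(para)
        size += len(para)
    if current:
        chunks.append(current)
    return ["\n\n".join(c) for c in chunks]
```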

Anita also emphasizes user control over AI behavior. By using techniques that direct the model toward specific data inputs (rather than relying solely on its general training), they reduce hallucinations and improve factual reliability.
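
One common way to implement that kind of grounding, sketched below with hypothetical names rather than Anita’s actual system, is to pass only retrieved source passages into the prompt and require the model to cite them, so every claim stays traceable to a document:

```python
def build_grounded_prompt(question: str, passages: list[dict]) -> str:
    """Constrain the model to retrieved sources and require citations.

    `passages` are pre-retrieved excerpts, e.g.
    {"id": "doc-12, p. 4", "text": "..."} (ids are illustrative).
    """
    sources = "\n\n".join(f"[{p['id']}]\n{p['text']}" for p in passages)
    return (
        "Answer using ONLY the sources below. Cite the source id in "
        "brackets after each claim. If the sources do not contain the "
        "answer, say so instead of guessing.\n\n"
        f"SOURCES:\n{sources}\n\nQUESTION: {question}"
    )
```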

“If empathy were a core value, our AI’s personality would be: trustworthy. It shows its work, and it doesn’t pretend to know what it doesn’t.”

Til Martin Bußmann-Welsch, CXO & Co-Founder of Anita

For Anita, accessibility isn’t just about the user interface: it’s about understandability and agency. Can the user clearly see how a conclusion was reached? Can they override it? If not, the system risks doing harm in ways that aren’t always visible.

AI doesn’t have to be a black box—or a blunt tool. As the founders in our program show, building AI with empathy, context, and accessibility at its core leads to smarter, more inclusive outcomes. Whether it’s rethinking how algorithms learn movement or making legal systems easier to navigate, these efforts remind us that thoughtful design choices can have real-world impact—especially for those often overlooked by mainstream tech.

Silicon Allee is the startup department at Fraunhofer HHI, one of Germany’s most renowned AI research institutes. Our work supports founders who are not only building game-changing deep tech, but doing so with a sharp eye on fairness, usability, and real-world impact.

These kinds of collaborations don’t happen by accident; they’re part of how we work with early-stage founders. At Silicon Allee, together with Fraunhofer HHI, we help startups like NepTune and Anita tap into real research, get hands-on support, and turn ambitious ideas into real-world impact. Discover our current portfolio: https://www.siliconallee.com/founders/#portfolio

Got something in the works?