How Included AI Built Trust in AI: Their Strategy for Making Black Box Technology Transparent
In an era where AI companies often prioritize automation over transparency, Included AI has taken a fundamentally different approach. In a recent episode of Category Visionaries, CEO Raghu Gollamudi revealed their strategy for making AI trustworthy and effective in the sensitive domain of HR decisions.
The Problem with Black Box AI
When it comes to decisions about people’s careers and livelihoods, blind trust in AI isn’t good enough. Included AI recognized this early, leading them to develop a unique philosophy about AI’s role in HR. As Raghu explained, “AI should not be black box, and it needs to be explainable. So once you make it explainable, then it’s easy for people to trust.”
The Complementary Approach
Instead of trying to automate HR decisions entirely, Included AI positions their AI as a complementary tool. “What we do is it’s more like a complementary feature as opposed to automatic decision making feature,” Raghu emphasized. This approach fundamentally changes how their technology is perceived and used.
Making AI Explainable: The Technical Approach
Consider their approach to predicting employee attrition risk. “We use an AI model, an ML model, to predict that. And we also explain all the reasons why we think employee X’s attrition [risk] is high,” Raghu shared. The system doesn’t just provide predictions – it explains its reasoning through multiple variables with clear weightings.
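Included AI hasn’t published its model internals, but the pattern Raghu describes – a prediction delivered alongside per-factor reasons and weightings – maps onto standard explainable-ML practice. Here is a minimal illustrative sketch, not their actual model: a hypothetical linear risk score whose per-feature contributions double as the explanation shown to a reviewer. All feature names and weights below are invented for illustration.

```python
# Hypothetical feature weights for an attrition-risk score (illustrative only,
# not Included AI's actual model or factors).
WEIGHTS = {
    "months_since_last_promotion": 0.04,
    "pay_vs_market_ratio": -2.0,      # below-market pay raises risk
    "manager_changes_last_year": 0.5,
    "engagement_survey_score": -0.6,  # higher engagement lowers risk
}
BIAS = 1.0

def predict_with_explanation(employee: dict) -> tuple[float, list[tuple[str, float]]]:
    """Return a risk score plus the per-feature contributions that produced it."""
    contributions = [(name, WEIGHTS[name] * employee[name]) for name in WEIGHTS]
    score = BIAS + sum(value for _, value in contributions)
    # Sort so the biggest drivers of risk come first -- this ordered list is
    # the "reasons" an HR reviewer would see next to the prediction.
    contributions.sort(key=lambda item: item[1], reverse=True)
    return score, contributions

risk, reasons = predict_with_explanation({
    "months_since_last_promotion": 30,
    "pay_vs_market_ratio": 0.9,
    "manager_changes_last_year": 2,
    "engagement_survey_score": 3.2,
})
print(f"risk score: {risk:.2f}")
for factor, contribution in reasons:
    print(f"  {factor}: {contribution:+.2f}")
```

The design point is that the explanation is not bolted on after the fact: the same weighted contributions that produce the score are surfaced as the reasons, which is what makes the prediction auditable by a human.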
The Human-in-the-Loop Philosophy
A core principle of their approach is maintaining human agency in decision-making. “We let the HR person or the manager look at it and then make a decision on whether they want to actually change something or not,” Raghu noted. This ensures that AI enhances rather than replaces human judgment.
From Months to Minutes
The value proposition isn’t about automation – it’s about efficiency. As Raghu explained, “Previously, if somebody had to do the same work, it would take like three months just to go through 300 different dimensions and 8000 combinations. Now we are able to do the whole thing… compressed the whole thing to like five minutes.”
Real-World Application: Calibration Analysis
Their approach shines in employee calibration processes. The AI analyzes thousands of combinations to identify potential bias hotspots, but crucially, it doesn’t make automatic corrections. Instead, it presents findings for human review. In one case, this led to meaningful changes: “We found 30 hotspots where they went and actually changed a calibration rating of eight employees.”
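The interview doesn’t spell out the detection logic, but a plausible reading of “hotspots” is segments of the workforce whose calibration ratings deviate sharply from the baseline. The sketch below is a hypothetical version of that scan under assumed column names and thresholds; note that its output is a review queue for HR, not an automatic correction.

```python
from itertools import combinations
import pandas as pd

# Toy calibration data; column names are assumptions, not Included AI's schema.
df = pd.DataFrame({
    "department": ["Eng", "Eng", "Eng", "Sales", "Sales", "Sales"],
    "gender":     ["F",   "M",   "M",   "F",     "F",     "M"],
    "rating":     [2.0,   4.0,   4.5,   3.0,     2.5,     4.0],
})

DIMENSIONS = ["department", "gender"]
MIN_GROUP_SIZE = 2
GAP_THRESHOLD = 0.75   # flag segments rated this far below the overall mean

def find_hotspots(data: pd.DataFrame) -> list[dict]:
    """Scan every combination of dimensions and flag low-rated segments."""
    overall = data["rating"].mean()
    hotspots = []
    for r in range(1, len(DIMENSIONS) + 1):
        for dims in combinations(DIMENSIONS, r):
            grouped = data.groupby(list(dims))["rating"].agg(["mean", "count"])
            for keys, row in grouped.iterrows():
                keys = keys if isinstance(keys, tuple) else (keys,)
                gap = overall - row["mean"]
                if row["count"] >= MIN_GROUP_SIZE and gap >= GAP_THRESHOLD:
                    hotspots.append({
                        "segment": dict(zip(dims, keys)),
                        "mean_rating": row["mean"],
                        "gap": gap,
                    })
    return hotspots

# Findings are presented for human review; nothing is corrected automatically.
for spot in find_hotspots(df):
    print(spot)
```

With many more dimensions, the same nested loop is what generates the thousands of combinations Raghu refers to; the human-in-the-loop step is simply that the resulting hotspot list goes to a person rather than back into the ratings.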
The Audit Role
Included AI has even positioned their technology as an audit tool for other AI systems. “They use applications that have AI to make automatic decision process, like reviewing of candidates,” Raghu explained. Their system helps identify when other AIs might be making biased decisions, such as when “you’re only reviewing men for this role and your AI is not doing the right thing there.”
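Raghu’s example – “you’re only reviewing men for this role” – is essentially a disparity check run over the pipeline another AI tool has produced. A hypothetical sketch of that audit step follows; the field names and the 0.8 cutoff (the common “four-fifths rule”) are illustrative assumptions, not Included AI’s published methodology.

```python
# Hypothetical audit: compare how often each group's applicants get reviewed.
applicants = [
    {"name": "A", "gender": "M", "reviewed": True},
    {"name": "B", "gender": "M", "reviewed": True},
    {"name": "C", "gender": "F", "reviewed": False},
    {"name": "D", "gender": "F", "reviewed": True},
    {"name": "E", "gender": "F", "reviewed": False},
]

def review_rates(rows):
    """Fraction of each group's applicants that were actually reviewed."""
    rates = {}
    for group in {r["gender"] for r in rows}:
        members = [r for r in rows if r["gender"] == group]
        rates[group] = sum(r["reviewed"] for r in members) / len(members)
    return rates

rates = review_rates(applicants)
best = max(rates.values())
for group, rate in rates.items():
    # Flag groups reviewed at well below the best-reviewed group's rate,
    # and list the specific unreviewed applicants so a human can act on it.
    if best > 0 and rate / best < 0.8:
        overlooked = [r["name"] for r in applicants
                      if r["gender"] == group and not r["reviewed"]]
        print(f"Flag: '{group}' applicants reviewed at {rate:.0%} vs {best:.0%}; "
              f"unreviewed: {overlooked}")
```

The output mirrors the actionable framing in the next section: not just a bias flag, but the specific overlooked applicants to go review.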
Building Trust Through Transparency
The company’s approach to AI transparency extends beyond technical explanations. They actively demonstrate the impact of their insights, showing how their recommendations lead to concrete improvements in workplace equity. For instance, when they identify overlooked candidates, they don’t just flag the issue – they provide actionable solutions: “Here are the women applicants that you [haven’t] reviewed. Go review them.”
The Future Vision
This philosophy of explainable, complementary AI aligns with their larger vision. As Raghu describes it, they aim to provide “equitable experiences for every employee in an organization so that they can realize their full potential.” By making AI transparent and trustworthy, they’re building technology that enhances rather than replaces human decision-making in HR.
For other founders building AI products, Included AI’s approach offers valuable lessons about balancing automation with transparency. In sensitive domains like HR, trust isn’t just a nice-to-have – it’s essential for adoption and impact. Sometimes the most powerful AI isn’t the one that makes decisions autonomously, but the one that helps humans make better decisions themselves.