From Chatbot to AI Employee: How Homeward Repositioned Enterprise AI
Two years of engineering work. One AI assistant. And it couldn’t even book a conference room.
In a recent episode of Category Visionaries, Amar Kendale, President and Co-Founder of Homeward, shared the frustrating experience at Okta that would fundamentally reshape how he thought about enterprise AI. That failure wasn’t about inadequate engineering effort or insufficient resources—it revealed a deeper problem with how the entire industry was building AI products. The lesson: enterprises don’t need better question-answering systems. They need AI that actually completes work.
The Conference Room Problem
Amar’s revelation came from a deceptively simple failure. “I spent two years at Okta building an AI assistant, and it couldn’t even book a conference room,” he recalls. This wasn’t a minor feature gap—it was a symptom of a fundamental architectural limitation.
Traditional AI assistants are designed around a question-answer paradigm. Ask a question, get an answer. Look up information, receive a summary. This works fine for knowledge retrieval but breaks down the moment you need the AI to do something. Booking a conference room requires checking availability across multiple systems, understanding scheduling conflicts, sending calendar invites, and confirming with participants. It requires action, not just information.
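To make that gap concrete, here is a deliberately simplified sketch (every name and system in it is hypothetical, not any product's actual API) of what "book a conference room" implies. An assistant can produce the `available` list; only something with write access can execute the steps after it:

```python
# Hypothetical sketch: booking a room is a sequence of reads and writes
# across systems, not a single question-and-answer exchange.
from dataclasses import dataclass


@dataclass
class Room:
    name: str
    free_slots: set  # e.g. {"10:00", "11:00"}


def book_room(rooms, slot, attendees):
    # 1. Check availability across systems (an assistant can do this part)
    available = [r for r in rooms if slot in r.free_slots]
    if not available:
        return "no room free; propose alternatives"  # conflict handling
    room = available[0]
    # 2. Take action: reserve the slot (this is where Q&A systems stop)
    room.free_slots.remove(slot)
    # 3. Send calendar invites and confirm with participants
    invites = [f"invite {a} to {room.name} at {slot}" for a in attendees]
    return "; ".join(invites)


rooms = [Room("4A", {"10:00", "11:00"}), Room("4B", {"11:00"})]
print(book_room(rooms, "10:00", ["dana", "lee"]))
# invite dana to 4A at 10:00; invite lee to 4A at 10:00
```

The point of the toy example is that steps 2 and 3 require credentials and side effects; a read-only interface can never reach them.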
The limitation wasn’t unique to Okta’s implementation. Across the enterprise AI landscape, companies were building sophisticated natural language interfaces that could parse questions brilliantly but couldn’t execute on the answers. They could tell you which conference rooms were available but couldn’t actually book one. They could identify the right support ticket response but couldn’t send it. They could analyze sales data but couldn’t update the CRM.
Reframing the Product Category
This experience led Amar to a different framing entirely. “We actually think of Homeward as an AI employee. It’s not a chatbot that you’re asking questions to, but it’s actually an employee that’s getting work done,” he explains.
The shift from “assistant” to “employee” isn’t semantic—it’s architectural. Assistants help you do your work. Employees do the work. This distinction drives completely different product decisions about what to build, how to build it, and what success looks like.
An AI assistant’s job is to make information accessible. An AI employee’s job is to complete workflows end-to-end. An assistant succeeds when it answers questions accurately. An employee succeeds when it delivers business outcomes without human intervention. The former is a nice-to-have productivity enhancement. The latter is a fundamental change in how work gets done.
This repositioning also changed customer expectations and evaluation criteria. When you’re buying an assistant, you evaluate it on response quality and ease of use. When you’re hiring an employee—even a digital one—you evaluate it on reliability, autonomy, and business impact. The bar is dramatically higher, but so is the value.
Building for Work Completion, Not Question Answering
The product implications of this shift are profound. To function as an employee rather than an assistant, Homeward needed capabilities that most enterprise AI completely ignores.
The first requirement was comprehensive system access. “We have built deep integration with authentication systems. So with Okta and Ping and Azure AD, we’re able to essentially authenticate Homeward as if Homeward is a real employee,” Amar notes. This isn’t about reading data from systems—it’s about taking action within them.
A human employee has credentials to log into every system they need to do their job. They can read from the CRM, write to the ticketing system, update spreadsheets, send emails, and interact with internal tools. Homeward replicates this access pattern at the technical level. By authenticating as a real employee, Homeward can perform any action a human could perform within those systems.
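All three identity providers Amar names (Okta, Ping, Azure AD) support the standard OAuth 2.0 client-credentials grant, which is one common way a non-human service authenticates to act inside other systems. The sketch below only builds the token request; the endpoint path, client names, and scopes are illustrative assumptions, not Homeward's actual integration:

```python
# Hypothetical sketch of machine-to-machine authentication via the
# OAuth 2.0 client-credentials grant. Endpoint path and scope names
# are illustrative; they vary by identity provider and configuration.
import urllib.parse


def build_token_request(issuer, client_id, client_secret, scopes):
    """Return the token endpoint URL and form-encoded body for a
    client-credentials grant against an OAuth 2.0 identity provider."""
    url = f"{issuer}/oauth2/v1/token"  # Okta-style path; differs per IdP
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": " ".join(scopes),
    })
    return url, body


url, body = build_token_request(
    "https://example.okta.com", "agent-client-id", "agent-secret",
    ["calendar.write", "tickets.write"],
)
print(url)  # https://example.okta.com/oauth2/v1/token
```

The access token returned by such a request is what would let an agent write to a calendar or ticketing system with the same scoped permissions a human account would carry.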
The second requirement was accuracy that enables full automation. “We spent a lot of time improving accuracy. Our accuracy is actually so good right now that we have several customers where the entire workflow is automated,” Amar shares. When AI is answering questions, 85% accuracy might be acceptable—the human can judge the response. When AI is completing work autonomously, anything less than near-perfect accuracy breaks trust.
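The gap between "acceptable for Q&A" and "acceptable for autonomy" compounds quickly. As a back-of-the-envelope illustration (the numbers are hypothetical, not Homeward's), if each step of a multi-step workflow succeeds independently with probability p, a ten-step workflow completes end-to-end with probability p raised to the tenth power:

```python
def end_to_end_success(per_step_accuracy, steps):
    """Probability an entire workflow completes without error,
    assuming each step succeeds independently."""
    return per_step_accuracy ** steps


# At 85% per-step accuracy, a 10-step workflow rarely finishes cleanly;
# at 99%, it usually does.
print(round(end_to_end_success(0.85, 10), 3))  # 0.197
print(round(end_to_end_success(0.99, 10), 3))  # 0.904
```

Under these assumptions, 85% per-step accuracy means fewer than one in five ten-step workflows finish without a human stepping in, which is why "good enough for answering questions" is nowhere near good enough for autonomous completion.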
The accuracy breakthrough came from data access. “Homeward has access to a lot more data about you and your company compared to any other solution, and therefore Homeward can actually generate insights or complete work with much higher confidence,” Amar explains. By seeing everything a human employee would see, Homeward can make decisions with appropriate context rather than working from incomplete information.
The Deployment Model Shift
Positioning AI as employees rather than assistants also changes the deployment conversation. “We probably on average can deploy an end to end workflow within a week,” Amar notes. This speed is possible because Homeward isn’t being configured to answer specific questions—it’s being deployed to complete specific jobs.
The workflow-centric approach means customers think about deployment differently. Instead of asking “what questions should our AI answer?” they ask “what work should our AI employee handle?” This reframes the conversation from feature evaluation to job definition.
The proof of concept becomes dramatically different as well. With an AI assistant, you demonstrate it by asking sample questions and evaluating responses. With an AI employee, you demonstrate it by deploying it into production and measuring business outcomes. Homeward’s customers don’t pilot the technology—they deploy AI employees into actual workflows and measure whether the work gets done.
The Scale of Employee-Level AI
The employee framing also changes how customers think about scale. “We have about 400 AI agents deployed right now across all of our customers,” Amar reveals. These aren’t 400 question-answering bots—they’re 400 AI employees handling distinct workflows.
Each deployment represents a job that previously required human attention. Customer support tickets that now get resolved end-to-end. Sales leads that get scored and routed automatically. Data extraction tasks that happen continuously without human initiation. Compliance checks that run on every transaction. These are complete jobs, not assisted tasks.
The expansion pattern follows employment logic rather than software adoption patterns. “A lot of times customers will start with one workflow, but then they will expand,” Amar explains. When an AI employee proves reliable in one role, companies naturally want to hire more AI employees for other roles. The question becomes “what other jobs can we automate?” rather than “what other features can we enable?”
The Innovation That Follows
Perhaps the most powerful validation of the employee framing is what customers do with it. “A lot of our customers are actually deploying Homeward in ways that we haven’t even built anything for,” Amar shares.
When you give customers an AI assistant with pre-defined commands, they use those commands. When you give them an AI employee with general capabilities, they invent new jobs. Customers aren’t just using Homeward for the workflows the founding team anticipated—they’re creating entirely novel applications by thinking about what work needs doing rather than what questions need answering.
This customer-driven innovation validates the category positioning. If Homeward were just a better chatbot, customers would use it like a chatbot. Because it’s positioned and built as an AI employee, customers use it to replace actual work—including work the founding team never considered automating.
The Market Education Challenge
Repositioning a category isn’t without challenges. The market is conditioned to think about enterprise AI as assistants, copilots, and chatbots. Shifting that mental model requires more than just messaging—it requires proof.
Homeward’s approach has been to lead with outcomes rather than capabilities. Instead of explaining how their technology differs from chatbots, they demonstrate AI employees completing real work in production. The proof point isn’t feature comparisons—it’s customers running fully automated workflows that previously required human labor.
The pricing model reinforces the positioning. “We actually don’t like to charge by the seat because Homeward is an AI employee. The way to think about Homeward is you’re essentially hiring a bunch of employees,” Amar explains. By pricing based on work completed rather than seats occupied, Homeward reinforces that customers are hiring digital employees, not licensing software.
The Competitive Moat
The employee positioning creates a natural moat. Building a better chatbot is relatively straightforward—improve the language model, add more training data, optimize the interface. Building an AI employee requires fundamentally different architecture: comprehensive system access, high-accuracy decision-making, autonomous workflow execution, and reliable task completion.
Competitors building in the assistant paradigm would need to completely rebuild their architecture to compete in the employee paradigm. The authentication infrastructure alone represents years of engineering investment. The accuracy requirements demand different data strategies. The autonomous execution model requires different reliability guarantees.
The lesson from Amar’s two years at Okta isn’t that AI assistants are poorly built—it’s that they’re solving the wrong problem. Enterprises don’t need help finding information. They need help completing work. The companies that recognize this distinction and rebuild their architecture accordingly won’t just have better products—they’ll be competing in a different category entirely.