
Discovery vs Invention: What Ask-AI’s Founder Learned Watching the AI Revolution From Inside Academia

Ask-AI’s founder watched scientists be surprised by their own AI models during his PhD. Why treating AI as a discovery rather than an invention changes everything about go-to-market timing.

Written By: Brett



The scientists who built BERT were surprised by what it could do. The researchers training GPT-1 didn’t predict ChatGPT. Even the people creating these models were discovering capabilities they hadn’t designed. Alon Talmor watched this happen in real-time during his PhD between 2016 and 2020—and it fundamentally changed how he thinks about timing product launches in AI.

In a recent episode of Category Visionaries, Alon Talmor, CEO and Founder of Ask-AI, explained why treating generative AI as a discovery rather than an invention changes everything about go-to-market strategy.

The Scientists Were Surprised Too

Most founders assume the researchers building AI knew what they were creating. Alon’s PhD gave him a different perspective. “Generative AI is not an invention, it’s more of a discovery. Like the fact that we can produce these models and they know to do what they do. We didn’t anticipate that.”

During his PhD focused on reasoning for question answering, Alon worked alongside researchers as language models evolved. The pattern: researchers would train a model expecting certain capabilities, then discover it could do things they hadn’t explicitly programmed.

This matters for founders because you can’t build a product roadmap the way you would for predictable technology. You’re not implementing features you know will work. You’re positioning yourself to capture capabilities as they’re discovered.

Invention: Predictable. Discovery: Unpredictable.

The distinction shapes timing. When building on inventions, the path is clear. Internal combustion engines, microprocessors, earlier software—these had predictable improvement curves.

Discoveries work differently. Generative AI’s capabilities emerged from scaling transformers, but specific abilities—few-shot learning, chain-of-thought reasoning, instruction following—weren’t predictable from first principles.

Alon experienced this during his PhD. “We were astounded to see this whole revolution unfolding.”

The revolution wasn’t following a roadmap. It was unfolding as researchers discovered what was possible.

What This Means for Go-to-Market Timing

When Ask-AI launched in 2020, Alon couldn’t predict exactly when ChatGPT would make generative AI mainstream. But he understood something crucial about discovery-based technology: “We already realized that something big is going to happen in industry, not just in academia. I think a few early generative AI companies, like OpenAI itself, realized that, and we just didn’t realize how huge it’s going to be.”

The timing strategy for discoveries isn’t about predicting the exact breakthrough moment. It’s about positioning where breakthroughs will matter when they happen.

Three implications change how founders should think about launches:

Launch before the capabilities are fully proven. With inventions, you wait until the technology is ready. With discoveries, you position yourself where the capabilities are emerging, even if you don’t know exactly what’s coming. Ask-AI launched in 2020 building on language models that were primitive compared to GPT-4. But they were positioned to upgrade as capabilities were discovered.

Build platforms over point solutions. When you can’t predict exactly what capabilities will emerge, building narrow solutions is risky. The specialized tool you build today might be made obsolete by an unexpected capability discovered tomorrow. Ask-AI chose the platform path deliberately.

“We’re actually building an enterprise AI platform starting from customer support that in our vision, eventually would disrupt SaaS deeply. We feel that AI would pretty much make SaaS dead and consolidate many of the SaaS solutions, including the system of record.”

The platform strategy works for discoveries because it can absorb new capabilities as they emerge without requiring complete product pivots.

Expect the timeline to surprise you. Alon’s candid admission reveals how even insiders got timing wrong. “It wasn’t surprising that ChatGPT was such a strong model. What was surprising is how fast it came and how explosive the revolution is. I think we thought it’s going to take longer to get there.”

If the researchers and early companies building in the space miscalculated timing by years, founders can’t expect to nail it precisely. The strategy becomes: be positioned early enough that you’re ready whenever the breakthrough happens.

The Risk of Invention-Based Planning

Most AI founders are planning as if they’re building inventions. They create detailed roadmaps showing exactly what features will ship when. They predict with confidence how capabilities will improve. They time their Series A around specific milestones they’re certain they’ll hit.

This approach works for invention-based technology. It fails for discovery-based breakthroughs.

The AI founders who’ve struggled are often those who built narrow solutions assuming capabilities would stay relatively static. They optimized for GPT-3 level capabilities, then found themselves obsolete when GPT-4 emerged. They built specialized tools for specific use cases, then watched as foundation models generalized to handle those cases directly.

Alon’s observation about scientists being surprised applies to founders too. If the people creating the technology couldn’t predict what it would do, founders building on top of it certainly can’t.

How to Position for Discoveries

The framework Alon’s experience suggests:

Embed yourself where discoveries happen. His PhD wasn’t just credentials—it was positioning. When Ask-AI launched, he understood the trajectory even if he couldn’t predict exact timing.

Build for flexibility. The product architecture needs to absorb unpredictable capabilities as they emerge. Ask-AI’s platform approach allows them to integrate new AI capabilities without rebuilding.

Focus on problems that transcend specific capabilities. Customer support, enterprise intelligence—these problems matter regardless of exactly how AI solves them.

Don’t optimize too early. When capabilities are being discovered, optimizing for current performance means building for yesterday’s capabilities.

The Ask-AI Vision Through Discovery Lens

Alon’s vision reveals discovery-based thinking. “One AI assistant to rule them all. You would just have your assistant to do mostly anything.”

This isn’t a feature roadmap—it’s positioning for a future where AI capabilities continue being discovered. “You think about opening a lot of tabs and doing things in different tabs, but you need to ask yourself, how do I get to this point? Why do you have so many tabs?”

The question frames the problem at a level that transcends specific AI capabilities. Whether it’s GPT-5 or some other architecture, the core problem—too much app switching—remains.

For founders building on breakthrough technologies: if even the scientists are surprised by what their models can do, you can’t plan like you know what’s coming. Position where discoveries will matter. Build platforms that can absorb new capabilities. Focus on problems that transcend specific implementations.

Most importantly, recognize when you’re riding a discovery rather than building an invention. The playbooks are completely different.