Listen Here


Actionable Takeaways

Recognize when distribution narratives mask structural incompatibility:

RapidAPI had 10 million developers and teams at 75% of Fortune 500 paying for the platform—massive distribution that theoretically fed enterprise sales. The problem: Iddo could always find anecdotes where POC teams had used RapidAPI, creating a compelling story about grassroots adoption. The critical question he should have asked earlier: "Is self-service really the driver for why we're winning deals, or is it a nice-to-have contributor?" When two businesses have fundamentally different product roadmaps, cultures, and buying journeys, distribution overlap doesn't create a sustainable single company. Stop asking if synergies exist—ask if they're causal.

Qualify on whether improvements cross phase-transition thresholds:

Datawizz disqualifies prospects who acknowledge value but lack acute pain. The diagnostic questions: "If we improved model accuracy by 20%, how impactful is that?" and "If we cut your costs 10x, what does that mean?" Companies already automating human labor often respond that inference costs are rounding errors compared to savings. The ideal customers hit differently: "We need accuracy at X% to fully automate this process and remove humans from the loop. Until then, it's just AI-assisted. Getting over that line is a step-function change in how we deploy this agent." Qualify on whether your improvement crosses a threshold that changes what's possible, not just what's better.

Use discovery to map market structure, not just validate hypotheses:

Iddo validated that the most mature companies run specialized, fine-tuned models in production. The surprise: "The chasm between them and everybody else was a lot wider than I thought." This insight reshaped their entire strategy—the tooling gap, approaches to model development, and timeline to maturity differed dramatically across segments. Most founders use discovery to confirm their assumptions. Better founders use it to understand where different cohorts sit on the maturity curve, what bridges or blocks their progression, and which segments can buy versus which need multi-year evangelism.

Target spend thresholds that indicate real commitment:

Datawizz focuses on companies spending "at a minimum five to six figures a month on AI and specifically on LLM inference, using the APIs directly"—meaning they're building on top of OpenAI/Anthropic/etc., not just using ChatGPT. This filters for companies with skin in the game. Below that threshold, AI is an experiment. Above it, unit economics and quality bars matter operationally. For infrastructure plays, find the spend level that indicates your problem is a daily operational reality, not a future consideration.

Structure discovery to extract insight, not close deals: Iddo's framework:

"If I could run [a call where] 29 of 30 minutes could be us just asking questions and learning, that would be the perfect call in my mind." He compared it to "the dentist with the probe trying to touch everything and see where it hurts." The most valuable calls weren't those that converted to POCs—they came from people who approached the problem differently or had conflicting considerations. In hot markets with abundant budgets, founders easily collect false positives by selling when they should be learning. The discipline: exhaust your question list before explaining what you build. If they don't eventually ask "What do you do?" you're not surfacing real pain.

Avoid the false-positive trap in well-funded categories: Iddo identified a specific risk in AI:

"You can very easily run these calls, you think you're doing discovery, really you're doing sales, you end up getting a bunch of POCs and maybe some paying customers. So you get really good initial signs but you've never done any actual discovery. You have all the wrong indications—you're getting a lot of false positive feedback while building the completely wrong thing." When capital is abundant and your space is hot, early revenue can mask product-market misalignment. Good initial signs aren't validation if you skipped the work to understand why people bought.

Conversation Highlights

 

Why Your Distribution Story Might Be Capping Your Growth: Lessons from Building RapidAPI and Datawizz

Most founders can construct compelling narratives around their flywheels. Developers adopt your tool, evangelize it internally, then IT purchases the enterprise version. Product-led growth generates qualified pipeline. Distribution creates gravitational pull.

Iddo Gino built exactly that story at RapidAPI. The platform served 10 million developers with at least one team at 75% of Fortune 500 companies using and paying for it. The thesis was elegant: developers discover APIs on the public marketplace, experience the platform’s value, then champion RapidAPI Enterprise for managing their company’s internal and partner-facing API programs.

In a recent episode of BUILDERS, Iddo Gino, now Founder and CEO of Datawizz, revealed why that seemingly validated narrative “ended up putting a cap on how big we can scale the company.”

 

When Synergies Exist But Don’t Drive Outcomes

RapidAPI operated two businesses under one roof. The developer marketplace facilitated API discovery and transactions. The enterprise product provided custom API hubs for companies managing complex API programs internally.

“In theory there are a lot of synergies between those two businesses,” Iddo explained. They explicitly modeled the strategy after GitHub’s progression from GitHub.com to GitHub Enterprise.

The model exists. Companies have executed it successfully. “So the model exists, it’s well proven out, it’s not something that we invented,” Iddo noted. But proven models don’t guarantee applicability.

“I think we realized a little too late in the Rapid journey that that was not the case, that these were actually two very different businesses and that the buying journey didn’t actually go from one to the other.”

Here’s the insidious part: Iddo could always find supporting evidence. “You can always kind of, if you want to tell that story, you can always see like, oh, here’s this enterprise deal that we landed. And a bunch of the teams that did the POC also were signed up to Rapid ahead of time. So you can kind of connect the dots and see the synergies.”

The data existed. The anecdotes validated the thesis. But he wasn’t asking the right question: “Is self-service really the driver for why we’re winning deals, or is it like a nice to have contributor to why we’re winning deals?”

That distinction—primary driver versus secondary signal—determines everything. “You kind of end up with two different businesses with very different kind of product need, product roadmap, company cultures that both live under that roof. And turns out that’s just something really hard to scale.”

 

The Pattern Hidden in AI Pitch Decks

After exiting RapidAPI, Iddo attempted retirement. Boredom won within weeks. He tried angel investing but recognized he was becoming “exactly the kind of investor that I hated as a founder who is like the, you know, still has the itch, like, is trying to backseat drive the company.”

But the deal flow exposed a systematic problem. “Almost every pitch that I saw, especially on the agent side, but even for a lot of just non agentic AI centric products had kind of two baked in assumptions to it.”

First assumption: “Hey, the thing doesn’t really work today, right? So like we have the agent, it kind of works maybe for a demo. It doesn’t really work in real life. We understand it, but we have a baked in assumption that models are going to get an order or two of magnitude better than where they are today.”

Second assumption: “We also look at the unit economics and the unit economics are completely effed up, right? Like the thing does not financially work today, but we also think inference is going to get two to three orders of magnitude cheaper and then the unit economics work out.”

Iddo’s realization: “Wait, so everybody’s basically banking for agents to work and for AI to work at scale. We’re all banking on models getting two to three orders of magnitude better, or maybe one to two orders of magnitude better and one to two orders of magnitude cheaper to run. And I just don’t see it. There is no data to support that both of these can happen simultaneously anytime in the near future.”

The math doesn’t work. “We’re probably going to get some big improvements in accuracy. And we already have, right? Gemini 3 is better than 2.5 and GPT 5 is better than 4. So we are getting improvements in accuracy, we’re maybe getting some improvement on cost, but you’re not going to get both of them at the same time.”

That insight became Datawizz’s foundation: infrastructure for continuous reinforcement learning that improves specialized model performance without betting on simultaneous frontier model breakthroughs and cost collapse.

 

Discovery That Maps Market Structure, Not Just Validation

Datawizz’s early GTM started with aggressive outbound discovery. “I think it’s just us, and really me initially, kind of blasting on LinkedIn,” Iddo said. “I got blocked on LinkedIn like three times, but just messaging as many kind of founders, builders at different sizes of companies who are building interesting things at scale with AI.”

The approach was systematic: talk to companies running agents in serious production volume, then work backward through companies at different maturity stages—three months out, six months out, nine months out from that deployment level.

The goal wasn’t validation. It was understanding market structure. “Both on the people who are like at the largest end of the spectrum, farthest along, most mature, actually running agents with serious production volume, understanding how they’re doing it and then kind of working backwards.”

Iddo validated his core thesis: “At scale, specialized tuned, mission specific models are probably the way to go. And we actually did see that when you talk to the largest companies, like the kind of very end of the spectrum, they’re all actually running SLMs and fine tuned models in production.”

But the surprise shaped everything: “The chasm between them and kind of everybody else was a lot wider than I thought. Obviously I know that there’s going to be the people like the few companies who are like running things in production, but there was a huge gap between them and then like everybody else who is building in the tooling that’s available and what they’ve homegrown and how they approach model development.”

That gap—in infrastructure, approaches, and organizational maturity—defined the ICP strategy and timeline expectations.

 

Qualifying on Threshold Crossings, Not Incremental Value

Datawizz qualification focuses on whether improvements would cross phase-transition thresholds rather than deliver incremental value.

The diagnostic framework: “We’d actually ask this in qualifying calls, like, hey, if we could improve your model accuracy by 20%, how impactful is that to you?”

Many prospects with real AI spend respond: “Well, for what we’re using it today, it’s good enough.”

On cost: “If I could cut down your cost by 10x, how impactful would that be to you?” Common response: “We’re automating human labor. Like we’re saving so much already. The thing is a rounding error basically already compared to how much money we’re saving.”

These prospects acknowledge value but lack acute pain. They disqualify.

The ideal profile differs fundamentally. “There were other conversations where the unit economic concerns were huge or the quality issue was the difference between them being actually able to deploy an agent or not.”

Iddo elaborated on the threshold dynamic: “Any number in terms of accuracy, like we need it to be here for us to actually be able to automate the process. Until then it can just assist the humans. But once it’s here, we can actually automate the process and either accelerate delivery or reduce workload for our employees. So anything short of that is kind of the same. Like we don’t care about getting it any better in that range. We want to get it over that line and that is huge to us. Like that is a step-function change in how we can deploy this agent.”

Datawizz also filters on spend: “Looking for companies who are probably spending, you know, at a minimum five to six figures a month on AI and specifically on LLM inference, obviously using the APIs directly. So actually building their own thing on top of the LLM APIs.”

Below that threshold, AI remains experimental. Above it, architectural decisions and unit economics become operational imperatives worth investing to solve.

 

The 29-Minute Question Marathon

Iddo’s discovery calls follow a strict structure designed to extract insight rather than close deals. “It will basically be us asking questions until the person on the other side is like, okay, like, but what do you do? Like, you really got to tell me what you guys are working on.”

The target: “If I could run [a call where], of the 30 minutes, 29 could be us just asking questions and learning, that would be like the perfect call in my mind.”

He compared the approach to dental work: “We’re almost like the dentist with the probe who’s like trying to touch everything and see where it hurts.”

Importantly, the most valuable conversations weren’t those generating pipeline. “It’s actually the ones where they think about the problem differently or they have a different kind of set of considerations about what’s good and bad. Like, those have been the most valuable ones.”

The discipline matters especially in well-funded categories. “You can very easily, like, you know, you run these calls. Well, you think you’re doing discovery, really you’re doing sales, you end up getting a bunch of like POCs and maybe some paying customers. So like you get really good initial signs but you’ve never done any actual discovery. So like you’re kind of building the wrong, you have all the wrong indications, right? Like you’re getting a lot of false positive feedback while building the completely wrong thing.”

In hot markets with abundant budgets, early revenue can obscure product-market misalignment. Good initial signs without rigorous discovery create false confidence.

 

Target Believers First, Evangelize Later

The biggest strategic shift from RapidAPI to Datawizz centers on category creation timing. “I think for the [first] 10 million in sales, even if you’re creating a category, if you’re convincing the customer on why they need to even invest in this project, that’s a very tough hill to climb. Like people do it, but that’s a very tough hill to climb.”

Datawizz deliberately targets customers already bought into the category approach. “For us, our approach has very much been that the first set of customers are people who already are kind of bought into the continuous learning RL kind of flywheel and the idea that they should even deploy their own specialized models in production, so they already kind of buy into that.”

The pitch becomes execution-focused: “What we sell them on is just why they should use a platform like Datawizz to set that up versus building everything themselves in house.”

This isn’t permanent positioning—it’s sequencing strategy. Save category evangelism for when you have resources, proof points, and the infrastructure to support longer sales cycles. Start with customers ready to buy today based on existing beliefs.

 

The Continuous Learning Thesis

Understanding why traditional fine-tuning fails in the LLM era reveals Datawizz’s core insight. Companies approached generative AI training like traditional ML: “Let’s spend a bunch of time collecting a bunch of well labeled, well organized data. So we build the big data set, then we go and run a big fine tuning and training process with a bunch of offline evaluations as part of that. Then we have a model.”

Timeline: “Between the data gathering and the training and the evals and the retraining, now we’re like two to three months later, if we’re pretty quick, there’s going to be a model that we are happy with and then we deploy that into production and we forget about it.”

The problem: “You do that. So you put two, three months into the process. You deploy a model, but then two, three months later your data changes, your prompts change, your agents change, but also the baseline models that you compare against change. So you built it in the time of GPT 3.5. Then 4.0 rolls around and completely demolishes your fine tuned models that you spent so much time and energy training and you just end up throwing that model to the trash and using 4.0.”

Iddo discovered this pattern created widespread skepticism: “There was actually a lot of hesitancy around fine tuning in SLMs. And it’s specifically a lot of hesitancy from people and companies that already invested a lot in fine tuning.”

The solution requires continuous learning infrastructure: “There has to be a flywheel where the models continuously learn and evolve because of how fast the space is moving. Because if your models are kind of petrified and frozen to a point in time, they just age very poorly. It’s like a very fast depreciating asset, and usually it depreciates so fast that you never make back the time and effort and energy that you put into training it.”

Datawizz provides the orchestration layer across the entire loop: log collection, observability, online evaluation, human validation queues, training, offline evaluation, and deployment. “Instead of you having to build an MLOps team that stitches together all these tools into a continuous learning loop, Datawizz kind of gives you that loop, but lets you choose all the different tools that are going to make up that loop.”
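To make the shape of that loop concrete, here is a minimal illustrative sketch in Python. It is not Datawizz’s actual API; every function, class, and number below is a hypothetical stand-in. What it shows is the control flow Iddo describes: production logs feed online evaluation, validated examples drive a retraining step, and nothing ships unless offline evaluation says the new model beats the one already deployed.

```python
"""Illustrative sketch of the continuous learning loop described above.

All names here are hypothetical stand-ins, not Datawizz's real API; only
the control flow from log collection through deployment is the point.
"""
import random
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Example:
    prompt: str
    output: str
    score: Optional[float] = None  # filled in by online eval / human review


@dataclass
class Model:
    version: int
    accuracy: float  # as measured by the offline eval suite


def collect_production_logs(n: int = 200) -> List[Example]:
    # Stand-in for pulling real agent traces out of observability tooling.
    return [Example(prompt=f"task-{i}", output=f"answer-{i}") for i in range(n)]


def online_evaluate(examples: List[Example]) -> List[Example]:
    # Stand-in for automatic scoring (judges, heuristics, user feedback).
    for ex in examples:
        ex.score = random.random()
    return examples


def human_validation_queue(examples: List[Example]) -> List[Example]:
    # Keep only examples that cleared review; low scorers would go to humans.
    return [ex for ex in examples if ex.score is not None and ex.score >= 0.5]


def train_specialized_model(current: Model, data: List[Example]) -> Model:
    # Stand-in for a fine-tuning or RL step on the validated examples.
    improvement = min(0.02, len(data) / 10_000)
    return Model(current.version + 1, min(1.0, current.accuracy + improvement))


def offline_evaluate(model: Model) -> float:
    # Stand-in for running a held-out evaluation suite before deploying.
    return model.accuracy


def run_learning_cycle(deployed: Model) -> Model:
    logs = collect_production_logs()
    scored = online_evaluate(logs)
    validated = human_validation_queue(scored)
    candidate = train_specialized_model(deployed, validated)
    # Ship the retrained model only if it beats what is already in production.
    if offline_evaluate(candidate) > offline_evaluate(deployed):
        print(f"deploying v{candidate.version} at {candidate.accuracy:.3f}")
        return candidate
    return deployed


if __name__ == "__main__":
    model = Model(version=1, accuracy=0.80)
    for _ in range(5):  # rerun the cycle so the model never freezes in time
        model = run_learning_cycle(model)
```

Each stage function is deliberately swappable, which mirrors the “choose all the different tools that are going to make up that loop” framing above.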

 

The 2030 Prediction

Iddo’s thesis for the industry: “If you look at it in 2030, you probably end up seeing that, you know, 50, 60% of AI tokens end up flowing towards specialized fine tuned models. That’s probably single digit today. I think that’s going to change rapidly. That has nothing to do with us. That’s just like at this point my belief is that’s just a force of nature that will happen.”

For Datawizz: “I think if we play our game right, those tokens are all flowing through models trained and deployed on Datawizz. But that’s kind of ours to lose at this point.”

The deeper lesson from building two companies: distribution narratives can be both true and misleading. Synergies can exist without being causal. And the discipline to distinguish between “does this contribute to wins” and “does this drive wins” often matters more than the ability to construct compelling stories from available data.

 
