Moats for SaaS in the GenAI Era

I doubt this is the first article you’ve read about the existential risk that Generative AI (GenAI) poses to traditional Software-as-a-Service (SaaS) businesses. To recap, the barriers to launching new SaaS products have been falling for years: lower costs (cloud computing, distributed workforces), improved distribution (mobile and social platforms), enabling technologies (better browser standards, development frameworks), and growing familiarity amongst buyers.

With these costs falling, access to software engineering talent (especially in the US) remained one of the few constraints on SaaS business growth. Sufficient and timely access to this talent capped revenue potential and also helped keep competitors out of incumbents’ markets. What GenAI portends, though, is a significant democratization of this resource, which will lower both costs and competitive barriers.

So are SaaS businesses doomed to be out-competed by AI upstarts? They’re certainly in trouble, but there are still moats that may be going overlooked:

  1. Distribution: Winning over large, paying customers is not something that happens overnight. SaaS companies serving enterprises have had to learn to navigate procurement processes and legal and compliance scrutiny, and to maintain and grow executive sponsorship, to secure meaningful growth in contract value. Buyers are also wary of AI startups misappropriating their data and are requiring additional diligence during procurement. So, ironically, buyers’ bureaucracies will probably buy SaaS incumbents some time.
  2. Non-public data: Most GenAI foundation models have been trained on data published on the internet. Firms that have access to non-public data (by virtue of being a gatekeeper or a large service provider) are well positioned to build compelling AI offerings that are more specific, and less generic, than what an upstart can offer. Consider Epic’s role with health records, LiveAgent’s position with customer service, or Equifax’s with credit reports.
  3. Evaluations and model selection: There is a growing consensus that foundation models will be the new “chip layer” of the AI stack; most firms are unlikely to build their own foundation models, but will instead select one and then adjust how it reasons over their data to deliver outputs they deem appropriate. Because these models are non-deterministic and hard to inspect, the process and criteria for (a) evaluating the choice of a particular foundation model, and (b) “tweaking” a selected model for use in an application, constitute unique and valuable knowledge (often called “evals”). Robust evaluation rubrics will also help companies improve their stacks and, eventually, route tasks to different foundation models for better responses (a minimal sketch of this idea appears after this list). And while the cost of model inference continues to fall for everyone, companies with stronger evals may be able to cut their costs more quickly by shifting work onto smaller, cheaper-to-run models.
  4. Model prompts: Coaxing a foundation model into an optimal answer to a problem requires an optimal prompt. Optimality must weigh response length, latency, format, and much more. Devising the right input prompt for a problem is not trivial, and it too offers a competitive barrier.
  5. Design: Too few companies are thinking beyond the text box for their AI interfaces. GenAI offers the ability to build new, concise, and dynamic interfaces at run-time. Several SaaS companies have proven out the importance of novel design in driving product and category adoption. OpenAI’s voice interface is a promising start in this direction.
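
To make the notion of “evals” and cost-aware routing from point 3 concrete, here is a minimal sketch in Python. Everything in it is illustrative: the model names, the pass threshold, and the `call_model` stub are placeholders rather than any real vendor’s API, and a production harness would plug in actual model clients and a far richer rubric.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    check: Callable[[str], bool]  # rubric: does the response meet our criteria?

# Illustrative stub: in practice this would wrap a real foundation-model client.
def call_model(model: str, prompt: str) -> str:
    raise NotImplementedError("replace with a real model call")

# Hypothetical models, ordered from cheapest to most expensive to run.
MODELS_BY_COST = ["small-model", "mid-model", "large-model"]

def score(model: str, cases: list[EvalCase]) -> float:
    """Fraction of eval cases a model passes."""
    passed = sum(case.check(call_model(model, case.prompt)) for case in cases)
    return passed / len(cases)

def cheapest_acceptable_model(cases: list[EvalCase], threshold: float = 0.9) -> str:
    """Route work to the cheapest model whose eval score clears the bar."""
    for model in MODELS_BY_COST:
        if score(model, cases) >= threshold:
            return model
    return MODELS_BY_COST[-1]  # fall back to the most capable model

# Example rubric item: the answer must mention a refund policy.
cases = [EvalCase(prompt="A customer asks about returns after 30 days.",
                  check=lambda r: "refund" in r.lower())]
```

The design point is simply that the rubric, not the model, is the durable asset: once a company trusts its eval cases, it can swap or downgrade models as prices fall and keep routing each task to the cheapest model that still clears the bar.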
