The Best A.I. Startup Opportunities, with venture capitalist Rudina Seseri


How should an A.I. startup find product-market fit? How do some A.I. startups become spectacularly successful? The renowned (and highly technical!) A.I. venture-capital investor Rudina Seseri answers these questions and more in episode #750 of the SuperDataScience Podcast, hosted by our Chief Data Scientist, Jon Krohn.

Rudina:

• Is the Founder and Managing Partner of Glasswing Ventures in Boston.

• Has led investments in, and/or served on the boards of, more than a dozen SaaS startups, many of which were acquired.

• Was named Startup Boston’s 2022 “Investor of the Year”, among many other formal recognitions.

• Is a sought-after keynote speaker on investing in A.I. startups.

• Is an Executive Fellow at Harvard Business School.

• Holds an MBA from Harvard University.

Today’s episode will be interesting to anyone who’s keen on scaling their impact with A.I., particularly through A.I. startups or investment.

In this episode, Rudina details:

• How data are used to assess venture capital investments.

• What makes particular A.I. startups so spectacularly successful.

• Her “A.I. Palette” for examining categories of machine learning models and mapping them to categories of training data.

• How Generative A.I. isn’t a fad, yet it is still only one component of the impact that A.I. more broadly can make.

• The automated systems she has built for staying up to date on all of the most impactful A.I. developments.

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.


Getting Value From A.I.

In February 2023, our Chief Data Scientist, Jon Krohn, delivered a keynote on “Getting Value from A.I.” to open the second day of Hg Capital’s “Digital Forum” in London.


The Chinchilla Scaling Laws

The Chinchilla Scaling Laws specify how much training data is needed to optimally train a Large Language Model (LLM) of a given size: roughly 20 training tokens per model parameter. For Five-Minute Friday, our Chief Data Scientist, Jon Krohn, covers this ratio and the LLMs that have arisen from it.
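To make the ratio concrete, here is a minimal Python sketch of the Chinchilla rule of thumb; the ~20-tokens-per-parameter figure is the widely cited approximation from Hoffmann et al.’s 2022 Chinchilla paper, and the function name is ours:

# A minimal sketch of the Chinchilla compute-optimal rule of thumb:
# roughly 20 training tokens per model parameter.
TOKENS_PER_PARAM = 20  # approximate compute-optimal ratio

def chinchilla_optimal_tokens(n_params: float) -> float:
    """Approximate training tokens for a compute-optimal LLM of n_params parameters."""
    return TOKENS_PER_PARAM * n_params

# Chinchilla itself had 70B parameters and was trained on ~1.4T tokens,
# which is exactly this 20:1 ratio.
for n_params in (1e9, 7e9, 70e9):
    tokens = chinchilla_optimal_tokens(n_params)
    print(f"{n_params / 1e9:>4.0f}B params -> ~{tokens / 1e9:,.0f}B training tokens")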


StableLM: Open-Source “ChatGPT”-Like LLMs You Can Fit on One GPU

The folks who open-sourced Stable Diffusion have now released “StableLM”, their first language models. Pre-trained on an unprecedented amount of data for single-GPU LLMs (1.5 trillion tokens!), these models are small but mighty.
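For readers who want to try one of these models, here is a hedged sketch of loading a StableLM checkpoint with the Hugging Face transformers library; the model ID below is our assumption based on Stability AI’s announced alpha releases, so confirm the exact name on their Hugging Face page before running:

# A minimal sketch of running a StableLM checkpoint with Hugging Face transformers.
# The model ID is an assumption based on Stability AI's announced alpha releases;
# verify the exact name on the stabilityai Hugging Face organization page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-base-alpha-7b"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # small enough for a single modern GPU

prompt = "Large language models are"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))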
