Open-Source “Responsible A.I.” Tools, with Ruth Yakubu

Data Science

In this SuperDataScience episode hosted by our Chief Data Scientist, Jon Krohn, Ruth Yakubu explains what Responsible A.I. is and covers open-source options for responsibly deploying A.I. models, particularly the Generative variety that is rapidly transforming industries.

• Has been a cloud expert at Microsoft for nearly seven years; for the past two, she’s been a Principal Cloud Advocate who specializes in A.I.
• Previously worked as a software engineer and manager at Accenture.
• Has been a featured speaker at major global conferences like Web Summit.
• Studied computer science at the University of Minnesota.

In this episode, Ruth details:
• The six principles that determine whether a given A.I. model is responsible.
• The open-source Responsible A.I. Toolbox that allows you to quickly assess how your model fares across a broad range of Responsible A.I. metrics.

The SuperDataScience podcast is available on all major podcasting platforms and on YouTube.


Getting Value From A.I.

In February 2023, our Chief Data Scientist, Jon Krohn, delivered this keynote on “Getting Value from A.I.” to open the second day of Hg Capital’s “Digital Forum” in London.


The Chinchilla Scaling Laws

The Chinchilla Scaling Laws dictate the amount of training data needed to optimally train a Large Language Model (LLM) of a given size. For Five-Minute Friday, our Chief Data Scientist, Jon Krohn, covers this ratio and the LLMs that have arisen from it.
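The headline heuristic from the Chinchilla work is roughly 20 training tokens per model parameter. A minimal sketch of that arithmetic (the 20:1 ratio is the commonly cited approximation, not an exact law):

```python
def chinchilla_optimal_tokens(n_params: int, tokens_per_param: int = 20) -> int:
    """Approximate compute-optimal training-token count for a model with
    n_params parameters, using the ~20 tokens-per-parameter heuristic
    popularized by the Chinchilla paper."""
    return n_params * tokens_per_param


# Chinchilla itself: a 70B-parameter model trained on ~1.4 trillion tokens.
print(chinchilla_optimal_tokens(70_000_000_000))  # 1400000000000
```

By this rule of thumb, training a larger model on too few tokens wastes compute; Chinchilla's 70B model trained on ~1.4T tokens outperformed much larger models trained on less data.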


StableLM: Open-Source “ChatGPT”-Like LLMs You Can Fit on One GPU

The folks who open-sourced Stable Diffusion have now released “StableLM”, their first suite of language models. Pre-trained on an unprecedented amount of data for single-GPU LLMs (1.5 trillion tokens!), these models are small but mighty.
