Make Better Decisions with Data, with Dr. Allen Downey

Data Science

This SuperDataScience episode, hosted by our Chief Data Scientist, Jon Krohn, and featuring many-time bestselling author Allen Downey, is incredible. Learn a ton from him about making better decisions with data, including how to prepare for Black Swan events and how your core beliefs will shift over the course of your life.

Allen:
• He is a Professor Emeritus at Olin College and a Curriculum Designer at the learning platform Brilliant.org.
• He was previously a Visiting Professor of Computer Science at Harvard University and a Visiting Scientist at Google.
• He has written 18 books, which he has made available for free online and which are also published in hard copy by major publishers; his “Think Python” and “Think Bayes”, for example, were bestsellers published by O’Reilly.
• His next book, “Probably Overthinking It”, is available for pre-order now.
• He holds a PhD in Computer Science from the University of California, Berkeley, and Bachelor’s and Master’s degrees from the Massachusetts Institute of Technology.

This episode focuses largely on content from Allen’s upcoming book — his first book intended for a lay audience — and so should appeal to anyone who’s keen to learn from an absolutely brilliant writer and speaker on “How to Use Data to Answer Questions, Avoid Statistical Traps, and Make Better Decisions.”

In this episode, Allen details:
• Underused techniques like Survival Analysis that can be uniquely powerful in many everyday circumstances (see the sketch after this list).
• How to better prepare for rare “Black Swan” events.
• How to wrap your head around common data paradoxes such as Preston’s Paradox, Berkson’s Paradox and Simpson’s Paradox.
• What the Overton Window is and how our core beliefs shift relative to it over the course of our lifetime (this is extra trippy).
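
As a taste of that first item, here is a minimal survival-analysis sketch (not from the episode): it fits a Kaplan-Meier estimator with the open-source lifelines library on hypothetical customer-churn data, where some observations are censored because the event never occurred during the study window.

```python
# Minimal Kaplan-Meier survival analysis with the lifelines library.
# The durations and event flags below are hypothetical toy data.
from lifelines import KaplanMeierFitter

# Months each customer was observed before churning or before the study ended
durations = [5, 6, 6, 2, 4, 4, 9, 12, 3, 7]
# 1 = churn observed; 0 = censored (customer still subscribed at study end)
events = [1, 0, 1, 1, 1, 0, 1, 0, 1, 1]

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events)

# Estimated probability a customer "survives" (stays subscribed) past each month
print(kmf.survival_function_)
print("Median survival time:", kmf.median_survival_time_)
```

Censoring is the key idea here: ordinary averages would treat still-subscribed customers as churned, while the Kaplan-Meier estimator uses their partial observation correctly.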

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.


Getting Value From A.I.

In February 2023, our Chief Data Scientist, Jon Krohn, delivered this keynote on “Getting Value from A.I.” to open the second day of Hg Capital’s “Digital Forum” in London.


The Chinchilla Scaling Laws

The Chinchilla Scaling Laws dictate the amount of training data needed to optimally train a Large Language Model (LLM) of a given size. For Five-Minute Friday, our Chief Data Scientist, Jon Krohn, covers this ratio and the LLMs that have arisen from it.
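
As rough worked arithmetic: the Chinchilla paper (Hoffmann et al., 2022) found that a compute-optimal LLM should be trained on roughly 20 tokens per model parameter. The model sizes below are purely illustrative.

```python
# Back-of-the-envelope Chinchilla estimates: compute-optimal training data
# scales roughly linearly with model size, at about 20 tokens per parameter.
TOKENS_PER_PARAM = 20  # approximate ratio from Hoffmann et al. (2022)

for params in (1e9, 7e9, 70e9):  # illustrative model sizes (parameter counts)
    tokens = params * TOKENS_PER_PARAM
    print(f"{params / 1e9:>4.0f}B params -> ~{tokens / 1e9:,.0f}B training tokens")
```

So by this heuristic, a 70-billion-parameter model would call for roughly 1.4 trillion training tokens.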


StableLM: Open-Source “ChatGPT”-Like LLMs You Can Fit on One GPU

Stability AI, the folks who open-sourced Stable Diffusion, have now released “StableLM”, their first suite of open-source language models. Pre-trained on an unprecedented amount of data for single-GPU LLMs (1.5 trillion tokens!), these models are small but mighty.
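
If you'd like to try one of these models yourself, a sketch along the following lines should work with the Hugging Face transformers library; the stabilityai/stablelm-base-alpha-3b model id is an assumption and may differ from the current release.

```python
# Sketch: load a StableLM base model on a single GPU and generate text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-base-alpha-3b"  # assumed Hugging Face model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the model fits on one GPU
    device_map="auto",          # place the weights on the available GPU
)

inputs = tokenizer("Data science is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```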
