Unmasking A.I. Injustice, with Dr. Joy Buolamwini

In this episode of SuperDataScience, our Chief Data Scientist and host, Jon Krohn, is joined by the inimitable Dr. Joy Buolamwini, who reveals how she uncovered staggering racial and gender biases in widely used algorithms from Amazon, Microsoft and IBM; the firms’ varying (sometimes shocking) responses; and how to address these A.I. issues.

Joy has so many huge achievements that we struggled to pare them down, but here’s our best shot:
• During her Ph.D. at MIT, her research uncovered extensive racial and gender biases in the A.I. services of big-tech firms including Amazon, Microsoft and IBM.
• The documentary “Coded Bias”, which she stars in and which follows this research, has a remarkable 100% “fresh” rating on Rotten Tomatoes.
• Her TED Talk on algorithmic bias has over a million views.
• She founded The Algorithmic Justice League to create a world with more equitable and accountable technology.
• She has been recognized in the Bloomberg 50, MIT Technology Review’s 35 Under 35, Forbes 30 Under 30 and TIME Magazine’s A.I. 100, and she was the youngest person included in Forbes’ Top 50 Women in Tech.
• In addition to her MIT Ph.D., she holds a Master’s from the University of Oxford (where she studied as a Rhodes Scholar) and a Bachelor’s in Computer Science from the Georgia Institute of Technology.

This episode should be fascinating to just about anyone! In it, Joy details:
• The research that led her to uncover startling racial and gender biases in widely used commercial A.I. systems.
• How firms reacted to her discoveries, including which big tech companies were receptive and which were disparaging.
• What we can do to ensure our own A.I. models don’t reinforce historical stereotypes.
• Whether she thinks our A.I. future will be bleak or brilliant.

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

Getting Value From A.I.

In February 2023, our Chief Data Scientist, Jon Krohn, delivered this keynote on “Getting Value from A.I.” to open the second day of Hg Capital’s “Digital Forum” in London.

read full post

The Chinchilla Scaling Laws

The Chinchilla Scaling Laws specify the amount of training data needed to optimally train a Large Language Model (LLM) of a given size. For Five-Minute Friday, our Chief Data Scientist, Jon Krohn, covers this compute-optimal ratio of training tokens to model parameters and the LLMs that have arisen from it.
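
As a quick illustration (our sketch, not from the post itself): the Chinchilla paper (Hoffmann et al., 2022) found that compute-optimal training uses roughly 20 training tokens per model parameter. A minimal Python sketch of that rule of thumb:

```python
# Rough sketch of the Chinchilla rule of thumb: compute-optimal
# training uses roughly 20 training tokens per model parameter
# (Hoffmann et al., 2022). The 20:1 ratio is an approximation,
# not an exact law.

CHINCHILLA_TOKENS_PER_PARAM = 20  # approximate compute-optimal ratio

def optimal_training_tokens(n_params: float) -> float:
    """Approximate compute-optimal training-token count for an
    LLM with `n_params` parameters."""
    return CHINCHILLA_TOKENS_PER_PARAM * n_params

# Chinchilla itself: 70 billion parameters trained on ~1.4 trillion tokens
print(f"{optimal_training_tokens(70e9):.2e} tokens")  # ~1.40e+12
```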

read full post

StableLM: Open-Source “ChatGPT”-Like LLMs You Can Fit on One GPU

The folks who open-sourced Stable Diffusion (Stability AI) have now released “StableLM”, their first language models. Pre-trained on an unprecedented amount of data for single-GPU LLMs (1.5 trillion tokens!), these models are small but mighty.
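
For a hypothetical illustration of what “fits on one GPU” looks like in practice (this snippet is our sketch, not from Stability AI’s announcement): assuming the `stabilityai/stablelm-base-alpha-7b` checkpoint ID on Hugging Face and the `transformers` library, loading and sampling from the model might look like this:

```python
# Hypothetical sketch: loading a StableLM checkpoint with Hugging Face
# transformers. The model ID below is an assumption based on Stability
# AI's Hugging Face releases; swap in the checkpoint you actually want.
# device_map="auto" requires the `accelerate` package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-base-alpha-7b"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit on a single GPU
    device_map="auto",          # place layers on the available GPU(s)
)

inputs = tokenizer("The Chinchilla scaling laws say", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```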

read full post