Quantum Machine Learning, with Dr. Amira Abbas

Data Science

Brilliant, eloquent Dr. Amira Abbas introduces us to Quantum Machine Learning in this episode of SuperDataScience, hosted by our Chief Data Scientist, Jon Krohn. She details the key concepts (like qubits), what’s possible today (e.g., Quantum SVMs), and what the future holds (e.g., Quantum Neural Networks).

Amira:
• Is a postdoctoral researcher at the University of Amsterdam as well as QuSoft, a world-leading quantum-computing research institution also in the Netherlands.
• Was previously on the Google Quantum A.I. team and did Quantum ML research at IBM.
• Holds a PhD in Quantum ML from the University of KwaZulu-Natal, during which she was a recipient of Google’s PhD fellowship.

Much of this episode will be fascinating to anyone interested in how quantum computing is being applied to machine learning; there are, however, some relatively technical parts of the conversation that might be best-suited to folks who already have some familiarity with ML.

In this episode, Amira details:
• What Quantum Computing is, how it’s different from the classical computing that dominates the world today, and where quantum computing excels relative to its classical cousin.
• Key terms such as qubits, quantum entanglement, quantum data and quantum memory.
• Where Quantum ML shows promise today and where it might in the coming years.
• How to get started in Quantum ML research yourself.
• Today’s leading software libraries for Quantum ML (a brief illustrative sketch follows this list).
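To give newcomers a concrete feel for qubits and entanglement before listening, here is a minimal sketch in Python using Qiskit, one of the widely used quantum-computing libraries. The snippet is our own illustration (not from the episode) and assumes Qiskit is installed locally; it prepares an entangled two-qubit Bell state:

    # A minimal sketch, assuming Qiskit is installed (e.g., pip install qiskit).
    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    circuit = QuantumCircuit(2)   # two qubits, both starting in state |0>
    circuit.h(0)                  # Hadamard gate puts qubit 0 into superposition
    circuit.cx(0, 1)              # CNOT gate entangles qubit 0 with qubit 1

    state = Statevector(circuit)  # classically simulate the resulting state
    print(state)                  # amplitudes of the Bell state (|00> + |11>)/sqrt(2)

Once the two qubits are entangled like this, measuring either one determines the outcome of measuring the other; entanglement of this kind is one of the resources Quantum ML methods draw on.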

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.


Getting Value From A.I.

In February 2023, our Chief Data Scientist, Jon Krohn, delivered this keynote on “Getting Value from A.I.” to open the second day of Hg Capital’s “Digital Forum” in London.

read full post

The Chinchilla Scaling Laws

The Chinchilla Scaling Laws dictate the amount of training data needed to optimally train a Large Language Model (LLM) of a given size. For Five-Minute Friday, our Chief Data Scientist, Jon Krohn, covers this ratio and the LLMs that have arisen from it.
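As a rough, back-of-the-envelope illustration: the commonly cited rule of thumb from the Chinchilla results is roughly 20 training tokens per model parameter (the paper itself fits exact coefficients, so treat the constant below as an approximation):

    # Rough Chinchilla-style estimate: ~20 training tokens per model parameter.
    TOKENS_PER_PARAMETER = 20  # approximate rule of thumb, not the paper's exact fit

    def compute_optimal_tokens(n_parameters: float) -> float:
        """Approximate compute-optimal training-set size (in tokens) for a model."""
        return TOKENS_PER_PARAMETER * n_parameters

    # Chinchilla itself was a 70B-parameter model trained on roughly 1.4 trillion tokens:
    print(f"{compute_optimal_tokens(70e9) / 1e12:.1f} trillion tokens")  # -> 1.4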

read full post

StableLM: Open-Source “ChatGPT”-Like LLMs You Can Fit on One GPU

The folks who open-sourced Stable Diffusion have now released “StableLM”, their first language models. Pre-trained on an unprecedented amount of data for single-GPU LLMs (1.5 trillion tokens!), these models are small but mighty.
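As an illustration of what “fits on one GPU” can look like in practice, here is a minimal, unofficial sketch of loading a StableLM checkpoint with Hugging Face transformers. The model ID below assumes the 7B base alpha checkpoint as published on the Hugging Face Hub; check the Hub for the current model names:

    # A minimal sketch (not an official example); assumes a CUDA GPU with enough memory
    # for a 7B model in half precision, plus the torch and transformers packages.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "stabilityai/stablelm-base-alpha-7b"  # assumed Hub name for the 7B base model
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # half precision so the 7B model fits on a single GPU
    ).to("cuda")

    inputs = tokenizer("Quantum machine learning is", return_tensors="pt").to("cuda")
    outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.7)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))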

read full post