How A.I. Is Transforming Science

Data Science

A.I. is not just a tool but a driving force reshaping the landscape of science. In this SuperDataScience episode, our Chief Data Scientist, Jon Krohn, dives into the profound implications A.I. holds for scientific discovery, citing applications across nuclear fusion, medicine, self-driving labs, and more.

Here are some of the ways A.I. is transforming science that are covered in this episode:

Antibiotics: MIT researchers uncovered two new antibiotics in a single year (a remarkable feat, given how rarely new antibiotics are discovered) by using an ML model trained on the efficacy of known antibiotics to sift through millions of potential antibiotic compounds. A minimal sketch of this kind of screening workflow appears after this list.

Batteries: Researchers at the University of Liverpool applied similar A.I.-driven sifting to narrow the search for battery materials from 200,000 candidates down to just five highly promising ones.

Weather: Huawei’s Pangu-Weather and NVIDIA’s FourCastNet use ML to offer faster and more accurate forecasts than traditional, supercomputer-intensive numerical weather simulations, which is crucial for predicting and managing natural disasters.

Nuclear Fusion: A.I. is simplifying the once-daunting task of controlling plasma in tokamak reactors, thereby contributing to advancements in clean energy production.

Self-Driving Labs: These labs automate research by planning, executing, and analyzing experiments autonomously, thereby speeding up scientific experimentation and unveiling new possibilities for discovery (see the toy closed-loop sketch after this list).

Generative A.I.: Tools built on Large Language Models (LLMs) are pioneering new frontiers in scientific research. From improving image resolution to designing novel molecules, these tools are yielding tangible results, with several A.I.-designed drugs currently in clinical trials. Tools like Elicit are streamlining scientific literature review over vast corpora, automatically uncovering connections within or between fields and suggesting new research directions.
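
To make the "sifting" in the Antibiotics example concrete, here is a minimal Python sketch of a virtual-screening workflow: train a classifier on compounds with known efficacy, then rank a large candidate library by predicted activity. All data and features here are synthetic stand-ins for illustration, not MIT's actual model or pipeline.

```python
# Minimal virtual-screening sketch: train on compounds with known efficacy,
# then rank a large candidate library. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Stand-in for known compounds: each row is a molecular feature vector
# (e.g., a fingerprint); each label marks measured antibiotic efficacy.
known_features = rng.random((2_000, 128))
known_efficacy = (known_features[:, 0] + rng.normal(0.0, 0.1, 2_000) > 0.8).astype(int)

# Train a classifier on the known compounds.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(known_features, known_efficacy)

# "Sift" a large candidate library: score every compound by predicted activity
# and keep only the top hits for wet-lab validation.
candidate_features = rng.random((100_000, 128))
scores = model.predict_proba(candidate_features)[:, 1]
top_hits = np.argsort(scores)[::-1][:100]
print("Top candidates for lab testing:", top_hits[:10])
```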
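
And for the Self-Driving Labs example, here is a toy closed loop in the same spirit: plan an experiment, execute it, analyze the result, and repeat. Every function below is a hypothetical stand-in; real systems typically pair Bayesian optimization with robotic hardware rather than this simple heuristic.

```python
# Toy sketch of a self-driving lab's closed loop: plan, execute, analyze, repeat.
# Every function is a hypothetical stand-in for real planning software and hardware.
import random

def plan_experiment(history):
    """Propose the next condition to test (real labs often use Bayesian optimization)."""
    if not history or random.random() < 0.2:   # explore occasionally
        return random.uniform(0.0, 1.0)
    best_x, _ = max(history, key=lambda h: h[1])
    return min(1.0, max(0.0, best_x + random.gauss(0.0, 0.1)))  # exploit best so far

def execute_experiment(x):
    """Stand-in for robotic hardware running the experiment at condition x."""
    true_optimum = 0.7                          # unknown to the planner
    return -(x - true_optimum) ** 2 + random.gauss(0.0, 0.01)  # noisy measured yield

def analyze(history):
    """Identify the best condition observed so far."""
    return max(history, key=lambda h: h[1])

history = []
for _ in range(30):
    x = plan_experiment(history)   # plan
    y = execute_experiment(x)      # execute
    history.append((x, y))         # record
best_x, best_y = analyze(history)  # analyze
print(f"Best condition found: x={best_x:.2f} (measured yield {best_y:.3f})")
```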

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

 

Getting Value From A.I.

In February 2023, our Chief Data Scientist, Jon Krohn, delivered this keynote on “Getting Value from A.I.” to open the second day of Hg Capital’s “Digital Forum” in London.

read full post

The Chinchilla Scaling Laws

The Chinchilla Scaling Laws dictate the amount of training data needed to optimally train a Large Language Model (LLM) of a given size. For Five-Minute Friday, our Chief Data Scientist, Jon Krohn, covers this ratio and the LLMs that have arisen from it.
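
As a rough rule of thumb from the Chinchilla paper (Hoffmann et al., 2022), compute-optimal training uses on the order of 20 tokens per model parameter. Here is a back-of-the-envelope sketch of that ratio in Python; the constant is an approximation, not an exact law.

```python
# Back-of-the-envelope Chinchilla sizing: roughly 20 training tokens per
# model parameter for compute-optimal training (Hoffmann et al., 2022).
TOKENS_PER_PARAM = 20  # approximate rule of thumb, not an exact law

def chinchilla_optimal_tokens(n_params: float) -> float:
    """Approximate compute-optimal training-token count for a given model size."""
    return TOKENS_PER_PARAM * n_params

for n_params in (1e9, 7e9, 70e9):
    tokens = chinchilla_optimal_tokens(n_params)
    print(f"{n_params / 1e9:>4.0f}B params -> ~{tokens / 1e9:,.0f}B tokens")
# Chinchilla itself paired 70B parameters with roughly 1.4T training tokens.
```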

read full post

StableLM: Open-Source “ChatGPT”-Like LLMs You Can Fit on One GPU

Stability AI, the folks who open-sourced Stable Diffusion, have now released “StableLM”, their first suite of language models. Pre-trained on an unprecedented amount of data for single-GPU LLMs (1.5 trillion tokens!), these models are small but mighty.

read full post