Harnessing GPT-4 for Your Commercial Advantage

The second episode in our Chief Data Scientist Jon Krohn’s trilogy on GPT-4 is dedicated to how you can leverage GPT-4 to your commercial benefit. In it, he’s joined by Vin Vashishta, perhaps the best person on the planet to cover A.I. monetization.

Vin:
• Is Founder of V Squared, a consultancy that specializes in monetizing machine learning by helping Fortune 100 companies with A.I. strategy.
• Is the creator of a four-hour course on “GPT Monetization Strategy” which teaches how to build new A.I. products, startups, and business models with GPT models like ChatGPT and GPT-4.
• Is the author of the forthcoming book “From Data To Profit: How Businesses Leverage Data to Grow Their Top and Bottom Lines”, which will be published by Wiley.

This episode will be broadly appealing to anyone who’d like to drive commercial value with the powerful GPT-4 model that is taking the world by storm.

In this episode, Vin details:
• What makes GPT-4 so much more commercially useful than any previous A.I. model.
• The levels of A.I. capability that have been unleashed by GPT-4 and how we can automate or augment specific types of human tasks with these new capabilities.
• The characteristics that enable individuals and organizations to take best advantage of foundation models like GPT-4 and thereby overtake their competitors commercially.

The SuperDataScience GPT-4 trilogy comprises:
#666: a ten-minute GPT-4 overview by Jon.
#667: GPT-4 commercial opportunities.
#668: world-leading A.I.-safety expert Jeremie Harris joins Jon to detail the (existential!) risks of GPT-4 and the models it paves the way for.

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

Getting Value From A.I.

In February 2023, our Chief Data Scientist, Jon Krohn, delivered this keynote on “Getting Value from A.I.” to open the second day of Hg Capital’s “Digital Forum” in London.

The Chinchilla Scaling Laws

The Chinchilla Scaling Laws dictate the amount of training data needed to optimally train a Large Language Model (LLM) of a given size. For Five-Minute Friday, our Chief Data Scientist, Jon Krohn, covers this compute-optimal ratio of training tokens to model parameters and the LLMs that have arisen from it.

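As a back-of-the-envelope illustration of that ratio, here is a minimal Python sketch assuming the commonly cited Chinchilla rule of thumb of roughly 20 training tokens per model parameter; the exact compute-optimal ratio depends on the compute budget, so treat this as an approximation rather than the paper’s full formula:

    # Back-of-the-envelope Chinchilla-optimal token budget.
    # Assumes the ~20-tokens-per-parameter heuristic from the
    # Chinchilla paper; the exact ratio varies with compute budget.
    TOKENS_PER_PARAM = 20

    def chinchilla_optimal_tokens(n_params: float) -> float:
        """Approximate compute-optimal number of training tokens."""
        return TOKENS_PER_PARAM * n_params

    # Example: a 70-billion-parameter model (Chinchilla's own size)
    print(f"{chinchilla_optimal_tokens(70e9):.2e} tokens")  # 1.40e+12 tokens

Plugging in 70 billion parameters, Chinchilla’s own size, yields roughly 1.4 trillion tokens, which matches the dataset that model was actually trained on.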

StableLM: Open-Source “ChatGPT”-Like LLMs You Can Fit on One GPU

Stability AI, the folks who open-sourced Stable Diffusion, have now released “StableLM”, their first suite of language models. Pre-trained on an unprecedented amount of data for single-GPU LLMs (1.5 trillion tokens!), these models are small but mighty.

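If you’d like to try these models yourself, below is a minimal text-generation sketch using the Hugging Face transformers library. The checkpoint name is an assumption based on Stability AI’s Hub naming conventions, so verify the exact model ID on the Hugging Face Hub before running:

    # Minimal StableLM text-generation sketch (not official docs).
    # Requires: pip install torch transformers accelerate
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Assumed Hugging Face Hub ID; check the Hub for exact names.
    model_id = "stabilityai/stablelm-base-alpha-3b"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # half precision to fit on one GPU
        device_map="auto",          # let accelerate place the layers
    )

    prompt = "Large language models are"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs, max_new_tokens=50, do_sample=True, temperature=0.7
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Half precision plus a 3B-parameter checkpoint keeps the memory footprint within reach of a single consumer GPU, which is exactly the niche these models target.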