The (Short) Path to Artificial General Intelligence, with Dr. Ben Goertzel


The luminary Dr. Ben Goertzel details how we could realize Artificial General Intelligence (AGI) in 3-7 years, why he’s optimistic about the Artificial Superintelligence (ASI) that AGI would trigger, and what post-singularity society could be like.

Dr. Goertzel:
• Is CEO of SingularityNET, a decentralized open market for A.I. models that aims to bring about AGI and thus the singularity that would transform society beyond all recognition.
• Has been Chairman of The AGI Society for 14 years.
• Has been Chairman of the foundation behind OpenCog — an open-source AGI framework — for 16 years.
• Was previously Chief Scientist at Hanson Robotics Limited, the company behind Sophia, the world’s most recognizable humanoid robot.
• Holds a PhD in mathematics from Temple University and held tenure-track professorships prior to transitioning to industry.

This SuperDataScience episode, hosted by our Chief Data Scientist, Jon Krohn, has parts that are relatively technical, but much of it will appeal to anyone who wants to understand how AGI — a machine that has all of the cognitive capabilities of a human — could be brought about and the world-changing impact that would have.

In the episode, Ben details:
• The specific approaches that, in his view, could be integrated with deep learning to realize AGI in as few as 3-7 years.
• Why the development of AGI would near-instantly trigger the development of ASI — a machine with intellectual capabilities far beyond humans’.
• Why, despite triggering the singularity — beyond which we cannot make confident predictions about the future — he’s optimistic that AGI will be a positive development for humankind.
• The connections between self-awareness, consciousness, and the ASI of the future.
• With admittedly wide error bars, what a society that includes ASI may look like.

The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.


Getting Value From A.I.

In February 2023, our Chief Data Scientist, Jon Krohn, delivered this keynote on “Getting Value from A.I.” to open the second day of Hg Capital’s “Digital Forum” in London.

read full post

The Chinchilla Scaling Laws

The Chinchilla Scaling Laws dictate the amount of training data needed to optimally train a Large Language Model (LLM) of a given size. For Five-Minute Friday, our Chief Data Scientist, Jon Krohn, covers this tokens-to-parameters ratio and the LLMs that have arisen from it.
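As a rough companion to the episode, here is a minimal Python sketch of the rule of thumb most often quoted from the Chinchilla paper (Hoffmann et al., 2022): a compute-optimal LLM should see roughly 20 training tokens per model parameter. The constant below is an approximation for illustration, not the paper's precise fitted law.

```python
# Illustrative sketch of the Chinchilla rule of thumb:
# a compute-optimal LLM is trained on roughly 20 tokens per parameter.
# The exact coefficients in Hoffmann et al. (2022) differ slightly,
# so treat these figures as ballpark estimates.

TOKENS_PER_PARAM = 20  # approximate compute-optimal ratio


def chinchilla_optimal_tokens(n_params: float) -> float:
    """Approximate number of training tokens for a model with n_params parameters."""
    return TOKENS_PER_PARAM * n_params


for n_params in (1e9, 7e9, 70e9):  # 1B, 7B, 70B parameters
    tokens = chinchilla_optimal_tokens(n_params)
    print(f"{n_params / 1e9:>5.0f}B params -> ~{tokens / 1e12:.2f}T training tokens")

# Approximate output:
#     1B params -> ~0.02T training tokens
#     7B params -> ~0.14T training tokens
#    70B params -> ~1.40T training tokens
```

The 70B-parameter row lands near Chinchilla's own 1.4-trillion-token training run, which is what made the ratio so memorable.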

read full post

StableLM: Open-Source “ChatGPT”-Like LLMs You Can Fit on One GPU

The folks who open-sourced Stable Diffusion have now released “StableLM”, their first suite of language models. Pre-trained on an unprecedented amount of data for single-GPU LLMs (1.5 trillion tokens!), these models are small but mighty.
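For readers who want to try one of these models, the sketch below shows one way to load a StableLM checkpoint on a single GPU in half precision with the Hugging Face transformers library. The repository name shown (stabilityai/stablelm-base-alpha-7b) and the generation settings are assumptions for illustration; check Stability AI's Hugging Face page for the exact checkpoint names and recommended usage.

```python
# Hypothetical sketch: loading a StableLM checkpoint on a single GPU in half
# precision with Hugging Face transformers. The model ID below is an assumption;
# confirm the exact repository name before running.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-base-alpha-7b"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the model fits on one GPU
).to("cuda")

prompt = "Large language models that fit on a single GPU are useful because"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```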

read full post