
A Conversation on the Future of AI with Stephen Wolfram

By: John Dalton

Generative AI is all the rage now. Coverage of tools like ChatGPT, DALL-E 2, and Midjourney has been breathless, with pundits and practitioners predicting either doom or a new golden age for computing. What is really going on here?



To find out, FCAT recently hosted a conversation with Stephen Wolfram, CEO of Wolfram Research. Creator of Mathematica, Wolfram|Alpha, and the Wolfram Language, and author of “A New Kind of Science” among other works, Stephen has played a critical role in developing the vision of computational knowledge for decades. John Dalton, VP of Research at FCAT, spoke with Stephen about generative AI and what we might expect going forward.

  1. Hi Stephen, thanks for joining us. What makes generative AI models important?

There are two things to understand here. One is that it can handle natural language as an interface modality. The other is that it can inject enough common sense into dealing with that natural language. Fundamentally, its day job is to complete pieces of text. So you say, “The cat sat on the . . .” and the question is, what's going to be the next word? How do you figure out what the next word should be? Well, it’s ingested about 4 billion web pages, a decent fraction of all the public web pages out there. And if you look across all those web pages, the chances are that wherever the text says “the cat sat on the [blank],” there will be a large number of instances where the next word is “mat,” and so just from ingesting those web pages it can deduce that the next word is “mat.”
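To make that “day job” concrete, here is a minimal sketch in Python of the statistical idea: tally which words follow a given prefix in a corpus, then pick the most frequent continuation. (The toy corpus and the simple trigram counting are our own illustration; real LLMs learn far richer models than raw counts.)

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "billions of web pages" (illustrative only).
corpus = [
    "the cat sat on the mat",
    "the cat sat on the mat again",
    "the dog sat on the rug",
    "the cat slept on the mat",
]

# Count how often each word follows each two-word prefix (a trigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 2):
        follows[(words[i], words[i + 1])][words[i + 2]] += 1

def next_word(w1, w2):
    """Return the statistically most likely next word, or None if unseen."""
    candidates = follows.get((w1, w2))
    return candidates.most_common(1)[0][0] if candidates else None

print(next_word("on", "the"))  # -> 'mat', the most common continuation
```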

But you can only go so far with that, because there isn't enough data on the web to handle the typical things where you say, you know, “in my portfolio I have this and not that.” You won't find a particular collection of sentences on the web containing exactly those words, such that you can deduce statistically what the next word should be. So you have to have some kind of model, some way of extrapolating beyond the mere content of the web. The surprising thing is that neural nets successfully do that, and in a way that seems similar to how humans do it.

The whole ability to deal fluently with human natural language, and in large lumps, is something new and quite unexpected. An earlier big breakthrough was image recognition. It looked like things were getting better, but nobody really expected the level of fluency in dealing with natural language that we see in ChatGPT. In a sense that may not be so surprising, because the architecture of a neural net was originally inspired by what people observed in actual brains, with neurons and the connections between neurons and so on. Even so, getting something that will fluently reproduce a model of how humans seem to generate natural language, getting these results with 175 billion connections, a few million neurons, you know, 400 layers of neural networks, that is new.
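For a sense of what those neurons, connections, and layers are, here is a deliberately tiny sketch in Python (random weights, purely illustrative): each layer is a grid of weighted connections feeding a simple nonlinearity, and a system like ChatGPT stacks hundreds of such layers with billions of weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# One "layer": every input neuron connects to every output neuron,
# and each connection carries a weight (random here, learned in practice).
def layer(x, weights, biases):
    return np.tanh(weights @ x + biases)  # a loosely neuron-like nonlinearity

x = rng.normal(size=4)                                # 4 "input neurons"
w1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)  # 4x8 = 32 connections
w2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)  # 8x2 = 16 connections

hidden = layer(x, w1, b1)       # first layer of neurons
output = layer(hidden, w2, b2)  # second layer
print(output)  # GPT-3-class models stack hundreds of such layers, with on
               # the order of 175 billion weights instead of the 48 here
```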

  2. And these rules about how human languages work, certain generative AI models are building those on their own, right?

Computation, as a general matter, is about taking systems of rules and applying them. There's a style of computation that replicates what we humans do and have written about on our web pages. And there's a style of computation that just goes off into the uncharted computational universe. That's where ultimate computational creativity comes from. There's a vast ocean of possible systems of rules that has no alignment with the things we happen to have written on those billions of web pages.

In a sense, a lot of the advance of knowledge has to do with identification: there's some repeated thing happening, we identify it, we give it a word, and we start thinking in terms of it. Once we had telescopes, we could see out into the distance, and then we could ponder the universe. Similarly, we have paradigms for thinking about things. Now we can think about things in terms of computation, and that could extend our domain into this ocean of computational possibility.

For example, with large language models we might be able to see relationships or analogies between domains of knowledge that we’ve not seen before. Maybe some areas like metamathematics have deep analogies to some area of physics that we’ve never considered. These large language model type systems will be able to make some of those grand analogies. Why? Well, it's because they've read lots of language about mathematics. They've read lots of language about physics. The sort of pattern of what's happening on one side is the same as the pattern of what's happening on the other side. So we can swap out the words for mathematics. We can put in the words for physics, and by golly they’re saying the same things!
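As a toy illustration of that “same pattern, different words” idea, here is a sketch with made-up co-occurrence vectors (the words, context terms, and numbers are all invented for this example): if the geometry of one domain's word vectors mirrors another's, an analogy falls out of simple vector arithmetic, much as in classic word-embedding demos.

```python
import numpy as np

# Hypothetical "co-occurrence" vectors: each word is described by how often
# it appears near four context terms: derive, prove, observe, measure.
vectors = {
    "axiom":      np.array([0.9, 0.8, 0.1, 0.1]),  # mathematics side
    "theorem":    np.array([0.8, 0.9, 0.2, 0.1]),
    "law":        np.array([0.2, 0.1, 0.9, 0.8]),  # physics side
    "prediction": np.array([0.1, 0.2, 0.8, 0.9]),
    "equation":   np.array([0.5, 0.5, 0.5, 0.5]),  # a distractor
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "axiom is to theorem as law is to X" -- solve by vector offset.
target = vectors["theorem"] - vectors["axiom"] + vectors["law"]
best = max((w for w in vectors if w not in ("theorem", "axiom", "law")),
           key=lambda w: cosine(vectors[w], target))
print(best)  # -> 'prediction': the mathematics pattern, with physics words
```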

  3. I like the term “ocean of computational possibility.” A lot of this stuff we find out there we won’t understand, much less be able to predict, right?

The concept of computational irreducibility is one that I started working on in the 1980s. Here’s the point. Let’s say you’ve got some computational rules by which a system should operate. You might say, “Great, I’m done, I know what the system is going to do.” Well, that’s not so, because although you can run the rule for a million steps, you can’t predict in advance what’s going to happen. There’s no way of reducing the computational work below just following those steps to see what happens. So you can’t say for certain that after a billion steps the thing won’t go crazy. You have to just follow each step and see what happens. That’s the idea of computational irreducibility.
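One way to get a feel for this is through Rule 30, the elementary cellular automaton Stephen often uses to illustrate the point. Here is a minimal Python sketch; as far as anyone knows, the only way to learn the pattern at step n is to run all n steps.

```python
# Rule 30: each cell's next state is left XOR (center OR right).
# Despite the one-line rule, no known shortcut predicts the long-run pattern.
def rule30_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

cells = [0] * 31
cells[15] = 1  # start from a single "on" cell
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = rule30_step(cells)
# To know the state at step 1,000,000, you have to iterate 1,000,000 times.
```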

  4. That sounds tricky, especially in a regulated environment like financial services, or for that matter in any situation where the stakes are really high.

Yes, you have a choice. You can say, “No, I don’t want to use a computational system that shows computational irreducibility. I want a system where I can jump ahead and know what it’s going to do after a billion steps.” But if you do that, you cannot use the full power of a computational system that is figuring out things you couldn’t figure out yourself. You can’t both guarantee it won’t do something bad and get the full advantage of its computational capabilities. If you’re talking about launching a rocket to the moon, it’s a bad idea to use this kind of computing. But if you’re searching for something where you know that finding it would be a big win, and the only cost of not finding it is tolerating some goofy results, then it’s no big deal.

  5. Given how facile these large language models (LLMs) already are with natural language, I assume they will develop their own languages over time, and that our relationship with these systems is going to be very different from what we’re accustomed to.

They’ve already developed their own language. Think about the human brain. Every human brain works in a slightly different way. The fact that we can communicate is a consequence of the fact that we can package up our ideas in the robust form of human language, words and so on. Those are the robust transmission medium from one mind with one particular structure to another mind with some other structure. I think you’ll have the same thing between two AIs. They won’t be exactly the same, but they’ll be able to communicate.

And we’re going to need people who can listen to and understand these systems, who have a human narrative about what they’re doing. There will be new job categories, like AI wranglers and AI psychologists. We’ve already got prompt engineers. For example, look at image-generation AIs. If you tell one to make a picture of a cat in a party hat, it can do that. Then you start to make changes in the prompt, and the cat starts distorting in weird ways. Keep pushing and eventually it loses it completely and creates some weird kind of pseudo-cat. This is where it feels very much like early natural science or early biology: what on earth is going on? In a sense, science is the effort to take what exists in nature and turn it into something that you can stuff into a human mind, so to speak, so that we can have an understandable story about it. We’ve got the same thing here. AIs and LLMs are a new thing in nature, and we’re going to need a new science for them.

John Dalton is VP of Research at FCAT, where he studies emerging interfaces (augmented reality, virtual reality, speech, gesture, biometrics), socioeconomic trends, and deep technologies like synthetic biology and robotics.