- cross-posted to:
- [email protected]
Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, a study finds. Researchers found wild fluctuations, called drift, in the technology's abi…
Can someone explain why they don't take an approach where things are somewhat compartmentalized? So you'd have an image-processing program, a math program, a music program, etc., and, like the human brain, cross-talk between them but also dedicated parts for specific tasks.
That's an eventual goal, which would be an artificial general intelligence (AGI). Different kinds of AI models for (at least some of) the things you named already exist; it's just that OpenAI had all their eggs in the GPT/LLM basket, and GPTs deal with extrapolating text. It just so happened that with enough training data, their text prediction also started giving somewhat believable and sometimes factual answers (mixed in with plenty of believable bullshit). Other domains require different training data, different models, and different fine-tuning, hence why it takes time.
It's highly likely for a company of OpenAI's size (especially after all the positive marketing and potential funding they got from ChatGPT in its prime) that they already have multiple AI models for different kinds of data in research, training, or fine-tuning.
But even with all the individual pieces of an AGI existing, the technology to cross-reference the different models doesn't exist yet, because they store and express their data in different ways. It's not like training data exists for that task either. And unlike physical beings like humans, it doesn't have any way to "interact" and "experiment" with the data it knows to form concrete connections backed up by factual evidence.
It does do that; they're called expert subnetworks (a mixture-of-experts architecture), but they've been screwing with them and now they're kind of fucked.
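For anyone curious what "expert subnetworks" means mechanically: in a mixture-of-experts layer, a small gating network scores each expert for a given input and the layer routes the input to the top-scoring experts only. Here's a toy sketch of that routing idea; the weights are random placeholders and don't resemble GPT's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class ToyMoE:
    """Toy mixture-of-experts layer: gate scores decide which experts run."""

    def __init__(self, dim, n_experts):
        # Each "expert" is just a random linear map in this sketch.
        self.experts = [rng.normal(size=(dim, dim)) for _ in range(n_experts)]
        self.gate = rng.normal(size=(dim, n_experts))

    def forward(self, x, top_k=2):
        scores = softmax(x @ self.gate)      # how relevant each expert looks
        top = np.argsort(scores)[-top_k:]    # route to the top-k experts only
        out = sum(scores[i] * (self.experts[i] @ x) for i in top)
        return out / scores[top].sum()       # renormalize over the chosen experts

moe = ToyMoE(dim=8, n_experts=4)
y = moe.forward(rng.normal(size=8))
print(y.shape)  # (8,)
```

The point of the top-k routing is that only a fraction of the network runs per input, which is also why retraining or rebalancing the gate can noticeably change behavior on specific task types (like math) without touching the rest.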
Getting information into and out of those domains benefits from better language models. Suppose you have an excellent model for solving math problems: it's not very useful if it rarely understands the problem you're trying to solve correctly, or can't explain the solution to you in a meaningful way.
In a similar way, language models are already used today to infer from your question which model(s) might be useful in responding, to gather additional relevant information, and to repackage that information as suitable inputs to more specialized models or external systems.
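A crude sketch of that "model as router" pattern: classify the question, then dispatch it to a specialized handler instead of letting the language model guess. The keyword classifier below is a stand-in for a real language model, and the handlers are hypothetical.

```python
import re

def classify(question: str) -> str:
    # Stand-in for an LLM deciding which specialized system should answer.
    if re.search(r"\d\s*[-+*/]\s*\d", question):
        return "math"
    if "translate" in question.lower():
        return "translation"
    return "general"

def solve_math(question: str) -> str:
    # Specialized tool: evaluate the arithmetic exactly rather than
    # having a text predictor "guess" the answer.
    expr = re.search(r"\d[\d\s.+\-*/()]*", question).group()
    return str(eval(expr, {"__builtins__": {}}))  # toy only; never eval untrusted input

def answer(question: str) -> str:
    route = classify(question)
    if route == "math":
        return solve_math(question)
    return f"[{route} model would handle: {question!r}]"

print(answer("What is 17 * 23?"))  # routed to the exact math tool -> 391
```

The win here is that the language model only has to do what it's good at (understanding the request and phrasing the response), while the arithmetic itself is exact by construction.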
Someone with more knowledge may have a better response than me, but as far as I understand it, GPT-x (3.5 or 4) is what's called a "large language model": a neural network that predicts natural language. I don't believe AGI is the goal of OpenAI's product; I believe natural language processing and prediction is.
ChatGPT in particular is a product simply demonstrating the capability of the GPT models. While I'm sure OpenAI themselves could build out components of the interface to interact with discrete knowledge like math, making the LLM's output more accurate in many cases, it's my opinion that it would defeat the entire purpose of the product.
The fact that they have achieved what they have already is absolutely mind-boggling. I'm sure the precise solution you're talking about is on the horizon; I personally know several developers actively working on systems that mirror the thoughts you've expressed here.