Apple study reveals major AI flaw in OpenAI, Google, and Meta LLMs

Their reasoning skills may not be as advanced as they seem.

Researchers found some damning flaws in LLMs’ reasoning skills.
Credit: Jakub Porzycki / NurPhoto / Getty Images

Large Language Models (LLMs) may not be as smart as they seem, according to a study from Apple researchers.

LLMs from OpenAI, Google, Meta, and others have been touted for their impressive reasoning skills. But research suggests their purported intelligence may be closer to “sophisticated pattern matching” than “true logical reasoning.” Yep, even OpenAI’s o1 advanced reasoning model.

The most common benchmark for reasoning skills is a test called GSM8K, but since it’s so popular, there’s a risk of data contamination. That means LLMs might know the answers to the test because they were trained on those answers, not because of their inherent intelligence.

To test this, the researchers developed a new benchmark called GSM-Symbolic, which keeps the essence of the reasoning problems but changes the surface details, swapping names and numbers, adjusting complexity, and adding irrelevant information. What they discovered was surprising “fragility” in LLM performance. The study tested over 20 models, including OpenAI’s o1 and GPT-4o, Google’s Gemma 2, and Meta’s Llama 3, and every single model’s performance decreased when the variables were changed.
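To get a feel for what that kind of perturbation looks like, here is a minimal Python sketch of a symbolic template in the spirit of GSM-Symbolic: the logical structure of the problem stays fixed while names and numbers are re-sampled. The template, names, and value ranges below are made up for illustration and are not taken from the benchmark itself.

```python
import random

# Toy illustration of symbolic templating in the spirit of GSM-Symbolic:
# the problem's logic is fixed; only surface details change.
# Template, names, and ranges are hypothetical, not from the actual benchmark.
TEMPLATE = (
    "{name} picks {x} apples on Monday and {y} apples on Tuesday. "
    "On Wednesday, {name} picks twice as many as on Monday. "
    "How many apples does {name} have in total?"
)

def make_variant(rng: random.Random) -> tuple[str, int]:
    name = rng.choice(["Oliver", "Sofia", "Wei", "Amara"])
    x, y = rng.randint(10, 60), rng.randint(10, 60)
    question = TEMPLATE.format(name=name, x=x, y=y)
    answer = x + y + 2 * x  # the underlying arithmetic never changes
    return question, answer

rng = random.Random(0)
for _ in range(3):
    question, answer = make_variant(rng)
    print(question, "->", answer)
```

A model that genuinely reasons should score the same on every variant; the study’s point is that current LLMs don’t.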

Accuracy decreased by a few percentage points when just the names and numbers were changed. OpenAI’s models performed better than the open-source models tested, but the researchers still deemed the variance “non-negligible”: if the models truly reasoned through the problems, changing surface details shouldn’t have caused any drop at all. Things got really interesting, however, when researchers added “seemingly relevant but ultimately inconsequential statements” to the mix.

To test the hypothesis that LLMs relied more on pattern matching than actual reasoning, the study added superfluous phrases to math problems to see how the models would react. For example, “Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. On Sunday, he picks double the number of kiwis he did on Friday, but five of them were a bit smaller than average. How many kiwis does Oliver have?”

What resulted was a significant drop in performance across the board. OpenAI’s o1-preview fared the best, with accuracy dropping 17.5 percent. That’s still pretty bad, but not as bad as Microsoft’s Phi 3 model, which performed 65 percent worse.

In the kiwi example, the study found that LLMs tended to subtract the five smaller kiwis from the total without recognizing that kiwi size was irrelevant to the problem. This indicates that “models tend to convert statements to operations without truly understanding their meaning,” which supports the researchers’ hypothesis that LLMs look for patterns in reasoning problems rather than genuinely understanding the concepts.
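For the curious, here is the arithmetic spelled out as a tiny, purely illustrative Python snippet (not code from the study): the correct answer keeps all the kiwis, while the pattern-matching shortcut the researchers describe wrongly subtracts the five smaller ones.

```python
# Worked version of the kiwi example: correct arithmetic vs. the shortcut
# the paper describes, where the irrelevant size detail becomes a subtraction.
# Purely illustrative; not code from the study.
friday = 44
saturday = 58
sunday = 2 * friday                    # "double the number he did on Friday"

correct = friday + saturday + sunday   # 44 + 58 + 88 = 190
pattern_matched = correct - 5          # 185: size detail wrongly subtracted

print(correct, pattern_matched)        # prints: 190 185
```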

The study didn’t mince words about its findings. Testing models on the benchmark variant that includes irrelevant information “exposes a critical flaw in LLMs’ ability to genuinely understand mathematical concepts and discern relevant information for problem-solving.” It bears mentioning, however, that the authors of this study work for Apple, which is a major competitor of Google, Meta, and even OpenAI; although Apple and OpenAI have a partnership, Apple is also working on its own AI models.

That said, the LLMs’ apparent lack of formal reasoning skills can’t be ignored. Ultimately, it’s a good reminder to temper AI hype with healthy skepticism.

Cecily is a tech reporter at Mashable who covers AI, Apple, and emerging tech trends. Before getting her master’s degree at Columbia Journalism School, she spent several years working with startups and social impact businesses for Unreasonable Group and B Lab. Before that, she co-founded a startup consulting business for emerging entrepreneurial hubs in South America, Europe, and Asia. You can find her on Twitter at @cecily_mauran.
