Unlock the Secrets: Key Insights from Stanford University’s 2024 AI Index Report

Time2Future | AI Media
10 min read · May 28, 2024


The 500-page Report presents the main ideas and takeaways from tracking, collating, distilling, and visualizing data on the Artificial Intelligence industry and academia.

But what do they mean?

See the Time2Future AI Guide’s commentary on each of the Report’s main points.

Read us on Telegram

1. AI beats humans on some tasks, but not on all.

AI Index technical performance benchmarks vs. human performance

AI has outperformed humans in several areas, such as image classification, visual reasoning, and English understanding. However, it still lags on tougher tasks, such as competition-level mathematics, visual commonsense reasoning, and planning.

All these benchmarks describe and measure the two main modes of human reasoning: abstract reasoning (the left-hemisphere skill of understanding, manipulating, and generating abstract concepts expressed in words and numbers) and visual reasoning (the right-hemisphere skill of understanding, manipulating, and generating images).

In the speed and quality of abstract reasoning (the ability to understand words and numbers and to express a thought in words and numbers), AI approached and even partially surpassed humans by 2023. AI has now come close to surpassing humans in visual commonsense reasoning (the ability to understand an image and express a thought through images such as a picture, diagram, chart, graph, or scheme), and that is likely to happen this year. Why does it matter?

Visual thinking is a very important human skill. Humans get most of their information from visual images; our way of life is built on them. In humans, it is a trainable, teachable skill. The same turns out to be true for AI.

Thus, AI has now reached roughly human level in both main characteristics of human thinking (abstract thinking and visual thinking).

This does not mean that AI is, or can be, better than humans, or that AI operates on some other “different” levels or with “different” characteristics. It means that AI gets things right faster and more often than the majority of humans: AI wins on productivity, but it still thinks like a human.

2. Industry continues to dominate frontier AI research.

Number of notable machine learning models by sector

Almost 60% of all notable machine learning models produced in 2023 came from industry.

This means that private companies exploit the first-mover effect and are not willing to give up the benefits of a technological lead. The usual rule applies: at the beginning of any race, including the rollout of a new technology, it is easier to become the leader, and, given resources, easier still to widen the gap and consolidate the position, which brings the power to dictate the rules of the game while everyone else plays catch-up. Technological leadership, especially early in a technology’s active market penetration, usually yields the maximum monetary return. Academia and government are objectively less motivated by monetary gain, so they do not seek to dominate. This pattern is typical of every technology that has already reached the real market.

Hence two conclusions:

  • AI is a “money-making” technology that is already in the market and already part of the economy;
  • The dominance of industry in the further development and improvement of AI will increase.

In addition, there is a second reason for industry’s dominance over academia and the public sector, and it is a consequence of the first: the cost of developing the technology.

And that’s the next point…

3. Frontier models get way more expensive.

Estimated training cost of select AI models

AI Index estimates show that training state-of-the-art AI models has become extremely expensive. For instance, it’s estimated that OpenAI’s GPT-4 required around $78 million for training compute, and Google’s Gemini Ultra cost about $191 million for compute.
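
For a sense of where numbers on this scale come from: the report’s estimates are based on training compute and hardware costs, and a minimal back-of-envelope sketch in Python looks like this. The GPU count, rental price, and run length below are purely illustrative assumptions, not figures from the report.

```python
# Back-of-envelope estimate of frontier-model training cost.
# All inputs are illustrative assumptions, NOT figures from the AI Index report.

def training_cost(num_gpus: int, hours: float, price_per_gpu_hour: float) -> float:
    """Cost = number of GPUs x training hours x rental price per GPU-hour."""
    return num_gpus * hours * price_per_gpu_hour

# Hypothetical run: 20,000 accelerators for ~90 days at $2.50 per GPU-hour.
cost = training_cost(num_gpus=20_000, hours=90 * 24, price_per_gpu_hour=2.50)
print(f"Estimated training compute cost: ${cost / 1e6:.0f}M")  # -> $108M
```

The point is not the exact inputs but the structure of the estimate: at this scale, a modest change in hardware price or training duration moves the total by tens of millions of dollars.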

Numbers of this magnitude suggest that a stage of technological inequality within the industry has already begun.

There are recognized leaders who can afford expensive technology development to maintain their leadership.

4. The United States leads China, the EU, and the U.K. as the leading source of top AI models.

Number of notable machine learning models by geographic area

In 2023, U.S.-based institutions produced 61 notable AI models, far more than the European Union’s 21 and China’s 15. In terms of known foundation models, the U.S. lead is even wider: U.S.-based institutions produced 109, while all others produced 48.

The demographics of AI developers often differ from those of users. For example, a significant number of prominent AI companies and the datasets used to train models come from Western nations, reflecting Western perspectives. The lack of diversity can perpetuate or even exacerbate societal inequalities and biases.

There are those in the industry who see this as a serious threat to humanity.

From the Mistral founder interview:

“A bigger threat is the monopoly of American companies in the AI market. AI models shape the cultural understanding of the world, and it is important to incorporate the values and cultural codes of different countries.”

This could be a serious limitation for the use of AI outside the Western world in the future, and for the possibility of achieving digital or technological equality on a global scale.

5. Robust and standardized evaluations for LLM responsibility are seriously lacking.

Reported responsible AI benchmarks for popular foundation models

The recently launched Foundation Model Transparency Index reveals that AI developers aren’t very transparent, especially when it comes to sharing details about their training data and methods. This lack of openness makes it harder to fully grasp how reliable and safe AI systems are.

This opacity also feeds the problem of deepfakes accompanying the spread of AI.

Deepfakes indeed pose a significant challenge, primarily due to their potential to deceive and manipulate. Here are three key impacts:

  • Misinformation and manipulation: Deepfakes can be used to create highly convincing videos or images of individuals saying or doing things they never actually did. This misinformation can spread rapidly on social media and other platforms, leading to public confusion, manipulation of opinions, and even defamation.
  • Undermining trust and authenticity: As deepfake technology becomes more sophisticated, it becomes increasingly difficult to discern real from fake content. This erosion of trust in media and information sources can have far-reaching consequences, affecting everything from public discourse to legal proceedings.
  • Privacy and consent concerns: Deepfakes raise serious ethical questions about the use of people’s likeness without their consent. By superimposing faces or voices onto explicit content or altering videos to create compromising situations, deepfake technology can violate individuals’ privacy and potentially cause harm to their reputation and relationships.

Political deepfakes are already affecting elections around the world, and recent research suggests that existing AI deepfake detection methods perform with varying levels of accuracy. Add to this the recurring scandals over the use of “other people’s” or “look-alike” voices.

It also leads to the problem of derivative liability for those who build their services on top foundation models and yet have no transparent way to compare the risks and limitations of these models.

Addressing these challenges will require a multi-faceted approach involving technology development, policy interventions, media literacy initiatives, and public awareness campaigns.

6. Generative AI investment skyrockets.

Private investment in generative AI

Even though overall private investment in AI dropped last year, funding for generative AI skyrocketed, increasing nearly eightfold from 2022 to $25.2 billion. Major players in the generative AI field, such as OpenAI, Anthropic, Hugging Face, and Inflection, reported significant fundraising rounds.
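
As a quick sanity check on the scale of that jump (treating “nearly eightfold” as a factor of 8, which is our reading rather than an exact figure from the report):

```python
# Implied 2022 baseline if the $25.2B invested in generative AI in 2023
# represents a roughly eightfold increase. Figures in billions of USD.
investment_2023 = 25.2
growth_factor = 8  # "nearly eightfold", per the report
print(f"Implied 2022 level: ~${investment_2023 / growth_factor:.1f}B")  # -> ~$3.1B
```

In other words, private investment in generative AI in 2022 was on the order of $3 billion.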

The key point is that investors are focusing on specific areas of AI, such as natural language processing and data management. Private companies expect a return on investment from generative AI because it delivers the most visible results in real-world use.

This means that Gen AI is becoming part of the real economy and market.

The next two takeaways are part of the reason why.

7. The data is in: AI makes workers more productive and leads to higher quality work.

Number of Fortune 500 earnings calls mentioning AI

Since 2018, mentions of artificial intelligence in Fortune 500 earnings calls have nearly doubled. The most frequently cited theme, appearing in 19.7% of all earnings calls, was generative AI.

There has been an increase in the number of surveyed organizations reporting cost reductions and increased revenue as a result of AI adoption (including generative AI).

This indicates a significant increase in business efficiency through AI.

In 2023, several studies examined the impact of AI on work. They found that AI helps workers complete tasks faster and improves the quality of their work. AI also shows promise in closing the skills gap between low-skilled and high-skilled workers. However, some studies warn that using AI without proper oversight can reduce performance.

8. Scientific progress accelerates even further, thanks to AI.

“In 2022, AI began to advance scientific discovery. 2023, however, saw the launch of even more significant science-related AI applications — from AlphaDev, which makes algorithmic sorting more efficient, to GNoME, which facilitates the process of materials discovery.”

In particular, AI is helping medicine make significant advances; highly skilled medical AI has arrived. In recent years, AI systems have shown remarkable improvement on MedQA (Medical Question Answering), a key benchmark for assessing AI’s clinical knowledge.

At the same time, over the past year alone, the share of industry-bound AI PhDs has risen by 5.3 percentage points, indicating an intensifying brain drain from universities into industry.

“In 2011, roughly equal percentages of new AI PhDs took jobs in industry (40.9%) and academia (41.6%). However, by 2022, a significantly larger proportion (70.7%) joined industry after graduation compared to those entering academia (20.0%).”

The reason lies in point 6.

And it means that the growth and penetration of AI (especially generative AI) in business and industry will continue.

9. The number of AI regulations in the United States sharply increases.

Number of AI-related bills passed into law by country

But it is not just in the USA.

The growing capabilities of AI have caught the attention of policymakers. In the past year, countries such as the United States and the European Union have introduced significant AI-related policies. This surge in policy shows that policymakers are increasingly aware of the need to regulate AI and enhance their countries’ ability to benefit from its potential.

There are different views on this trend.

On the one hand, increased regulation of AI should address the problems of missing standardization and irresponsible use (we discussed where those lead in point 5).

On the other hand, industry leaders say it is not advisable.

From the Mistral founder interview:

“Legislative regulation of AI is inappropriate and hurts innovation. France lobbied to limit the regulation of open source companies in the EU AI law. This has helped Mistral grow rapidly.”

In our view, the problem should hardly be discussed in terms of “to regulate or not to regulate”. Rather, it is more important to define the criteria of “how” and “why” to regulate.

If we start to regulate AI as a potential competitor that is alien to humans, it will indeed lead to harmful restrictions on development. Such regulation is likely to be unnatural (such as trying to limit AI’s access to information that is available to humans, whatever that may be). But if regulation is implemented to support the development of AI as a tool to enhance humanity’s capabilities by creating clear rules of the game for all industry participants, it is more likely to be beneficial.

10. People across the globe are more cognizant of AI’s potential impact — and more nervous.

Global opinions on products and services using AI

According to the survey, over the past year, the percentage of people who believe AI will have a significant impact on their lives in the next three to five years has increased from 60% to 66%. In addition, 52% are nervous about AI products and services, up 13 percentage points from 2022.

As AI becomes more common, it’s important to understand how public perceptions of the technology change. Knowing public opinion helps predict AI’s societal impacts and how its adoption may vary across different countries and demographic groups.

Public sentiment about AI is becoming an increasingly important consideration in tracking the progress of AI.

It means there are signs that we are entering a new age of economic and social relations. We are beginning to live among AI-made works and AI-made decisions, and we need to think, and to act, in the field of AI ethics.

And the main question is: “Is AI something more than just another level of automation?” If so, do we need to create new or different rules for it, and for us, in this new age?

Or should we rely on our human ethics, holding that there is no ethics other than human ethics? On that view, rules and laws should exist for humans, not for tools, even if the tool is Artificial Intelligence.

Either way, we should know AI, we need to understand AI, and we can use AI.

by Time2Future

All data for this article is from the AI Index 2024 report and is publicly available. All conclusions and insights in this article are the opinion of the editors; if used, a link to the Time2Future AI Guide is required.
