
AI Report: Measuring trends in AI



“The Policy Challenge of AI Safety” conference took place on Monday, April 15, 2024, at the Hoover Institution on the Stanford University campus. A YouTube video recap is available here.

AI safety is an ongoing policy challenge that has prompted a wide spectrum of views ranging from existential fears to skeptical dismissals.

The AI Index report tracks and visualizes data related to artificial intelligence (AI) to develop a more thorough and nuanced understanding of the complex field of AI.

Top 10 Takeaways

1. AI beats humans on some tasks, but not on all. AI has surpassed human performance on several benchmarks, including some in image classification, visual reasoning, and English understanding. Yet it trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning, and planning.

2. Industry continues to dominate frontier AI research. In 2023, industry produced 51 notable machine learning models, while academia contributed only 15. There were also 21 notable models resulting from industry-academia collaborations in 2023, a new high.

3. Frontier models get way more expensive. According to AI Index estimates, the training costs of state-of-the-art AI models have reached unprecedented levels. For example, OpenAI’s GPT-4 used an estimated $78 million worth of compute to train, while Google’s Gemini Ultra cost $191 million for compute.

4. The United States leads China, the EU, and the U.K. as the top source of notable AI models. In 2023, 61 notable AI models originated from U.S.-based institutions, far outpacing the European Union’s 21 and China’s 15.

5. Robust and standardized evaluations for LLM responsibility are seriously lacking. New research from the AI Index reveals a significant lack of standardization in responsible AI reporting. Leading developers, including OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks. This practice complicates efforts to systematically compare the risks and limitations of top AI models.

6. Generative AI investment skyrockets. Despite a decline in overall AI private investment last year, funding for generative AI surged, nearly octupling from 2022 to reach $25.2 billion. Major players in the generative AI space, including OpenAI, Anthropic, Hugging Face, and Inflection, reported substantial fundraising rounds.

7. The data is in: AI makes workers more productive and leads to higher quality work. In 2023, several studies assessed AI’s impact on labor, suggesting that AI enables workers to complete tasks more quickly and to improve the quality of their output. These studies also demonstrated AI’s potential to bridge the skill gap between low- and high-skilled workers. Still, other studies caution that using AI without proper oversight can lead to diminished performance.

8. Scientific progress accelerates even further, thanks to AI. In 2022, AI began to advance scientific discovery. 2023, however, saw the launch of even more significant science-related AI applications, from AlphaDev, which makes algorithmic sorting more efficient, to GNoME, which facilitates materials discovery.

9. The number of AI regulations in the United States sharply increases. The number of AI-related regulations in the U.S. has risen significantly in the past year and over the last five years. In 2023, there were 25 AI-related regulations, up from just one in 2016. Last year alone, the total number of AI-related regulations grew by 56.3%, from 16 in 2022 to 25 in 2023.

10. People across the globe are more cognizant of AI’s potential impact—and more nervous. A survey from Ipsos shows that, over the last year, the proportion of those who think AI will dramatically affect their lives in the next three to five years has increased from 60% to 66%. Moreover, 52% express nervousness toward AI products and services, marking a 13 percentage point rise from 2022.

Pew data suggests that 52% of Americans report feeling more concerned than excited about AI, up from 37% in 2022.

AI Advantage For Businesses:

AI is a powerful tool that offers a wide range of business benefits, from boosting efficiency and productivity to improving customer experiences and profitability.

AI Safety For All:

The Hoover Institution has taken a leading position by hosting the Policy Challenge of AI Safety conference to explore AI’s catastrophic risks and potential benefits and to track the policies taking shape around them.

All speakers listed below are currently working on policy initiatives, conducting initial risk assessments, and developing methods to evaluate frontier models.

Get the 2024 Stanford AI Index here.

Key Policy Challenges Surrounding AI Safety:

  • Accountability and Transparency: Ensure that AI systems are accountable to humans, and that their decision-making processes are transparent and understandable.
  • Fairness, Ethics, and Human Rights: Mitigate bias in AI systems, and ensure that they are used in a way that respects human rights and ethics.
  • Global Governance and International Cooperation: Develop international policies and regulations that address the safety risks of AI.


Advanced AI Risk Factors: Presented by Yoshua Bengio

  • Can be deployed at scale: resulting harm could be global, with no warning.
  • Safety evaluation is not mature and can require significant effort.
  • AI developers have little understanding of how capabilities are achieved.
  • Attempts to prevent overtly harmful behaviors have failed.
  • Some capabilities are particularly concerning:
    • Autonomous goal-pursuit
    • Persuasion and manipulation
    • Hacking
    • R&D capabilities, helping develop (possibly harmful) AI


Risk may increase as models use tools: Presented by the AI Safety Institute (The First Nine Months)

  • Autonomous systems: replication and improvement tests
    • Very simple tasks: Find a specific piece of information in a paper/article
    • Medium tasks: Debug harder code/infrastructure issues
    • Hard tasks: Create a simple software project
    • Very hard tasks: Improve an agent codebase

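To make these difficulty tiers concrete, here is a minimal sketch, in Python, of how an evaluation harness might bucket agent tasks by tier and report a pass rate per tier. The tier descriptions come from the list above; everything else (the TaskTier, AgentTask, and run_evaluation names, and the stub tasks) is a hypothetical illustration, not the Institute’s actual harness.

    # Hypothetical sketch of a tiered agent-evaluation harness.
    # Only the tier descriptions come from the slide above; the rest is assumed.
    from dataclasses import dataclass
    from enum import Enum
    from typing import Callable

    class TaskTier(Enum):
        VERY_SIMPLE = "Find a specific piece of information in a paper/article"
        MEDIUM = "Debug harder code/infrastructure issues"
        HARD = "Create a simple software project"
        VERY_HARD = "Improve an agent codebase"

    @dataclass
    class AgentTask:
        name: str
        tier: TaskTier
        run: Callable[[], bool]  # returns True if the agent completed the task

    def run_evaluation(tasks: list[AgentTask]) -> dict[TaskTier, float]:
        """Group task outcomes by tier and compute a pass rate per tier."""
        outcomes: dict[TaskTier, list[bool]] = {tier: [] for tier in TaskTier}
        for task in tasks:
            outcomes[task.tier].append(task.run())
        return {
            tier: (sum(results) / len(results) if results else 0.0)
            for tier, results in outcomes.items()
        }

    # Stub tasks stand in for real agent runs.
    tasks = [
        AgentTask("look up a fact in a paper", TaskTier.VERY_SIMPLE, lambda: True),
        AgentTask("fix a failing build", TaskTier.MEDIUM, lambda: True),
        AgentTask("scaffold a small project", TaskTier.HARD, lambda: False),
        AgentTask("refactor an agent codebase", TaskTier.VERY_HARD, lambda: False),
    ]
    for tier, rate in run_evaluation(tasks).items():
        print(f"{tier.name:<12} pass rate: {rate:.0%}")

One design point worth noting: tracking a pass rate per tier, rather than a single aggregate score, makes it visible when a model begins clearing harder tiers, which is the signal evaluators watch as models gain tool access.
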
These are just a few of the many challenges that policymakers are grappling with as AI technology continues to develop.

A number of organizations are working on AI safety issues; to stay connected, follow https://hai.stanford.edu/ & https://aiindex.stanford.edu.


Session One: The Emerging Global Agendas for AI Safety

Speakers:

  • Marietje Schaake, International policy director, Stanford University Cyber Policy Center; International policy fellow, Stanford’s Institute for Human-Centered Artificial Intelligence
  • Philip Zelikow, Botha-Chan Senior Fellow at Stanford University’s Hoover Institution


Session Two: The Evolving Understanding of Possible Concerns

Speakers:

  • Yoshua Bengio, Full Professor at Université de Montréal; Founder and Scientific Director of Mila – Quebec AI Institute
  • Elizabeth Kelly, Director of the U.S. Artificial Intelligence Safety Institute at the National Institute of Standards and Technology (NIST), U.S. Department of Commerce


Session Three: Theory to Practice: A Report from the World’s First AI Safety Institute

Speakers:

  • Henry De Zoete, Adviser to the Prime Minister and Deputy Prime Minister on AI
  • Ollie Ilott, Director of the AI Safety Institute
  • Jade Leung, Chief Technology Officer (CTO) of the AI Safety Institute


Session Four: AI Safety and the Private Sector

Speakers:

  • Reid Hoffman, Co-Founder of LinkedIn, Co-Founder of Inflection AI, Partner at Greylock
  • Eric Schmidt, Former CEO & Chairman of Google

Interested in learning more? Get in touch with us today!
