There’s a 5% chance of AI causing humans to go extinct, say scientists

Technology | Jan 04, 2024

Understanding the Potential Risks of AI and Its Impact on Humanity

AI researchers have highlighted the slim but concerning possibility of catastrophic outcomes from the development of superhuman AI. A survey of researchers who have published at top AI conferences found that almost 58% estimated a 5% chance of human extinction or other extremely bad AI-related outcomes.

Assessing the Risks

Despite widespread disagreement and uncertainty about these risks, the finding suggests that most AI researchers do not consider it implausible that advanced AI could pose a threat to humanity. That belief in a non-negligible risk carries significant implications, signaling the need for careful consideration of AI's future development.

However, it's essential to approach these findings with a balanced perspective: as specialists note, AI expert surveys do not have a strong track record of accurately forecasting future developments in the field. Comparison with previous surveys shows that many researchers now predict earlier timelines for AI milestones, coinciding with recent advances in the technology.

Anticipated AI Milestones

Looking ahead, AI experts give AI systems a 50% or higher chance of achieving a wide range of complex tasks within the coming decade, including writing new songs indistinguishable from popular hits and coding complex websites from scratch. Some tasks, such as physical installations, are expected to take longer.

Notably, the development of AI that surpasses human capabilities on all tasks is estimated to have a 50% chance of occurring by 2047. Furthermore, the survey suggests that all human jobs could become fully automatable by 2116. Both timelines are earlier than previously predicted.

Unpredictability and Concerns

Despite these optimistic predictions, it's crucial to recognize the inherent unpredictability of AI breakthroughs. The field may also face periods of stagnation, as it has in past cycles, and technological advances have often surprised experts; AI presents similar uncertainties.

Beyond the risks of superhuman AI, more immediate concerns are also pressing. Over 70% of AI researchers report substantial or extreme concern about AI-enabled scenarios such as deepfakes, manipulation of public opinion, and authoritarian control. There are also worries about AI fueling disinformation around existential issues such as climate change and democratic governance.

Mitigating the Risks

Understanding these potential risks provides a foundation for addressing them effectively. The AI research community must establish robust frameworks for monitoring and guiding the development of advanced AI. This involves prioritizing research in AI safety, ethics, and governance to ensure that responsible practices are upheld throughout AI advancements.

Furthermore, collaboration between industry, academia, and policymakers is crucial to developing regulatory frameworks and guidelines that account for the risks associated with the evolution of AI. Generating public awareness and engagement on AI-related risks and opportunities will further contribute to a collective understanding of the implications and foster broader participation in addressing these challenges.

Conclusion

While the emergence of superhuman AI and its potential impact on humanity raises valid concerns, proactive measures can mitigate the associated risks. By fostering a collective understanding of the possibilities and challenges surrounding AI development, the research community, industry, and policymakers can collaborate to steer AI advancements towards beneficial outcomes while safeguarding against potential threats.

Source: newscientist
