According to Ethereum co-founder Vitalik Buterin, temporarily halting global computing resources could buy humanity time if an advanced form of artificial intelligence appears likely to prove harmful.
In a January 5 blog post, Buterin expanded on his November 2023 essay on “defensive accelerationism” (d/acc), warning that superintelligent AI could emerge within as little as five years and that it is far from certain the outcome will be favorable.
Buterin proposes that temporarily restricting the production of industrial-scale computing hardware could slow AI progress, cutting global compute by up to 99% for one to two years and giving humanity time to prepare and adapt.
A superintelligence is generally understood as an AI that consistently outperforms human intelligence across all disciplines and knowledge domains.
Many tech leaders and researchers have voiced worries about AI: an open letter published in March 2023, signed by more than 2,600 people, warned of the “profound risks to society and humanity” posed by its development.
Buterin clarified, however, that his earlier essay only hinted at avoiding dangerous forms of superintelligence, and that this post was intended to discuss concrete options for a scenario in which AI poses significant risks.
Still, Buterin said he would advocate a hardware “soft pause” only if he became convinced that “stronger action” than measures such as legal liability was needed. Under a liability regime, people who use, deploy, or develop AI could be sued if their models cause harm.
Buterin suggested one approach to a hardware pause would be to locate AI chips and require them to be registered. But he floated a different idea: equipping industrial-scale AI hardware with a chip that allows it to keep running only if it receives a set of three signatures from major international bodies at least once a week.
The signatures, Buterin said, would be device-independent, and could optionally require a proof that they were published on a blockchain. As a result, once issued, there would be no practical way to authorize one device to keep running without simultaneously authorizing every other device as well.
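Buterin’s post describes the mechanism only at a high level. The sketch below is a hypothetical Python illustration of the idea, not his design: signing bodies sign the current week number (a message that contains no device identifier, making the signatures device-independent), and a chip keeps running only while it holds at least three valid, current signatures. HMAC stands in for real public-key signatures, and the signer registry, key names, and quorum size are all assumptions for illustration.

```python
import hmac
import hashlib
import time

WEEK = 7 * 24 * 3600  # signatures must be refreshed weekly

# Hypothetical registry of international signing bodies and their keys.
# In a real scheme these would be public keys, not shared secrets.
SIGNERS = {
    "body_a": b"key-a",
    "body_b": b"key-b",
    "body_c": b"key-c",
}

def week_epoch(now: float) -> int:
    """Current week number; the message every body signs."""
    return int(now // WEEK)

def sign(body: str, now: float) -> bytes:
    """A body's weekly signature over the week number (no device ID,
    so the same signature works on every chip)."""
    msg = str(week_epoch(now)).encode()
    return hmac.new(SIGNERS[body], msg, hashlib.sha256).digest()

def chip_may_run(sigs: dict[str, bytes], now: float, quorum: int = 3) -> bool:
    """A chip keeps running only with >= quorum valid, current signatures."""
    msg = str(week_epoch(now)).encode()
    valid = sum(
        1 for body, sig in sigs.items()
        if body in SIGNERS and hmac.compare_digest(
            sig, hmac.new(SIGNERS[body], msg, hashlib.sha256).digest()
        )
    )
    return valid >= quorum

now = time.time()
sigs = {body: sign(body, now) for body in SIGNERS}
print(chip_may_run(sigs, now))         # all three signatures valid this week
print(chip_may_run(sigs, now + WEEK))  # stale signatures: chip halts
```

Because the signed message is just the week number, authorizing any one device necessarily authorizes all of them, which is the device-independence property Buterin highlights.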
Buterin’s d/acc framework favors a cautious approach to technology development, in contrast to the rapid, unrestrained growth championed by accelerationism and effective accelerationism (e/acc).
2025-01-06 06:07