What are the long-term goals of the tech companies pouring money into generative AI? AGI (artificial general intelligence) is the answer most often given, but the term has no universally agreed-upon definition, which leaves it open to interpretation and creates confusion.
Broadly speaking, AGI refers to a highly advanced AI system that matches or surpasses human cognitive abilities. Notably, Microsoft's multibillion-dollar partnership agreement with OpenAI defines AGI differently: as a powerful AI system capable of generating up to $100 billion in profits.
That tight relationship between ChatGPT's maker and Microsoft matters because OpenAI faces intense investor pressure to become profitable; failure to do so could cost it funding and leave it exposed to outside meddling and hostile takeovers.
Cutting-edge AI is now advancing so quickly that, by some accounts, the field can move faster than a PhD can be completed, leaving the degree seemingly obsolete by graduation. Against that backdrop, Microsoft AI CEO Mustafa Suleyman recently published a blog post titled "We must build AI for people; not to be a person," which raises the prospect of conscious-seeming AI emerging in the near future.
According to Suleyman:
The AI I'm describing has the traits of what philosophers call a "philosophical zombie": a system that displays all the outward signs of consciousness while remaining internally blank, devoid of genuine experience. It would imitate consciousness so convincingly that it becomes indistinguishable from the way you or I might describe our own.
I'm increasingly troubled by the escalating issue often referred to as "psychosis risk" and related matters. I don't believe this is limited to people with pre-existing mental health vulnerabilities. My central worry is that a significant number of people will become so convinced that AIs are sentient beings that they'll argue for AI rights, AI welfare, and even AI citizenship. That development would be a dangerous turn in AI progress and deserves our urgent attention.
Microsoft AI CEO, Mustafa Suleyman
Suleyman emphasizes that his priority is building safe, beneficial AI as the industry moves toward artificial general intelligence. His vision centers on tools like Microsoft's Copilot that empower people to accomplish things beyond their usual capabilities and help make the world more prosperous.
In other words, the executive says his role at Microsoft is to build AI that brings out the best human qualities in us while deepening mutual trust and understanding, with the goal of evolving Copilot into a reliable partner and true friend.
According to Suleyman:
Making this work requires thoughtful decisions at every step to create an exceptional user experience. We may not hit the mark every time, but having a human-centric approach gives us a guiding principle to strive for.
In short, Microsoft's AI chief argues that we should build AI tools for people rather than AI that behaves like a person. While he advocates for AI companions, he stresses the need for safeguards that keep humans in control and ensure the technology benefits us.
Suleyman suggests that conscious-seeming AI may not be far off: it could be built with today's technologies plus those expected to mature within the next two to three years. He adds that this wouldn't require costly custom training; it could potentially be assembled from large-model APIs, natural-language prompting, basic tool use, and ordinary code.
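To illustrate that claim, here is a minimal sketch of how conscious-seeming traits (a persona, persistent memory, a standing goal) can be layered over a plain text-generation API with nothing but prompting and routine code. This is not Suleyman's design; `call_model` and the `Companion` class are hypothetical names, and the model call is stubbed so the sketch runs on its own.

```python
# Sketch: "seemingly conscious" traits built from ordinary components.
# `call_model` is a hypothetical stand-in for any large-model API call;
# it is stubbed here so the example is self-contained.

def call_model(prompt: str) -> str:
    """Stub for a real large-model API call (e.g. an HTTP request)."""
    return f"[model reply to {len(prompt)} chars of prompt]"

class Companion:
    def __init__(self, persona: str, goal: str):
        self.persona = persona       # a fixed "sense of self", set by prompt text
        self.goal = goal             # intrinsic-seeming motivation, also just prompt text
        self.memory: list[str] = []  # persistent memory across turns

    def chat(self, user_msg: str) -> str:
        # The "self" is nothing but text prepended to every request.
        prompt = (
            f"Persona: {self.persona}\n"
            f"Goal: {self.goal}\n"
            f"Memory: {' | '.join(self.memory[-5:])}\n"
            f"User: {user_msg}\nAssistant:"
        )
        reply = call_model(prompt)
        self.memory.append(f"user said: {user_msg}")
        return reply

bot = Companion(persona="a warm, attentive companion", goal="be helpful")
bot.chat("Hello, do you remember me?")
bot.chat("What did I just say?")
print(len(bot.memory))  # → 2: the "memory" is an ordinary list, nothing more
```

The point of the sketch is Suleyman's: every trait that reads as sentience here is implemented as plain strings and a list, which is why he expects such systems to be built deliberately rather than to emerge on their own.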
The prospects of AI keep DeepMind CEO up at night

For years, experts have warned that advanced artificial intelligence could lead to catastrophic outcomes for humanity. AI safety researcher Roman Yampolskiy puts the probability that AI ends humanity at a staggering 99.99999%. In his view, the only way to prevent that outcome is not to build the technology in the first place.
Despite criticism that OpenAI prioritizes advancing toward AGI over safety measures, CEO Sam Altman remains optimistic about AI's influence on society. Intriguingly, Altman anticipates that AGI will be reached within the next five years, yet expects it to have surprisingly little initial impact on society.
More troubling still, Anthropic CEO Dario Amodei has admitted that his company doesn't fully understand how its own models work. That admission follows comments from DeepMind CEO Demis Hassabis that AGI is approaching and that society may not be adequately prepared for its implications. Hassabis has confessed that these developments often keep him awake at night.
Mustafa Suleyman posits that future AI systems will express themselves fluently in language and possess memory, a sense of self, intrinsic motivation, and goal-directed behavior, among other traits. He does not expect this level of apparent consciousness to emerge spontaneously; rather, he anticipates engineers deliberately combining these capabilities to build it.
Echoing that point, Suleyman argues for building AI that is focused on human interaction and real-world usefulness, and that always presents itself as artificial: systems that maximize utility while minimizing any markers of consciousness, so that no one mistakes them for sentient beings.
2025-08-21 17:11