“AI is not taking over” – Microsoft Boss

News

The head of Microsoft’s artificial intelligence division has said the company will press ahead with its work on large AI models, despite concerns raised by some experts about the technology’s rapid pace of development and its unpredictability.

Eric Boyd, corporate vice president of Microsoft AI Platforms, emphasised AI’s immense potential to enhance human productivity and drive global economic growth. “We’d be foolish to set that aside,” he said.

In 2019, Microsoft made a significant investment of $1 billion in the AI startup OpenAI. Drawing on Microsoft’s funding and the computing power of its Azure cloud platform, OpenAI created GPT-4, the most powerful “large language model” in existence. Microsoft subsequently integrated GPT-4’s conversational capabilities into its Bing search engine and, under the name Copilot, built the technology into software products such as its word processor and spreadsheet applications as a virtual digital assistant.

Boyd clarified that Microsoft’s vision of AI is not centred on planetary domination but on transforming the relationship between humans and computers. He described a future where conventional interfaces, such as keyboards and mice, are redefined, giving way to a more language-based interaction.

Asked about concerns from some AI leaders that generative AI models, which can produce text, images, and other outputs, are being developed too quickly and are not yet fully understood, Boyd acknowledged the expertise of those raising them but said Microsoft’s work remains focused on practical applications rather than speculative risks.

Boyd argued that the current capabilities of language models like ChatGPT are exaggerated, pointing out that they produce only text as output and are in no position to “take over” anything. His greater concern, he said, is AI exacerbating existing societal problems, and he emphasised the need to ensure the technology is used safely and without bias.

While safety is a significant consideration, the more immediate concerns about AI centre on the technology being misused or applied where it is not suited, such as medical diagnosis or air traffic control. Boyd acknowledged that some decisions lie within the purview of organisations like Microsoft, referencing its choice not to provide facial recognition software to law enforcement agencies. Other matters, he believes, require regulatory intervention: “We definitely think there’s a place for regulation in this industry.”

Microsoft’s collaboration with OpenAI has bolstered its position in the race to deliver AI advancements, but competition from other tech giants, such as Google, is fierce. As the industry pushes forward, society and regulators must accelerate their efforts to define the parameters of safe and responsible AI.