Microsoft is making waves in the artificial intelligence (AI) space, and after investing billions in ChatGPT creator OpenAI, the company is reportedly working on its own AI processors that can be used to train large language models (LLMs).
Citing two people with direct knowledge of the project, The Information reported that the software giant has been developing the chip since as early as 2019. It is said to be internally code-named Athena and is already available to a small group of Microsoft and OpenAI employees, who are testing the technology.
“Microsoft is hoping the chip will perform better than what it currently buys from other vendors, saving it time and money on its costly AI efforts. Other prominent tech companies, including Amazon, Google and Facebook, also make their own in-house chips for AI,” the report said.
Microsoft supercomputer with OpenAI
Microsoft has already built a supercomputer for AI research startup OpenAI to train large sets of models. For the supercomputer, the company relies on thousands of Nvidia A100 graphics chips strung together to power ChatGPT and the Bing AI chatbot. It invested $1 billion in OpenAI in 2019 to build a “massive, cutting-edge supercomputer”.
Microsoft built this supercomputer to provide enough computing power to train and retrain an increasingly large set of AI models on great volumes of data for long periods.
“One of the things we had learned from research is that the larger the model, the more data you have and the longer you can train, the better the accuracy of the model is,” said Nidhi Chappell, Microsoft head of product for Azure high-performance computing and AI.
Google’s TPU AI chip
Last year, Google announced that it had developed an AI chip called the Tensor Processing Unit (TPU), specifically designed for machine learning tasks. The TPU is said to handle trillions of operations per second while consuming little power.
The Tensor Processing Unit is designed to be used with Google’s TensorFlow software, the company’s open-source software library for machine learning.