
Startup Cerebras Launches WSE-3 AI Chip with 4 Trillion Transistors!

Source: 聚展網 | 2024-03-14 18:10:47 | Category: Semiconductor News
Cerebras Systems has announced the launch of Wafer Scale Engine 3 (WSE-3), a groundbreaking AI wafer-scale chip with 4 trillion transistors, 900,000 AI cores, 44GB on-chip SRAM, and peak performance of 125 FP16 PetaFLOPS. This new device is twice as powerful as its predecessor, the WSE-2, and is manufactured using TSMC's 5nm process technology.
WSE-3 powers the CS-3 supercomputer, which can train AI models with up to 24 trillion parameters, a significant leap over supercomputers built on the WSE-2 and other modern AI processors. The CS-3 supports external memory ranging from 1.5TB to 1.2PB, allowing it to store large models in a single logical space without partitioning or restructuring, which simplifies training and improves developer efficiency.
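As a rough sanity check on those capacity figures, the weight storage for a model at the claimed 24-trillion-parameter ceiling can be estimated in a few lines (a sketch assuming FP16 weights at 2 bytes per parameter; optimizer state and activations are ignored):

```python
# Back-of-envelope check (assumption: FP16 weights, 2 bytes per parameter;
# optimizer state and activations are deliberately ignored).
params = 24e12            # 24 trillion parameters (CS-3 claimed maximum)
bytes_per_param = 2       # FP16
weight_bytes = params * bytes_per_param

tb = weight_bytes / 1e12  # terabytes
pb = weight_bytes / 1e15  # petabytes
print(f"Weights alone: {tb:.0f} TB ({pb:.3f} PB)")
# Well under the 1.2 PB external-memory ceiling, leaving headroom for
# optimizer state (e.g. Adam roughly triples the footprint).
```

Even with optimizer state included, such a model would fit in the quoted 1.2PB tier, which is consistent with the no-partitioning claim.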
(Cerebras image)
In terms of scalability, the CS-3 can be configured in clusters of up to 2048 systems. This scalability lets a four-system setup fine-tune a 70-billion-parameter model in one day, while a full 2048-system cluster could train the Llama 70B model from scratch in the same timeframe. The latest Cerebras software framework provides native support for PyTorch 2.0 and accelerates training with dynamic and unstructured sparsity, up to eight times faster than traditional methods.
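The cluster-scale training claim can be checked with a back-of-envelope estimate (a sketch: the ~6·N·D training-FLOPs rule of thumb, the token budget, and the utilization figure below are illustrative assumptions, not Cerebras numbers):

```python
# Hedged sketch: aggregate peak compute of a maximal CS-3 cluster, plus a
# rough training-time estimate via the common ~6*N*D FLOPs rule of thumb.
per_system_pflops = 125          # FP16 peak per CS-3 (from the article)
systems = 2048                   # maximum cluster size (from the article)
peak_eflops = per_system_pflops * systems / 1000
print(f"Cluster peak: {peak_eflops:.0f} EFLOPS FP16")   # 256 EFLOPS

n_params = 70e9                  # Llama 70B
tokens = 15e12                   # assumed training-token budget (illustrative)
train_flops = 6 * n_params * tokens
utilization = 0.3                # assumed achieved fraction of peak (illustrative)
seconds = train_flops / (peak_eflops * 1e18 * utilization)
print(f"~{seconds / 3600:.1f} hours at {utilization:.0%} utilization")
```

Under these assumptions the estimate lands under a day, which is in the same ballpark as the article's one-day claim.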
(Cerebras image)
Cerebras highlights the superior power efficiency and ease of use of the CS-3. Despite doubling its performance, the CS-3 consumes the same amount of power as its predecessor. It also simplifies the training of large language models (LLMs), requiring up to 97% less code than GPUs. For instance, a GPT-3-sized model can be implemented in just 565 lines of code on the Cerebras platform.
The company has received considerable interest in the CS-3, with a backlog of orders from various sectors including enterprise, government, and international cloud providers. Cerebras collaborates with institutions such as Argonne National Laboratory and Mayo Clinic, demonstrating the potential of the CS-3 in healthcare.
Cerebras' strategic partnership with G42 will expand with the construction of Condor Galaxy 3, an AI supercomputer featuring 64 CS-3 systems with up to 57,600,000 cores. Together, the companies have already built the world's two largest AI supercomputers, CG-1 and CG-2, located in California, with a combined performance of 8 ExaFLOPs. The collaboration aims to deliver tens of exaFLOPs of AI compute globally.
Kiril Evtimov, CTO of G42 Group, said, "Our strategic partnership with Cerebras plays a crucial role in driving innovation at G42 and contributing to the acceleration of the global AI revolution. The upcoming Condor Galaxy 3, with 8 exaFLOPs, is currently under construction and will soon increase our system's total AI computing capacity to 16 exaFLOPs."
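The Condor Galaxy figures quoted above are internally consistent, as a quick arithmetic check shows (only the per-chip core count, system count, and exaFLOP figures come from the article and the quote):

```python
# Quick consistency check of the Condor Galaxy figures quoted above.
cores_per_cs3 = 900_000          # WSE-3 AI cores per system (from the article)
cg3_systems = 64
total_cores = cg3_systems * cores_per_cs3
print(f"Condor Galaxy 3 cores: {total_cores:,}")        # 57,600,000

existing_eflops = 8              # CG-1 + CG-2 combined (from the article)
cg3_eflops = 8                   # Condor Galaxy 3 (per G42's CTO)
print(f"Total after CG-3: {existing_eflops + cg3_eflops} exaFLOPs")  # 16
```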


相關(guān)標(biāo)簽:
聲明:文章部分圖文版權(quán)歸原創(chuàng)作者所有,不做商業(yè)用途,如有侵權(quán),請(qǐng)與我們聯(lián)系刪除。
來(lái)源:聚展網(wǎng)
展位咨詢
門票預(yù)訂
展商名錄

半導(dǎo)體行業(yè)資訊

  • icon 電話
    展位咨詢:0571-88560061
    觀眾咨詢:0571-88683357
  • icon 客服
  • icon 我的
  • icon 門票
  • 展位
    合作