"Turning visions int...Ā ā¢Ā 4m
Ant Group uses domestic chips to train AI models and cut costs

Ant Group is relying on Chinese-made semiconductors to train artificial intelligence models, in a bid to reduce costs and lessen its dependence on restricted US technology, according to people familiar with the matter. The Alibaba-affiliated company has used chips from domestic suppliers, including those tied to Alibaba and to Huawei Technologies, to train large language models using the Mixture of Experts (MoE) method. The results were reportedly comparable to those produced with Nvidia's H800 chips, the sources claim. While Ant continues to use Nvidia chips for some of its AI development, one source said the company is turning increasingly to alternatives from AMD and Chinese chip-makers for its latest models.

The development signals Ant's deeper involvement in the growing AI race between Chinese and US tech firms, particularly as companies look for cost-effective ways to train models. The experimentation with domestic hardware reflects a broader effort among Chinese firms to work around export restrictions that block access to high-end chips like Nvidia's H800, which, although not the most advanced, is still one of the more powerful GPUs that have been available to Chinese organisations.

Ant has published a research paper describing its work, stating that its models, in some tests, performed better than those developed by Meta. Bloomberg News, which initially reported the matter, has not independently verified the company's results. If the models perform as claimed, Ant's efforts may represent a step forward in China's attempt to lower the cost of running AI applications and reduce reliance on foreign hardware.

MoE models divide tasks into smaller data sets handled by separate components, and have gained attention among AI researchers and data scientists. The technique has been used by Google and the Hangzhou-based startup DeepSeek. The MoE concept is similar to having a team of specialists, each handling part of a task, which makes the process of producing models more efficient. Ant has declined to comment on its hardware sources.

Training MoE models typically depends on high-performance GPUs, which can be too expensive for smaller companies to acquire or use. Ant's research focused on reducing that cost barrier; the paper's title is suffixed with a clear objective: scaling models "without premium GPUs" [our quotation marks].

The direction taken by Ant, and the use of MoE to reduce training costs, contrasts with Nvidia's approach. Chief Executive Officer Jensen Huang has said that demand for computing power will continue to grow even with the introduction of more efficient models like DeepSeek's R1. His view is that companies will seek more powerful chips to drive revenue growth, rather than aiming to cut costs with cheaper alternatives. Nvidia's strategy remains focused on building GPUs with more cores, transistors, and memory.

According to the Ant Group paper, training on one trillion tokens (the basic units of data AI models use to learn) cost about 6.35 million yuan (roughly $880,000) using conventional high-performance hardware. The company's optimised training method reduced that cost to around 5.1 million yuan by using lower-specification chips.

Ant said it plans to apply the models produced in this way, Ling-Plus and Ling-Lite, to industrial AI use cases such as healthcare and finance. Earlier this year, the company acquired Haodf.com, a Chinese online medical platform, furthering Ant's ambition to deploy AI-based solutions in healthcare.
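As a quick sanity check on the reported figures, the short sketch below recomputes the saving implied by the two training costs quoted above. It uses only the numbers from the paper as reported; the yuan-to-dollar exchange rate is an assumption chosen to match the article's rough $880,000 conversion.

# Rough cost comparison based only on the figures reported above.
# The CNY_PER_USD rate is an assumption, not from the paper.
TOKENS_TRAINED = 1e12           # one trillion tokens
COST_HIGH_END_CNY = 6.35e6      # conventional high-performance hardware
COST_OPTIMISED_CNY = 5.1e6      # optimised method on lower-specification chips
CNY_PER_USD = 7.2               # assumed rate, ~$880,000 for 6.35M yuan

saving_cny = COST_HIGH_END_CNY - COST_OPTIMISED_CNY
saving_pct = 100 * saving_cny / COST_HIGH_END_CNY

print(f"Saving: {saving_cny:,.0f} yuan (~${saving_cny / CNY_PER_USD:,.0f}), "
      f"about {saving_pct:.0f}% of the original cost")
print(f"Optimised cost per billion tokens: "
      f"{COST_OPTIMISED_CNY / (TOKENS_TRAINED / 1e9):,.0f} yuan")

On those reported numbers, the optimised setup saves roughly 20 per cent, or about 5,100 yuan per billion tokens trained.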
It also operates other AI services, including a virtual assistant app called Zhixiaobao and a financial advisory platform known as Maxiaocai. "If you find one point of attack to beat the world's best kung fu master, you can still say you beat them, which is why real-world application is important," said Robin Yu, chief technology officer of Beijing-based AI firm Shengshang Tech.

Ant has made its models open source. Ling-Lite has 16.8 billion parameters (the settings that help determine how a model functions), while Ling-Plus has 290 billion. For comparison, estimates suggest the closed-source GPT-4.5 has around 1.8 trillion parameters, according to MIT Technology Review.

Despite the progress, Ant's paper noted that training the models remains challenging: small adjustments to hardware or model structure during training sometimes resulted in unstable performance, including spikes in error rates.
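For readers unfamiliar with the Mixture of Experts technique discussed above, the sketch below shows a minimal MoE layer in PyTorch. It is a generic illustration under simple assumptions (a handful of feed-forward experts and a top-2 gate), not a description of Ant's Ling models: a small gating network scores the experts, routes each input to the highest-scoring ones, and mixes their outputs.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    """Minimal Mixture-of-Experts layer: a gate picks the top-k experts
    per input and mixes their outputs using the gate's weights."""

    def __init__(self, dim, num_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, num_experts)   # scores each expert per input
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                          # x: (batch, dim)
        scores = self.gate(x)                      # (batch, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)       # normalise over the chosen experts
        out = torch.zeros_like(x)
        # Only the selected experts run for each input; the rest stay idle,
        # which is what makes MoE cheaper than a dense layer of equal capacity.
        for slot in range(self.top_k):
            for expert_id, expert in enumerate(self.experts):
                mask = indices[:, slot] == expert_id
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = TinyMoELayer(dim=32)
print(layer(torch.randn(8, 32)).shape)             # torch.Size([8, 32])

Because only the selected experts run for each input, an MoE model can hold a very large total parameter count, such as Ling-Plus's 290 billion, while activating only a fraction of it per token; in general terms, that sparsity is where the training-cost savings targeted by such work come from.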
Bhavish Aggarwal's AI startup, Krutrim AI, has begun hosting Chinese GenAI company DeepSeek's open-source models on its cloud platform. Five models, ranging from 8 billion to 70 billion parameters, are now live on Indian servers at the world's lowest price.
OpenAI has begun using Google's tensor processing units (TPUs) to power ChatGPT and other products, marking a significant shift away from its reliance on Nvidia chips and Microsoft's data centers. The move aims to reduce the high costs associated with running its AI workloads.