Academic Seminar
Towards Fast Training of Deep Neural Networks
Date: 2020-11-05 | Editor: Research Office, College of Information Science and Engineering

Speaker: Prof. Tao Li, University of Florida, USA (IEEE Fellow)

Time: 9:00 AM, October 22, 2020

Venue: Tencent Meeting (Meeting ID: 962490518)


Abstract: Today's big, fast-moving data and constantly changing circumstances demand fast training of Deep Neural Networks (DNNs) in a wide range of applications. However, training a DNN with a huge number of parameters involves intensive computation. Motivated by the redundancy that exists in DNNs and by the observation that the ranking of weight significance changes only slightly during training, we propose Eager Pruning, which speeds up DNN training by moving pruning to an early stage. Eager Pruning is supported by an algorithm and architecture co-design: the proposed algorithm directs the architecture to identify and prune insignificant weights during training without accuracy loss, and a novel architecture transforms the reduced training computation into performance improvement. Our proposed Eager Pruning system achieves an average 1.91x speedup over a state-of-the-art hardware accelerator and 6.31x better energy efficiency than Nvidia GPUs.
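The core idea sketched in the abstract, pruning low-significance weights early in training and then continuing with the sparse model, can be illustrated in a few lines. The sketch below is only an illustration, not the paper's co-designed algorithm or hardware: it assumes magnitude as the significance criterion, and the function names (`magnitude_prune_mask`, `train_with_eager_pruning`) and the toy least-squares training loop are hypothetical.

```python
import numpy as np

def magnitude_prune_mask(w, sparsity):
    """Return a binary mask that zeroes the `sparsity` fraction of
    weights with the smallest magnitude (a common significance proxy)."""
    k = int(np.floor(sparsity * w.size))
    if k == 0:
        return np.ones_like(w)
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return (np.abs(w) > threshold).astype(w.dtype)

def train_with_eager_pruning(X, y, epochs=200, prune_epoch=20,
                             sparsity=0.5, lr=0.1):
    """Gradient descent on a least-squares model; pruning is applied
    early (at `prune_epoch`) rather than after training converges."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    mask = np.ones_like(w)
    for epoch in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad * mask        # pruned weights stay frozen at zero
        if epoch == prune_epoch:     # eager: prune early, then keep training
            mask = magnitude_prune_mask(w, sparsity)
            w *= mask
    return w, mask
```

Because the weight-significance ranking stabilizes early, the mask chosen at `prune_epoch` tends to match the one a fully trained model would produce, so the remaining epochs run on the sparser model; in the paper this reduced computation is what the custom architecture converts into speedup.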


Speaker bio: Prof. Tao Li is an Associate Editor-in-Chief (AEIC) of IEEE Transactions on Computers and an IEEE Fellow. From 2015 to 2017 he served as a Program Director for the CISE Directorate of the U.S. National Science Foundation (NSF). He received an NSF Early CAREER Award in 2009; IBM Faculty Awards in 2006, 2007, and 2008; the 2008 Microsoft Research Safe and Scalable Multicore Computing Award; and the 2006 Microsoft Research Trustworthy Computing Curriculum Award. He co-authored two papers that won Best Paper Awards at ICCD 2016 and HPCA 2011, and seven of his papers were nominated for Best Paper Awards at HPCA 2018, HPCA 2017, ICPP 2015, CGO 2014, DSN 2011, MICRO 2008, and MASCOTS 2006. He has been inducted into both the HPCA Hall of Fame and the MICRO Hall of Fame. He also received the University of Florida College of Engineering Doctoral Dissertation Advisor/Mentoring Award for 2013-2014 and 2011-2012.


Invited by: Prof. Kenli Li


Contact: Chubo Liu