7076 – Efficient Sparse Convolution for Deep Learning

Machine learning allows computers to learn from new data and make predictions or decisions that aren't explicitly programmed. This concept has broad implications for a myriad of applications, from launching entirely new technological developments to helping companies identify profitable opportunities. Areas where machine learning algorithms are currently used include:

- Fraud detection
- Speech recognition
- Self-driving cars
- Stock market prediction
- Advertising and market analysis
- Big data analysis

Deep learning is a type of machine learning that uses multiple processing layers to analyze input data. Image processing, for instance facial recognition or pedestrian identification in self-driving cars, is a common problem where deep learning is applied. Complex systems and algorithms have been developed specifically for deep learning and image processing tasks. The result is what are called artificial neural networks, which emulate how the mammalian brain might handle the same task by creating mathematical neurons, multiple processing layers, and connections between those neurons for complex data analysis.

While powerful, deep learning algorithms are computationally intensive. Current state-of-the-art processors, such as high-end GPUs, are used for deep learning tasks but weren't designed for this application. As a result, deep learning has historically required multiple processors and dedicated systems for complex data problems. This issue is compounded by the additional processing layers often required for large and complex data, which further increase the computational and memory demands. New circuits are needed that are specifically designed to handle deep learning tasks with better efficiency and performance than the processors in use today.
New deep learning CMOS circuit for high-performance, energy-efficient machine learning tasks

The technology is a deep learning CMOS neuromorphic processor capable of handling deep learning tasks with 3.3x better energy efficiency and 15.6x better area efficiency than other deep learning processing units. The chip was tested on the Caltech 101 dataset for face identification and achieved over 89% classification accuracy. The circuit is specifically designed to better handle elements common to most big data sets and problems, allowing for higher efficiency and better performance than currently used hardware.

Contact: Joohee Kim, jooheek@umich.edu, (734) 647-5730
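The listing does not disclose the circuit's internals, but the title points to sparse convolution as the source of the efficiency gains. As a hedged illustration of the general idea (not the licensed design), the sketch below compares a naive 1D convolution, which multiplies every input, with a sparse variant that skips zero activations entirely; activations after a ReLU layer are often mostly zero, so skipping them avoids most multiply-accumulate operations. All function names here are illustrative, not from the source.

```python
# Illustrative sketch only: why sparse convolution saves work when most
# activations are zero (common after ReLU in deep networks).

def dense_conv1d(x, w):
    """Naive 1D convolution (valid mode): every tap is multiplied."""
    n, k = len(x), len(w)
    return [sum(x[i + j] * w[j] for j in range(k)) for i in range(n - k + 1)]

def sparse_conv1d(x, w):
    """Same result, but iterates only over nonzero inputs, scattering
    each nonzero contribution into the output positions it affects."""
    n, k = len(x), len(w)
    out = [0] * (n - k + 1)
    for i, xi in enumerate(x):
        if xi == 0:          # skip zeros: no multiply, no accumulate
            continue
        for j in range(k):
            o = i - j        # output index this input/tap pair feeds
            if 0 <= o < len(out):
                out[o] += xi * w[j]
    return out

x = [0, 0, 3, 0, 0, 0, 2, 0]   # 75% sparse, e.g. post-ReLU activations
w = [1, -1, 2]
assert dense_conv1d(x, w) == sparse_conv1d(x, w)  # identical results
```

In software the skipped iterations save time; in dedicated hardware of the kind described above, skipping zero operands can also avoid memory traffic and switching activity, which is where energy-efficiency gains typically come from.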
