VP and GM, Artificial Intelligence and Analytics (AIA) at Intel
Dr. Wei Li is Vice President and General Manager of Artificial Intelligence and Analytics (AIA) at Intel, responsible for developing AI software and co-designing AI hardware. Together, the hardware and software form the computational engine for AI, serving as the bridge from data to insights for AI Everywhere. He leads a worldwide team of engineering “magicians” who accelerate time to solution by improving machine performance and developer productivity. Since starting his career as a computer scientist working on supercomputers, he has been a software technologist and leader at Intel for more than 20 years, always at the cutting edge of growth, helping the company transform from PC-only to data center and cloud, and now AI.
With a passion for technology, strategy, and execution, he and his team have been instrumental in Intel’s recent multi-billion-dollar AI revenue growth through hardware and software. His team has improved AI performance by 10–100x through software acceleration and enabled customers such as Mastercard, Burger King, AWS, and Google, among many others. He is responsible for a broad AI and analytics portfolio spanning deep learning, statistical machine learning, and big data analytics: optimizing mainstream AI software such as TensorFlow, PyTorch, MXNet, PaddlePaddle, TVM, XGBoost, scikit-learn, DGL, NumPy, SciPy, pandas, Modin, Numba, Python, and Spark, as well as developing Intel oneAPI performance libraries and productivity tools such as oneDNN, oneDNN Graph, oneCCL, oneDAL, BigDL, Intel Neural Compressor (INC), and the Intel AI Analytics Toolkit. His team is also responsible for AI hardware co-design on CPU and GPU, as well as heterogeneous and distributed computing.
Wei Li received his Ph.D. in Computer Science from Cornell University, and taught Advanced Compiling Techniques at Stanford University. He served as an associate editor for ACM Transactions on Programming Languages and Systems, and was on the Intel/Microsoft committee that funded the creation of parallel computing research centers at the University of California-Berkeley and the University of Illinois at Urbana-Champaign.
Watch live: March 7, 2023 @ 4:15 – 4:45 pm ET
Inside Stable Diffusion on CPU: Software and Hardware for AI Everywhere
We have a great opportunity to democratize AI. The challenge is that AI is very compute-intensive and often requires specialized AI accelerators. CPUs, however, are ubiquitous. With the recent built-in AI acceleration in mainstream CPUs, we have a breakthrough in making AI widely available. In this talk, we will show an amazing application, Stable Diffusion, running on Intel’s latest CPU, and take a peek inside the software and hardware that make this possible.