Strategies for Increasing AI Efficiency - Insights from the Cisco Research Efficient AI Summit
As AI models become more accurate, they are also becoming much larger and require significant computing power to run. How can we make the future of AI more scalable and sustainable?
Cisco Research hosted a virtual summit on efficient AI, bringing together researchers to explore the field's challenges and discuss opportunities to address them now and in the future.
The Cisco Research team has been working on efficient AI initiatives for several years, contributing research papers and incorporating their work into an open-source project called ModelSmith.
The summit included presentations from university professors who collaborate with the Cisco Research team: Elisa Ricci (University of Trento), Sijia Liu (Michigan State University), Yung-Hsiang Lu (Purdue University), and Ling Liu (Georgia Tech).
Outshift is Cisco’s incubation engine, innovating what's next and new for Cisco products and sharing our expertise on emerging technologies. Discover the latest on cloud native applications, cloud application security, generative AI, quantum networking and security, future-forward tech research, our latest open source projects and more.
Keep up with the speed of innovation:
→ Learn more: http://cs.co/6051uzKYc
→ Read our blog: http://cs.co/6052uzKYY
Connect with us on social media:
→ LinkedIn: http://cs.co/6053uzKYl
→ Twitter / X: http://cs.co/6054uzKYm
→ Subscribe to our YouTube channel: @OutshiftbyCisco
Timestamps:
00:00 Welcome and Introduction to the Summit
12:30 “Efficient Learning Over Multiple Tasks” with Elisa Ricci
43:30 “Zeroth-Order Optimization for Memory-Efficient Fine-Tuning of LLMs” with Sijia Liu
1:15:23 “Computer Vision for Edge Devices” with Yung-Hsiang Lu
1:40:24 “Resource Efficient Fine Tuning of Pre-Trained Large Language/Vision Models” with Ling Liu
2:15:13 Panel: Roundtable of Questions