---
layout: page
title: Syllabus
---

Learning Machines

  • Instructor: Tony Nowatzki
  • Term: Spring 2026
  • Textbook: None!

Course Objectives

The advancements and overwhelming success of machine learning have profoundly affected the future of computer architecture. Not only is learning on big data the leading application driver for future architectures, but machine learning techniques can also be used to improve hardware efficiency across a wide variety of application domains.

This course will explore, from a computer architecture perspective, the principles of hardware/software codesign for machine learning. One thrust of the course will delve into accelerator, CPU, and GPU enhancements for ML algorithms, including parallelization techniques. The other thrust of the course will focus on how machine learning can be used to optimize conventional architectures by dynamically learning and adapting to program behavior.

The most important specific goals are:

  • Develop skills in domain specialization (reasoning about how application/domain properties can be exploited with hardware mechanisms).
  • Gain an understanding of the current state of the art in acceleration for machine learning, both in academia and in industry.

There are also some general goals that hold for any architecture/hardware course:

  • Gain intuition and reasoning skills regarding fundamental architecture tradeoffs of hardware design choices (performance/area/power/complexity/generality).
  • Understand microarchitecture techniques for extracting parallelism and exploiting locality.
  • Learn about evaluation methods, including simulation, analytical modeling, and mechanistic models.

Course Components

Logistically, this course has four components.

  • Participation: Online forum discussion: During this course we will read a number of research papers from the literature. We will discuss these in Canvas's online discussion forum. See the discussion page for more instructions.

  • Participation: In-class discussion: We will also discuss papers in class; one way to earn participation points is to ask questions or share your thoughts, either aloud or in the live chat during class. Remember, this class will be fun and interesting if you make it so!

  • Mini-Projects: There will be two mini-projects, which may also be done in groups:

    • Parallelizing a machine learning kernel using CUDA on our Titan V GPU (or your own).
    • Building an ML-accelerator simulator that is functionally correct and produces accurate performance estimates.
  • Leading Class Discussion: Groups of students (2-3 students) will lead one lecture/discussion. (The group does not need to be the same as for the mini-projects.)

  • Project: Group-based research/implementation project with 1-4 students. Please see the project handout, and feel free to use Piazza to help form groups. You will need to propose a project by the beginning of the 5th week of class, so please start thinking early. See the project page for more details.

Grade Breakdown

  • 20% Discussion
    • Most students earn full points by participating meaningfully in each online forum discussion
    • I will award bonus points for consistent participation in in-class discussions
      • If you participate in every in-class discussion, that is also worth full discussion points
  • 15% Mini-projects
    • 7.5% Each
  • 25% Leading Class Discussion
  • 40% Project