▣ Lecture outline
The traditional speedup curve of Amdahl's law no longer applies to computer system performance. All recent high-performance designs from Intel, IBM, and Sun rely on multi-core technology. This shift from ILP (instruction-level parallelism) to TLP (thread-level parallelism) will reshape the design of future microprocessors. This course covers both ILP and TLP techniques. Topics include adaptive dynamic branch prediction, high-bandwidth instruction fetch, dynamic scheduling, multiple issue, speculation, multithreading, symmetric multiprocessors, distributed shared-memory multiprocessors, synchronization and consistency, and cache and memory hierarchy design.
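As a quick illustration of the speedup curve mentioned above, Amdahl's law states that if a fraction p of a program is parallelizable across n processors, the overall speedup is 1 / ((1 - p) + p / n). A minimal sketch (the function name and sample parameters are illustrative, not from the course materials):

```python
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Overall speedup when `parallel_fraction` of the work is spread
    over `n_processors` and the remainder stays serial (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# Even with 95% parallel work, 16 cores give well under 16x speedup;
# as n grows, the speedup is bounded by 1 / 0.05 = 20x.
print(round(amdahl_speedup(0.95, 16), 2))  # → 9.14
```

The serial fraction caps the achievable speedup regardless of core count, which is one motivation for the ILP-to-TLP shift discussed in this course.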
▣ Professor : Lynn Choi (firstname.lastname@example.org, Engineering Bldg #411, 3290-3249)
▣ Assistant : WonJoon Son (email@example.com, Engineering Bldg #236, 3290-3896)
▣ Time (Place) : Wednesday (1-2), WooJung Information & Communications Building #B103
▣ Textbook : "Computer Architecture: A Quantitative Approach", John L. Hennessy and David A. Patterson, Morgan Kaufmann, 5th Edition, 2012
▣ Reference book : A Collection of Research Papers
▣ Bulletin Board : http://it.korea.ac.kr/engine/index.php?mid=class_notice
▣ Class notice
1. Lecture Note 1 was updated on March 5.
2. Lecture Note 2 was updated on March 13.
3. Reading List was updated on March 20.
4. Lecture Note 3 was updated on March 26.
5. Lecture Note 4 was updated on April 2.
6. Lecture Note 5 was updated on April 10.
7. Lecture Note 6 was updated on April 30.
8. Lecture Note 7 was updated on April 30.
9. Lecture Note 8 was updated on May 8.
▣ Lecture slide
▣ Paper Presentation
2. Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks (ISCA, 2016)
2. ChargeCache: Reducing DRAM Latency by Exploiting Row Access Locality (HPCA, 2016)
2. E-RNN: Design Optimization for Efficient Recurrent Neural Networks in FPGA (HPCA, 2019)
2. A Configurable Cloud-Scale DNN Processor for Real-Time AI (ISCA, 2018)
김채영 - 1. Cambricon-S: Addressing Irregularity in Sparse Neural Network through A Cooperative Software/Hardware Approach (MICRO, 2018)
2. ComPEND: Computation Pruning through Early Negative Detection for ReLU in a Deep Neural Network Accelerator (ICS, 2018)
황병진 - 1. A Many-core Architecture for In-Memory Data Processing (MICRO, 2017)
2. Architectural Support for Probabilistic Branches (MICRO, 2018)
▣ Reading List