Reinventing Scheduling for Multicore Systems

Silas Boyd-Wickizer, Robert Morris, M. Frans Kaashoek
MIT

Abstract

High performance on multicore processors requires that schedulers be reinvented. Traditional schedulers focus on keeping execution units busy by assigning each core a thread to run. Schedulers ought instead to focus on high utilization of on-chip memory, rather than of execution cores, to reduce the impact of expensive DRAM and remote cache accesses. A challenge in achieving good use of on-chip memory is that the memory is split up among the cores in the form of many small caches. This paper argues for a form of scheduling that assigns each object and its operations to a specific core, moving a thread among the cores as it uses different objects.

1 Introduction

As the number of cores per chip grows, compute cycles will continue to become relatively more plentiful than access to off-chip memory. To achieve good performance, applications will need to make efficient use of on-chip memory [11]. On-chip memory is likely to continue to come in the form of many small caches associated with individual cores. A central challenge will be managing these caches to avoid off-chip memory accesses.
This paper argues that the solution requires a new approach to scheduling, one that focuses on assigning data objects to cores' caches rather than on assigning threads to cores. Schedulers in today's operating systems have the primary goal of keeping all cores busy executing some runnable thread. Use of on-chip memory is not explicitly scheduled: a thread's use of some data implicitly moves that data into the local core's cache. This implicit scheduling of on-chip memory often works well, but can be inefficient for read/write data shared among multiple threads, or for data that is too large to fit in one core's cache. For shared read/write data, the cache-coherence messages that ensure reads see the latest writes can saturate the system interconnect for some workloads. For large data sets, the .
