By performing this join in-network, REED can dramatically reduce the communication burden on the network topology, especially when there are relatively few satisfying tuples, as is typically the case when identifying failures in condition-based monitoring or process-compliance applications. Reducing communication in this way is particularly important in many industrial scenarios where relatively high-rate sampling (e.g., hundreds of Hertz) is required to perform the requisite monitoring [10]. Table 1 shows an example of the kinds of tables we expect to transmit; in this case, the filtration predicates vary with time and include conditions.

Crom: Faster Web Browsing Using Speculative Execution

James Mickens, Jeremy Elson, Jon Howell, and Jay Lorch
Microsoft Research
{mickens, jelson, jonh, lorch}@microsoft.com

Abstract

Early web content was expressed statically, making it amenable to straightforward prefetching to reduce user-perceived network delay. In contrast, today's rich web applications often hide content behind JavaScript event handlers, confounding static prefetching techniques. Sophisticated applications use custom code to prefetch data and do other anticipatory processing, but these custom solutions are costly to develop and application-specific. This paper introduces Crom, a generic JavaScript speculation engine that greatly simplifies the task of writing low-latency rich web applications. Crom takes preexisting, non-speculative event handlers and creates speculative versions, running them in a cloned browser context. If the user generates a speculated-upon event, Crom commits the precomputed result to the real browser context. Since Crom is written in JavaScript, it runs on unmodified client browsers. Using experiments with speculative versions of real applications, we show that pre-commit speculation overhead easily fits within user think time.
We also show that speculatively fetching page data and precomputing its layout can make subsequent page loads an order of magnitude faster.

1 Introduction

With the advent of web browsing, humans began a new era of waiting for slow networks. To reduce user-perceived download latencies, researchers devised ways for browsers to prefetch content and hide the fetch delay within users' think time [4, 15, 17, 20, 23]. Finding prefetchable objects was straightforward because the early web was essentially a graph of static objects stitched together by declarative links. To discover prefetchable data, one merely had to traverse these links. In the web's second decade, static content graphs have been steadily replaced by rich Internet applications (RIAs) that mimic the interactivity of desktop applications.
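The speculate-then-commit pattern described in the abstract can be illustrated with a small sketch. This is not Crom's actual API; the `speculate`/`commit` names are hypothetical, and instead of cloning a full browser context (DOM plus JavaScript heap) we clone a plain state object, which is enough to show the idea: run the handler ahead of time against a shadow copy, and apply the precomputed result only if the user actually triggers the event.

```javascript
// Hypothetical sketch of speculative event-handler execution, in the
// spirit of Crom. Real speculation clones a browser context; here we
// clone a plain JSON-serializable state object for simplicity.

function speculate(state, handler) {
  // Deep-copy the state so the handler's effects stay in the shadow copy.
  const shadow = JSON.parse(JSON.stringify(state));
  handler(shadow); // precompute the handler's effects speculatively
  return {
    // Commit: fold the precomputed shadow state back into the real state.
    commit() {
      Object.assign(state, shadow);
    }
  };
}

// Example: an "expensive" handler that precomputes a page's layout.
const page = { layout: null, clicks: 0 };
const onClick = (s) => { s.clicks += 1; s.layout = "computed"; };

const speculation = speculate(page, onClick);
console.log(page.layout); // still null: nothing committed yet

// Later, when the user actually clicks, commit the precomputed result.
speculation.commit();
console.log(page.layout); // "computed"
console.log(page.clicks); // 1
```

If the user never generates the event, the speculation is simply discarded and the real state is never touched; the cost of a wrong guess is wasted computation, not incorrect state.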