Parallel Programming: for Multicore and Cluster Systems - P25

Data structures of type MPI_Group cannot be directly accessed by the programmer, but MPI provides operations to obtain information about process groups. The size of a process group can be obtained by calling

int MPI_Group_size (MPI_Group group, int *size)

where the size of the group is returned in parameter size. The rank of the calling process in a group can be obtained by calling

int MPI_Group_rank (MPI_Group group, int *rank)

where the rank is returned in parameter rank. The function

int MPI_Group_compare (MPI_Group group1, MPI_Group group2, int *res)

can be used to check whether two group representations group1 and group2 describe the same group. The parameter value res = MPI_IDENT is returned if both groups contain the same processes in the same order. The parameter value res = MPI_SIMILAR is returned if both groups contain the same processes, but group1 uses a different order than group2. The parameter value res = MPI_UNEQUAL means that the two groups contain different processes. The function

int MPI_Group_free (MPI_Group *group)

can be used to free a group representation if it is no longer needed. The group handle is set to MPI_GROUP_NULL.

Operations on Communicators

A new intra-communicator for a given group of processes can be generated by calling

int MPI_Comm_create (MPI_Comm comm, MPI_Group group, MPI_Comm *new_comm)

where comm specifies an existing communicator. The parameter group must specify a process group which is a subset of the process group associated with comm. For a correct execution, it is required that all processes of comm perform the call of MPI_Comm_create() and that each of these processes specifies the same group argument. As a result of this call, each calling process which is a member of group obtains a pointer to the new communicator in new_comm. Processes not belonging to group get MPI_COMM_NULL as return value in new_comm. MPI also provides functions to get information about communicators. These functions are implemented as local operations.
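To illustrate how these operations work together, the following sketch obtains the group of MPI_COMM_WORLD, queries its size, collects the processes with even world ranks into a subgroup, and creates a new communicator for that subgroup. The sketch additionally uses MPI_Comm_group and MPI_Group_incl, standard MPI functions not described in this excerpt; the even-rank selection is only an illustrative choice.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Group world_group, even_group;
    MPI_Comm even_comm;
    int world_rank, group_size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Obtain the group associated with MPI_COMM_WORLD
       and query its size with MPI_Group_size. */
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    MPI_Group_size(world_group, &group_size);

    /* Build a subgroup containing the processes with even world
       ranks; MPI_Group_incl selects members by listing their ranks. */
    int n_even = (group_size + 1) / 2;
    int even_ranks[n_even];
    for (int i = 0; i < n_even; i++)
        even_ranks[i] = 2 * i;
    MPI_Group_incl(world_group, n_even, even_ranks, &even_group);

    /* All processes of MPI_COMM_WORLD must call MPI_Comm_create with
       the same group argument; only members of even_group obtain a
       valid communicator, all others get MPI_COMM_NULL. */
    MPI_Comm_create(MPI_COMM_WORLD, even_group, &even_comm);

    if (even_comm != MPI_COMM_NULL) {
        int group_rank;
        MPI_Group_rank(even_group, &group_rank);
        printf("world rank %d has rank %d in the even group\n",
               world_rank, group_rank);
        MPI_Comm_free(&even_comm);
    }

    /* Free the group representations once they are no longer needed;
       the handles are set to MPI_GROUP_NULL. */
    MPI_Group_free(&even_group);
    MPI_Group_free(&world_group);

    MPI_Finalize();
    return 0;
}

Note that since MPI_Comm_create is collective over comm, every process of MPI_COMM_WORLD executes the call with an identical group argument; processes outside even_group simply receive MPI_COMM_NULL and skip the subsequent communication on the new communicator.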
