MAP UNIT 4 MCQ

This document provides 30 multiple choice questions and answers about distributed memory programming with MPI. It covers topics such as MPI processes, the MPI execution command, MPI functions like MPI_Init and MPI_Finalize, MPI datatypes, MPI communication modes, collective communication functions, and data partitioning techniques in MPI.


UNIT 4

DISTRIBUTED MEMORY PROGRAMMING WITH MPI

1) A program running on one core-memory pair is called a

a) function
b) process
c) passing
d) partitioning
Ans: (b)

2) Which command is used to execute the MPI program?

a) $ mpiexec -a <number of processes> ./mpi
b) $ mpiexec -n <number of processes> ./mpi
c) $ mpiexec -a <number of processes> ./mpi_hello
d) $ mpiexec -n <number of processes> ./mpi_hello
Ans: (d)
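For context, a minimal sketch of the kind of program such a command would launch (the file and executable names mpi_hello.c / mpi_hello are illustrative):

    /* mpi_hello.c -- minimal MPI program (illustrative sketch) */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, comm_sz;
        MPI_Init(&argc, &argv);                    /* set up MPI */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* rank of the calling process */
        MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);   /* number of processes */
        printf("Hello from process %d of %d\n", rank, comm_sz);
        MPI_Finalize();                            /* release resources allocated for MPI */
        return 0;
    }

It would typically be compiled with $ mpicc -o mpi_hello mpi_hello.c and run with $ mpiexec -n 4 ./mpi_hello (4 processes chosen only as an example).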

3) The mpi.h header file contains

a) prototypes of MPI functions
b) macro definitions
c) type definitions
d) all of the above
Ans: (d)

4) Which of the following are MPI constructs?

a) MPI_Init
b) MPI_Finalize
c) MPI_rank
d) MPI_world
Ans: both (a) and (b)

5) MPI_Comm_size returns

a) the rank of the calling process
b) the number of processes
c) an offset
d) an error code
Ans: (b)

6) What is another name for the message passing interface?

a) Collective communication
b) Collective networks
c) Collective interface
d) None of the above
Ans: (a)

7) What is a tag used for?

a) To distinguish messages
b) To identify characters
c) To identify knowledge
d) None of the above
Ans: (a)

8) Which C datatype corresponds to the MPI_CHAR datatype?

a) signed char
b) signed int
c) signed float
d) unsigned int
Ans: (a)

9) What is a collection of cores connected to a globally accessible memory called?

a) Shared memory system
b) Distributed memory system
c) CPU
d) Memory
Ans: (a)

10) What is a collection of core-memory pairs connected by a network called?

a) Shared memory system
b) Distributed memory system
c) CPU
d) Mouse
Ans: (b)

11) Which of the following is not an MPI predefined datatype?

a) MPI_CHAR
b) MPI_SHORT
c) MPI_INT
d) char
Ans: (d)

12) What is computed from the number of seconds that have elapsed since some time in the past?

a) Elapsed parallel time
b) Elapsed collision time
c) IQ time
d) None of the above
Ans: (a)

13) Elapsed serial time is calculated in

a) Milliseconds
b) Microseconds
c) Nanoseconds
d) Minutes
Ans: (b)
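As a hedged illustration, elapsed parallel time is commonly measured with MPI_Wtime, which returns the number of seconds elapsed since some time in the past; the local work being timed below is only a placeholder:

    /* timing sketch: measuring elapsed parallel time with MPI_Wtime (illustrative) */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);
        MPI_Barrier(MPI_COMM_WORLD);         /* line the processes up before timing */
        double start = MPI_Wtime();          /* seconds since some time in the past */

        double sum = 0.0;                    /* placeholder local work */
        for (long i = 1; i <= 10000000L; i++)
            sum += 1.0 / i;

        double finish = MPI_Wtime();
        printf("local elapsed time = %e seconds (sum = %f)\n", finish - start, sum);
        MPI_Finalize();
        return 0;
    }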

14) What is the meaning of MPI_MAX?

a) Maximum
b) Minimum
c) Average
d) Most maximum
Ans: (a)

15) What is the meaning of MPI_BOR?

a) Bitwise OR
b) Bitwise AND
c) Bitwise
d) Sum
Ans: (a)
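Both names are predefined reduction operators; a minimal sketch of how they might be passed to MPI_Reduce (the local values contributed by each process are illustrative):

    /* reduction sketch: MPI_MAX and MPI_BOR as predefined operators (illustrative) */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, local, global_max, global_or;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        local = rank + 1;   /* each process contributes one value */
        MPI_Reduce(&local, &global_max, 1, MPI_INT, MPI_MAX, 0, MPI_COMM_WORLD);  /* maximum */
        MPI_Reduce(&local, &global_or,  1, MPI_INT, MPI_BOR, 0, MPI_COMM_WORLD);  /* bitwise OR */

        if (rank == 0)
            printf("max = %d, bitwise OR = %d\n", global_max, global_or);
        MPI_Finalize();
        return 0;
    }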

16) What are the types of communication modes?

a) Blocking and non-blocking modes
b) Blocking mode
c) Unblocked mode
d) Anything
Ans: (a)

17) What involves communication among all processes in a process group?

a) Collective communication
b) Non-collective communication
c) Communication
d) None of the above
Ans: (a)

18) What does MPI stand for?

a) Message passing interface
b) Message process interface
c) Message passing identity
d) None of the above
Ans: (a)

19) _____ releases the resources allocated for MPI.

a) MPI_Finalize
b) MPI_Init
c) MPI_Address
Ans: (a)

20) What is the syntax of MPI_Finalize?

a) int MPI_Finalize(void)
b) int MPI_Finalize()
c) MPI_Finalize(void)
Ans: (a)

21) MPI_Comm_rank returns

a) a process rank
b) the rank of the calling process
c) the number of process ranks
Ans: (b)

22) Which of the following is not an MPI library function?

a) MPI_Abort
b) MPI_Comm_free
c) MPI_COMM_WORLD
Ans: (c)

23) In ____ mode, the program will not continue until the communication is completed.

a) non-blocking
b) blocking
c) ideal
Ans: (b)

24) _________ sends data from all processes to all processes.

a) MPI_Allreduce
b) MPI_Allgather
c) MPI_Gather
Ans: (b)
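A minimal sketch of MPI_Allgather, in which every process contributes one value and every process receives the assembled array (the buffer size and values are illustrative):

    /* allgather sketch: every process sends its value to every process (illustrative) */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int my_value = rank * 10;       /* one value per process */
        int all_values[64];             /* assumes size <= 64 for this sketch */
        MPI_Allgather(&my_value, 1, MPI_INT, all_values, 1, MPI_INT, MPI_COMM_WORLD);

        printf("process %d sees:", rank);
        for (int i = 0; i < size; i++)
            printf(" %d", all_values[i]);
        printf("\n");
        MPI_Finalize();
        return 0;
    }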

25) The ________ function is used to delete a cached attribute.

a) MPI_Cancel
b) MPI_Attr_delete
c) MPI_delete
Ans: (b)

26) The C datatype corresponding to MPI_LONG_LONG is

a) signed long long int
b) long long int
c) unsigned long long int
Ans: (a)

27) What is the type argument of MPI_Send?

a) MPI_CHAR
b) MPI_INT
c) MPI_LONG
Ans: (a)
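To show where the type argument (and the tag from question 7) appears, here is a hedged two-process sketch using MPI_Send and MPI_Recv with MPI_CHAR; the message text and ranks are illustrative, and it assumes at least two processes are running:

    /* send/recv sketch: the datatype argument (MPI_CHAR) and a tag (illustrative) */
    #include <stdio.h>
    #include <string.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank;
        char msg[32];
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            strcpy(msg, "greetings");
            MPI_Send(msg, strlen(msg) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);  /* tag 0 distinguishes messages */
        } else if (rank == 1) {
            MPI_Recv(msg, 32, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("process 1 received: %s\n", msg);
        }
        MPI_Finalize();
        return 0;
    }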
28) __________ assigns the components in a round-robin fashion.

a) block partition
b) cyclic partition
c) block-cyclic partition
Ans: (b)

29) A block partition assigns a block of __________ components to each process.

a) local_n consecutive
b) n consecutive
c) consecutive
Ans: (a)
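A small serial sketch contrasting the two schemes for an n-component array split across comm_sz processes, with local_n = n / comm_sz (the values 12 and 3 are only examples and assume n divides evenly):

    /* partitioning sketch: block vs cyclic assignment of array components (illustrative) */
    #include <stdio.h>

    int main(void) {
        int n = 12, comm_sz = 3;       /* 12 components, 3 processes (assumed divisible) */
        int local_n = n / comm_sz;     /* block size per process */

        for (int rank = 0; rank < comm_sz; rank++) {
            printf("process %d, block partition :", rank);
            for (int i = rank * local_n; i < (rank + 1) * local_n; i++)
                printf(" %d", i);      /* local_n consecutive components */
            printf("\n");

            printf("process %d, cyclic partition:", rank);
            for (int i = rank; i < n; i += comm_sz)
                printf(" %d", i);      /* round-robin assignment */
            printf("\n");
        }
        return 0;
    }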

30) The C datatype corresponding to MPI_UNSIGNED is

a) unsigned int
b) unsigned char
c) unsigned long
Ans: (a)
