seankhl/multigputests

testing different multi-GPU parallelism strategies

What This Is

This is a toy simulation code for testing different ways of achieving multi-GPU parallelism. There is a GPUDirect version (native CUDA) and an MPI version (CUDA-aware MPI). In the future we could feasibly look at solutions that incorporate Thrust or NCCL. We could also look at a native CUDA version that uses remote memory access instead of explicit GPU<->GPU communication.
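
As an illustration of the GPUDirect approach, below is a minimal sketch (not the code in this repo) of a direct device-to-device copy using CUDA peer access; the device IDs, buffer size, and omitted error checking are assumptions for brevity:

    // Minimal sketch of a CUDA peer-to-peer (GPUDirect P2P) copy between two GPUs.
    // Assumes devices 0 and 1 exist and are peer-capable; error checking omitted.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int canAccess = 0;
        cudaDeviceCanAccessPeer(&canAccess, 0, 1);
        if (!canAccess) { std::printf("no P2P between devices 0 and 1\n"); return 1; }

        const size_t bytes = 1 << 20;
        float *buf0 = nullptr, *buf1 = nullptr;

        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);  // let device 0 access device 1's memory
        cudaMalloc(&buf0, bytes);

        cudaSetDevice(1);
        cudaMalloc(&buf1, bytes);

        // Copy device 0 -> device 1 without staging through host memory.
        cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
        cudaDeviceSynchronize();

        cudaFree(buf1);
        cudaSetDevice(0);
        cudaFree(buf0);
        return 0;
    }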

To run the MPI code, execute the following command:

    /usr/local/mpi-cuda/bin/mpirun -np 1 --mca btl openib,self mpi_test
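
With a CUDA-aware MPI build, device pointers can be handed directly to MPI calls. The snippet below is a minimal sketch of that pattern, not the mpi_test source; the buffer size and rank layout (one GPU per rank, at least two ranks, e.g. -np 2) are assumptions:

    // Minimal sketch of CUDA-aware MPI: device pointers are passed straight to
    // MPI_Send/MPI_Recv and the MPI library handles the GPU-side transfer.
    // Assumes >= 2 ranks with one visible GPU each; error checking omitted.
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int n = 1024;
        float* d_buf = nullptr;
        cudaMalloc(&d_buf, n * sizeof(float));  // device buffer; never copied to host here

        if (rank == 0) {
            MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }

To check whether an Open MPI install was built with CUDA support, ompi_info can be queried:

    ompi_info --parsable --all | grep mpi_built_with_cuda_support:value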

Helpful Links

Link to version of Open MPI used:

How to build Open MPI with CUDA-Aware support:

NVIDIA docs about CUDA-Aware MPI:

Other helpful/interesting links:

Links to info about multi-GPU programming:

Helpful CUDA wrappers for future reference:

How to run the CUDA samples (there is a multi-GPU sample):
