mCUDA-MEME is an ultrafast, scalable motif discovery algorithm for multiple GPUs, based on MEME (version 4.4.0) and implemented with a hybrid combination of the CUDA, MPI and OpenMP parallel programming models. It extends CUDA-MEME (based on MEME version 3.5.4) in both accuracy and speed, and has been tested on a GPU cluster with eight compute nodes and two Fermi-based Tesla S2050 (as well as Tesla-based Tesla S1070) quad-GPU computing systems, running Linux with the MPICH2 library. Experimental results show that the algorithm scales well with both dataset size and the number of GPUs. At present, the OOPS and ZOOPS models are supported, which are sufficient for most motif discovery applications. In addition, the algorithm has been incorporated into the NVIDIA Tesla Bio Workbench and deployed on the NIH Biowulf cluster.
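
To illustrate the hybrid parallel layout described above, here is a minimal sketch (not the actual mCUDA-MEME source) of how MPI, OpenMP and CUDA are typically combined: MPI ranks split the input across nodes, one OpenMP thread per GPU drives each device on a node, and a CUDA kernel does the per-position work. All names in the sketch (score_positions, NUM_POSITIONS, the placeholder scoring) are illustrative assumptions, not the project's real code.

// Hypothetical MPI + OpenMP + CUDA skeleton; placeholder kernel stands in
// for the motif-scoring work that mCUDA-MEME performs on each GPU.
#include <mpi.h>
#include <omp.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

__global__ void score_positions(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2.0f;   // placeholder for motif scoring
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    int ngpus = 0;
    cudaGetDeviceCount(&ngpus);
    if (ngpus < 1) { MPI_Finalize(); return 1; }

    const int NUM_POSITIONS = 1 << 20;           // total work items
    const int per_rank = NUM_POSITIONS / nranks; // this rank's share
                                                 // (assumes even division)

    // One OpenMP thread per GPU on this node; each thread processes a
    // slice of the rank's share on its own device.
    #pragma omp parallel num_threads(ngpus)
    {
        int tid = omp_get_thread_num();
        cudaSetDevice(tid);

        int per_gpu = per_rank / ngpus;
        std::vector<float> h_in(per_gpu, 1.0f), h_out(per_gpu);

        float *d_in, *d_out;
        cudaMalloc(&d_in,  per_gpu * sizeof(float));
        cudaMalloc(&d_out, per_gpu * sizeof(float));
        cudaMemcpy(d_in, h_in.data(), per_gpu * sizeof(float),
                   cudaMemcpyHostToDevice);

        int threads = 256, blocks = (per_gpu + threads - 1) / threads;
        score_positions<<<blocks, threads>>>(d_in, d_out, per_gpu);
        cudaMemcpy(h_out.data(), d_out, per_gpu * sizeof(float),
                   cudaMemcpyDeviceToHost);

        cudaFree(d_in);
        cudaFree(d_out);
        printf("rank %d, GPU %d: processed %d positions\n",
               rank, tid, per_gpu);
    }

    // In a full implementation, per-rank partial results (e.g. the
    // best-scoring motif starts) would be combined with an MPI reduction.
    MPI_Finalize();
    return 0;
}

In this arrangement the OOPS/ZOOPS scoring loops run entirely on the GPUs, while MPI handles inter-node distribution and OpenMP keeps all GPUs within a node busy from a single process.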

Additional Project Details

Registered: 2017-05-19