By default, TensorFlow 1.14 allocates all GPU memory, even when it is not needed at run time. For the ranker task in this library, if an indexing library that uses the GPU (for instance through faiss-gpu) is to be deployed alongside it, a memory error like the one below occurs:
```
RuntimeError: Error in void faiss::gpu::allocMemorySpaceV(faiss::gpu::MemorySpace, void**, size_t) at gpu/utils/MemorySpace.cpp:26: Error: 'err == cudaSuccess' failed: failed to cudaMalloc 1610612736 bytes (error 2 out of memory)
```
I have added `config.gpu_options.per_process_gpu_memory_fraction = 0.7` to the session-creation configuration in `algorithm/base.py`. This explicitly limits the GPUs' memory usage during training, leaving the remaining 0.3 reserved for indexing. Of course, there are drawbacks; do you think there is another path to take?
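A minimal sketch of the change described above, using the TF 1.x `ConfigProto` API (the surrounding session-creation code in `algorithm/base.py` is an assumption, not quoted from the repository):

```python
import tensorflow as tf  # TensorFlow 1.x API

# Cap this process's allocation at 70% of each GPU's memory,
# leaving roughly 30% free for the faiss-gpu index.
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.7

sess = tf.Session(config=config)
```

One alternative with different trade-offs is `config.gpu_options.allow_growth = True`, which makes TensorFlow allocate GPU memory on demand rather than reserving a fixed fraction up front; whether enough remains for faiss then depends on the training workload.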