GPU not detected with TensorFlow

I have a laptop with an NVIDIA RTX 5080, but TensorFlow does not recognize the GPU when I run the following code:

import tensorflow as tf
print("TensorFlow version:", tf.__version__)
print("Num GPUs Available:", len(tf.config.list_physical_devices('GPU')))

I have installed CUDA 11.8 and cuDNN 8.6 following all the recommended steps, and I have also upgraded the driver as advised by an NVIDIA support executive.
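In case it helps with diagnosis, the snippet below prints the CUDA and cuDNN versions the installed TensorFlow wheel was built against; the dictionary keys are what I understand tf.sysconfig.get_build_info() to return on 2.x builds and may differ between versions:

import tensorflow as tf

# Build-time CUDA/cuDNN versions baked into this TensorFlow wheel
# (keys such as 'cuda_version' / 'cudnn_version' are what I see on 2.x builds).
build = tf.sysconfig.get_build_info()
print("Built with CUDA:", build.get("cuda_version"))
print("Built with cuDNN:", build.get("cudnn_version"))
print("GPU devices:", tf.config.list_physical_devices("GPU"))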

I continue to face the same issue and need help with this so that I can run both TF (2.10) and PyTorch without any hassle.
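Since the same machine also needs to run PyTorch, this is the equivalent check I would use on that side (a minimal sketch, assuming a CUDA-enabled torch build is installed):

import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    # Name of the first visible CUDA device, e.g. the laptop's RTX GPU.
    print("Device name:", torch.cuda.get_device_name(0))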

OS: Windows 11
Primary objective: Run deep learning models on images/videos using TF and PyTorch

Currently the GPU is detected when I run the following code:

import tensorflow as tf
print("TensorFlow version:", tf.__version__)
print("Num GPUs Available:", len(tf.config.list_physical_devices('GPU')))

Output:
TensorFlow version: 2.10.0
Num GPUs Available: 1

However, the benchmark runs are still noticeably slower than the numbers reported on the web.
Code:

import tensorflow as tf
import time
import os

# Ensure TensorFlow uses the GPU
physical_devices = tf.config.list_physical_devices('GPU')
if physical_devices:
    tf.config.experimental.set_memory_growth(physical_devices[0], True)
else:
    print("No GPU found.")
    exit()

# Set parameters
size = 4096  # Size of the matrix (NxN)
iterations = 10

print(f"TensorFlow version: {tf.__version__}")
print("Num GPUs Available:", len(physical_devices))
print("Device Name:", tf.config.experimental.get_device_details(physical_devices[0])['device_name'])

# Warm-up
a = tf.random.normal([size, size])
b = tf.random.normal([size, size])
c = tf.matmul(a, b)
tf.experimental.numpy.experimental_enable_numpy_behavior()  # Optional

# Benchmark
times = []
for i in range(iterations):
    a = tf.random.normal([size, size])
    b = tf.random.normal([size, size])
    start_time = time.time()
    c = tf.matmul(a, b)
    _ = c.numpy()  # Force execution
    end_time = time.time()
    times.append(end_time - start_time)
    print(f"Iteration {i+1}: {times[-1]:.4f} seconds")

avg_time = sum(times) / iterations
print(f"\nAverage execution time over {iterations} iterations: {avg_time:.4f} seconds")

Output:
TensorFlow version: 2.10.0
Num GPUs Available: 1
Device Name: NVIDIA GeForce RTX 5080 Laptop GPU
Iteration 1: 0.0750 seconds
Iteration 2: 0.0635 seconds
Iteration 3: 0.0655 seconds
Iteration 4: 0.0580 seconds
Iteration 5: 0.0555 seconds
Iteration 6: 0.0560 seconds
Iteration 7: 0.0565 seconds
Iteration 8: 0.0560 seconds
Iteration 9: 0.0545 seconds
Iteration 10: 0.0561 seconds

Average execution time over 10 iterations: 0.0597 seconds
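By my rough arithmetic, a 4096x4096 float32 matmul is about 2 * 4096^3 ≈ 137 GFLOPs, so ~0.06 s per matmul works out to only around 2.3 TFLOPS, which seems far below what this card should manage. To rule out per-call launch overhead and the device-to-host copy in my timing, I plan to re-measure with the tighter variant below; the chained-matmul idea and the explicit /GPU:0 placement are my own choices, not from any official benchmark:

import time
import tensorflow as tf

size = 4096
reps = 20  # matmuls chained per timed call to amortize launch overhead

@tf.function
def chained_matmul(a, b):
    # Chain several matmuls so only the final result leaves the GPU.
    c = a
    for _ in range(reps):
        c = tf.matmul(c, b)
    return c

with tf.device('/GPU:0'):
    a = tf.random.normal([size, size])
    # Scale b so chained products stay O(1) and do not overflow float32.
    b = tf.random.normal([size, size]) / (size ** 0.5)

_ = chained_matmul(a, b).numpy()   # warm-up (includes tracing/compilation)

start = time.time()
_ = chained_matmul(a, b).numpy()   # .numpy() forces the work to finish
per_matmul = (time.time() - start) / reps
print(f"~{per_matmul * 1000:.2f} ms per {size}x{size} matmul "
      f"(~{2 * size**3 / per_matmul / 1e12:.2f} TFLOPS)")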

Please let me know how to use this RTX 5080 GPU to its full capacity.
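For reference, one thing I intend to try next is casting the operands to float16 before the matmul, on the assumption (mine, not confirmed) that this path engages the Tensor Cores; a minimal sketch:

import time
import tensorflow as tf

size = 4096

with tf.device('/GPU:0'):
    # float16 operands, assuming this lets the matmul use Tensor Cores
    a = tf.cast(tf.random.normal([size, size]), tf.float16)
    b = tf.cast(tf.random.normal([size, size]), tf.float16)

_ = tf.matmul(a, b).numpy()   # warm-up

start = time.time()
_ = tf.matmul(a, b).numpy()   # .numpy() forces execution before the clock stops
print(f"float16 matmul time: {time.time() - start:.4f} seconds")

If the float16 time comes out dramatically lower than the float32 numbers above, that would at least tell me the hardware is capable and the gap is down to precision or configuration.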