I downloaded 30GB of mlx-community/Llama-3.3-70B-Instruct-3bit only to find out:
🥲 Failed to load the model
Failed to load model
Error when loading model: ValueError: [quantize] The requested number of bits 3 is not supported. The supported bits are 2, 4 and 8.
LM Studio should know ahead of time that it does not support 3-bit quantization and prevent the download, since the model will not work.
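A pre-download check like the one requested could be a simple lookup: MLX quantized repos publish their bit width in the `quantization` section of `config.json`, and the error message above says MLX only supports 2-, 4-, and 8-bit weights. This is a hypothetical sketch (the function name and config shape are illustrative, not LM Studio's actual API):

```python
# Hypothetical pre-download validation, assuming the repo's config.json
# exposes a "quantization" dict with a "bits" field (as mlx-community
# repos do). The supported set comes from the ValueError in this report.

SUPPORTED_BITS = {2, 4, 8}

def mlx_can_load(config: dict) -> bool:
    """Return True if MLX's quantize kernels can load this model."""
    quant = config.get("quantization")
    if quant is None:
        return True  # unquantized model, nothing to reject
    return quant.get("bits") in SUPPORTED_BITS

# A 3-bit repo would be rejected before any bytes are downloaded:
print(mlx_can_load({"quantization": {"group_size": 64, "bits": 3}}))  # False
print(mlx_can_load({"quantization": {"group_size": 64, "bits": 4}}))  # True
```

Running such a check against the repo's `config.json` (a few KB) before fetching 30 GB of weights would have surfaced the incompatibility immediately.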