Chip makers like Micron surely won't like DeepSeek; someone on YouTube described his experience
Posted by: archlute_BU [★品衔R6★] on 2025-01-22 16:20 · 723 reads · 1 like


In reply to: "The beginning of the end for Silicon Valley's AI hegemony and mythology: a brief look at the significance of DeepSeek R1" by nohup, 2025-01-22 15:19

DeepSeek does not need powerful hardware: "I am running the deepseek-r1:14b-qwen-distill-q8_0 variant locally (Ollama on Kubuntu 24.04) on my cheap ASRock DeskMeet X600 PC without a dedicated GPU. My AMD Ryzen 5 8600G has only 16 TOPS and a 65-watt power limit. I have 64 GB of RAM, which can be FULLY USED for inference. Inference is slow. Highly complex tasks (prompts) can run for up to 5 minutes, but writing a well-structured prompt takes me even longer, and the result saves me hours of work. The PC supports up to 128 GB of RAM, so running a 70B model should work perfectly when time is not an issue. Thanks to the low power consumption there are no heat problems. So you trade speed for unlimited model size; for me that is the perfect solution, especially considering that this is a sub-$1,000 setup. Smaller models naturally run much faster!"
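A minimal sketch of how the quoted setup could be driven programmatically, assuming Ollama is installed locally and the model has already been pulled (for example with "ollama pull deepseek-r1:14b-qwen-distill-q8_0", the tag named in the quote). It calls Ollama's standard local HTTP API; the prompt text is only a placeholder, not from the original post:

    import json
    import urllib.request

    # Ollama serves a local HTTP API on port 11434 by default.
    OLLAMA_URL = "http://localhost:11434/api/generate"

    payload = {
        "model": "deepseek-r1:14b-qwen-distill-q8_0",  # model tag from the quoted comment
        "prompt": "Summarize the trade-offs of CPU-only local inference.",  # placeholder prompt
        "stream": False,  # wait for the full answer; CPU-only inference can take minutes
    }

    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

    # Read the complete JSON reply; the generated text is in the "response" field.
    with urllib.request.urlopen(req) as resp:
        result = json.loads(resp.read().decode("utf-8"))

    print(result["response"])

On hardware like the Ryzen 5 8600G described above, the request may block for several minutes, which is why streaming is disabled and the script simply waits for the full response.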