How can I fix this error?

[节点运行异常: [对话]] { [cause]: [oae [Error]: 500 “error swapping process group: could not find real modelID for hf-mirror.com/wszgrcy/chinese-text-correction-1.5b:F16” at t.generate (file:///e:/ShengHuaBi/resources/app/extensions/shenghuabi/index.mjs:478:8607) at zn.makeStatusError (file:///e:/ShengHuaBi/resources/app/extensions/shenghuabi/index.mjs:482:9840) at zn.makeRequest (file:///e:/ShengHuaBi/resources/app/extensions/shenghuabi/index.mjs:482:10919) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async #e (file:///e:/ShengHuaBi/resources/app/extensions/shenghuabi/index.mjs:1421:9503) at async L0.stream (file:///e:/ShengHuaBi/resources/app/extensions/shenghuabi/index.mjs:1421:10170) at async Object.o [as stream] (file:///e:/ShengHuaBi/resources/app/extensions/shenghuabi/index.mjs:2001:1108) at async Ffe.run (file:///e:/ShengHuaBi/resources/app/extensions/shenghuabi/index.mjs:2361:1256) at async #s (file:///e:/ShengHuaBi/reso…

Which method are you using to call the large language model?
ollama / llama.cpp / an OpenAI-compatible API?
The text-correction model currently runs locally, so only ollama and llama.cpp are supported.
With ollama it should be downloaded automatically.
For llama.cpp, you can add it as follows:

After filling in these two fields, click the download button. Once the download finishes, the model field will be updated automatically; then just save.
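For reference, the same model can also be fetched and served by hand, which helps when the in-app download fails. This is only a sketch: the repo name is taken from the error log above, but the exact `.gguf` filename and port are assumptions; check the repo's file list before running.

```shell
# Use the mirror instead of huggingface.co (huggingface_hub respects HF_ENDPOINT).
export HF_ENDPOINT=https://hf-mirror.com

# Download the GGUF file (filename below is an assumption -- verify it in the repo).
huggingface-cli download wszgrcy/chinese-text-correction-1.5b \
    chinese-text-correction-1.5b-f16.gguf --local-dir ./models

# Serve it with llama.cpp; the port the app expects is an assumption.
llama-server -m ./models/chinese-text-correction-1.5b-f16.gguf --port 8080
```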

I'm trying to switch from ollama to llama.cpp.
After uninstalling, reinstalling, and going through the steps again, it works now.
Thanks.
One more question: if I only use the correction feature, can I call the 7B model from HF?
The 1.5B model is a bit slow.

Yes, see here:

There are different model sizes and quantizations available; just change the suffix accordingly.
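To illustrate what "change the suffix" means: the model reference in the error log ends in a quantization tag after the colon, and swapping that tag selects a different precision. This is only a sketch; the 7B repo name below is an assumption and should be confirmed on the mirror first.

```shell
# Current reference (from the error log): full 16-bit weights.
# hf-mirror.com/wszgrcy/chinese-text-correction-1.5b:F16

# Hypothetical 7B variant with a 4-bit quantization tag:
# hf-mirror.com/wszgrcy/chinese-text-correction-7b:Q4_K_M
```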

When I switch to the 7B model, I get this error:

[节点运行异常: [对话]] { [cause]: [oae [Error]: 503 Process can not ProxyRequest, state is failed at t.generate (file:///e:/ShengHuaBi/resources/app/extensions/shenghuabi/index.mjs:478:8607) at zn.makeStatusError (file:///e:/ShengHuaBi/resources/app/extensions/shenghuabi/index.mjs:482:9840) at zn.makeRequest (file:///e:/ShengHuaBi/resources/app/extensions/shenghuabi/index.mjs:482:10919) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async #e (file:///e:/ShengHuaBi/resources/app/extensions/shenghuabi/index.mjs:1421:9503) at async L0.stream (file:///e:/ShengHuaBi/resources/app/extensions/shenghuabi/index.mjs:1421:10170) at async Object.o [as stream] (file:///e:/ShengHuaBi/resources/app/extensions/shenghuabi/index.mjs:2001:1108) at async Ffe.run (file:///e:/ShengHuaBi/resources/app/extensions/shenghuabi/index.mjs:2361:1256) at async #s (file:///e:/ShengHuaBi/resources/app/extensions/shenghuabi/index.mjs:2372:2465) at async dl.ex…


Click the llama.cpp entry in the status bar and check the logs there.
Has the model finished downloading?

The model has finished downloading.
(screenshot attached)

llama.cpp failed to load the local model; log below:

That's a VRAM out-of-memory error. How much VRAM do you have?
The model can't exceed that.

If you're on integrated graphics, you can adjust it like this:

Or switch to a smaller quantization, e.g. q8 or q4.
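The two mitigations above can be sketched as llama.cpp flags, assuming the app lets you pass llama-server options. The layer count and context size below are arbitrary examples, not recommended values, and the model filename is hypothetical.

```shell
# Offload only part of the model to the GPU; remaining layers stay in RAM.
# -ngl sets the number of GPU layers (20 here is just an example).
llama-server -m model-q4_k_m.gguf -ngl 20

# A smaller context window also reduces VRAM use.
llama-server -m model-q4_k_m.gguf --ctx-size 2048
```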

OK, thanks, I'll give it another try.