[Error: {"status":500,"text":"INTERNAL SERVER ERROR"}
at c:\Users\26002\AppData\Local\Programs\ShengHuaBi\resources\app\extensions\shenghuabi\index.js:2941:1378
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async xG.loadModel (c:\Users\26002\AppData\Local\Programs\ShengHuaBi\resources\app\extensions\shenghuabi\index.js:2941:1130)]
2025-04-04 03:21:02.045 [info] (garbled non-UTF-8 console output)
- Serving Flask app 'app'
- Debug mode: off
2025-04-04 03:21:02.053 [error] WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
- Running on http://127.0.0.1:9900
2025-04-04 03:21:02.053 [info] Running
2025-04-04 03:21:02.056 [error] Press CTRL+C to quit
2025-04-04 03:21:02.979 [error] d:\Knowledge Graph\python-addon\env\Lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: clean_up_tokenization_spaces was not set. It will be set to True by default. This behavior will be depracted in transformers v4.45, and will be then set to False by default. For more details check this issue: [BUG] GPT-2 tokenizer is NOT invertible · Issue #31884 · huggingface/transformers · GitHub
warnings.warn(
2025-04-04 03:21:05.178 [error] [2025-04-04 03:21:05,176] ERROR in app: Exception on /errorCorrection/loadModel [POST]
Traceback (most recent call last):
  File "d:\Knowledge Graph\python-addon\env\Lib\site-packages\flask\app.py", line 1473, in wsgi_app
    response = self.full_dispatch_request()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "d:\Knowledge Graph\python-addon\env\Lib\site-packages\flask\app.py", line 882, in full_dispatch_request
    rv = self.handle_user_exception(e)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "d:\Knowledge Graph\python-addon\env\Lib\site-packages\flask\app.py", line 880, in full_dispatch_request
    rv = self.dispatch_request()
         ^^^^^^^^^^^^^^^^^^^^^^^
  File "d:\Knowledge Graph\python-addon\env\Lib\site-packages\flask\app.py", line 865, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "app.py", line 36, in errorCorrectionLoadModel
  File "error_correction/error_correction.py", line 110, in loadModel
  File "error_correction/error_correction.py", line 50, in __init__
  File "d:\Knowledge Graph\python-addon\env\Lib\site-packages\pycorrector\macbert\macbert_corrector.py", line 30, in __init__
    self.model.to(device)
  File "d:\Knowledge Graph\python-addon\env\Lib\site-packages\transformers\modeling_utils.py", line 2905, in to
    return super().to(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "d:\Knowledge Graph\python-addon\env\Lib\site-packages\torch\nn\modules\module.py", line 1174, in to
    return self._apply(convert)
           ^^^^^^^^^^^^^^^^^^^^
  File "d:\Knowledge Graph\python-addon\env\Lib\site-packages\torch\nn\modules\module.py", line 780, in _apply
    module._apply(fn)
  File "d:\Knowledge Graph\python-addon\env\Lib\site-packages\torch\nn\modules\module.py", line 780, in _apply
    module._apply(fn)
  File "d:\Knowledge Graph\python-addon\env\Lib\site-packages\torch\nn\modules\module.py", line 854, in _apply
    self._buffers[key] = fn(buf)
                         ^^^^^^^
  File "d:\Knowledge Graph\python-addon\env\Lib\site-packages\torch\nn\modules\module.py", line 1160, in convert
    return t.to(
           ^^^^^
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb6 in position 2: invalid start byte
2025-04-04 03:21:05.185 [error] 127.0.0.1 - - [04/Apr/2025 03:21:05] "POST /errorCorrection/loadModel HTTP/1.1" 500 -
Not sure what's going on yet; I'll keep investigating.
Did it fail like this right from the start, or only after running for a while?
There is a cache folder under d:\Knowledge Graph\python-addon; you can delete it, then try running again.
No guarantee it will work, just something to try.
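Since the `UnicodeDecodeError` surfaces while `model.to(device)` reads buffers, a partially downloaded or corrupted cached model file is a plausible cause, which is why clearing the cache is worth a try. A minimal sketch of that cleanup step (the `cache` path follows the advice above; the helper name `clear_model_cache` is hypothetical, not part of the app):

```python
import shutil
from pathlib import Path


def clear_model_cache(cache_dir: Path) -> bool:
    """Delete the cached model files so the next loadModel call
    has to fetch them fresh. Returns True if a cache directory
    existed and was removed, False if there was nothing to delete."""
    if cache_dir.is_dir():
        shutil.rmtree(cache_dir)
        return True
    return False


if __name__ == "__main__":
    # Assumed location from the message above; adjust to your install.
    cache = Path(r"d:\Knowledge Graph\python-addon") / "cache"
    if clear_model_cache(cache):
        print(f"Removed {cache}; restart the app and retry loadModel.")
    else:
        print(f"No cache directory found at {cache}.")
```

If deleting the cache does not help, checking whether the individual weight files in it have the expected sizes (compared to the upstream model repo) would be the next diagnostic step.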