1.98.33

Download

  • Added heartbeat-timeout reset for downloads

Previously there was a low-speed timeout reset, but it only helped when an active download merely slowed down.
A heartbeat timeout means that if no new request is received within 10s of the previous one, the download is marked as timed out and restarted.
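
A minimal sketch of that idea in TypeScript (hypothetical names and structure; not the app's actual code):

```typescript
// Hypothetical sketch of the heartbeat watchdog described above.
// Each incoming request/chunk resets a 10s timer; if the timer ever
// fires, the download is flagged as timed out and gets restarted.
class HeartbeatWatchdog {
  private timer?: ReturnType<typeof setTimeout>;

  constructor(
    private readonly timeoutMs: number,    // e.g. 10_000 (10s)
    private readonly onTimeout: () => void // e.g. restart the download
  ) {}

  // Call on every received request/chunk.
  beat(): void {
    if (this.timer) clearTimeout(this.timer);
    this.timer = setTimeout(this.onTimeout, this.timeoutMs);
  }

  // Call when the download finishes or is cancelled.
  stop(): void {
    if (this.timer) clearTimeout(this.timer);
  }
}
```

Unlike the old low-speed check, this also catches the case where requests stop arriving entirely, not only the case where an active transfer slows down.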

ollama

Lightning fast releases, I can't keep up with the author at all.
By the way, how do I install and use this Intel build of ollama?

There's documentation for that already.

Installed it, but the 8b model still isn't fast; it's not as smooth as running it from cmd. No idea where the problem is. Ugh :man_facepalming:

Please copy the ollama log; click in the bottom-left corner.

2025-05-06 22:57:57.719 [info] installed
2025-05-06 22:57:57.719 [info] preparing to start
2025-05-06 22:57:59.568 [info] 2025/05/06 22:57:59 routes.go:1230: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:1000 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\zhu\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"

2025-05-06 22:57:59.568 [info] running
2025-05-06 22:57:59.573 [info] time=2025-05-06T22:57:59.572+08:00 level=INFO source=images.go:432 msg="total blobs: 26"

2025-05-06 22:57:59.574 [info] time=2025-05-06T22:57:59.573+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"

2025-05-06 22:57:59.575 [info] [GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:	export GIN_MODE=release
 - using code:	gin.SetMode(gin.ReleaseMode)


2025-05-06 22:57:59.575 [info] [GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func3 (5 handlers)
[GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func4 (5 handlers)
[GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)

2025-05-06 22:57:59.575 [info] time=2025-05-06T22:57:59.574+08:00 level=INFO source=routes.go:1297 msg="Listening on 127.0.0.1:11434 (version 0.6.2-intel-ollama-20250429)"
time=2025-05-06T22:57:59.574+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"

2025-05-06 22:57:59.575 [info] time=2025-05-06T22:57:59.575+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-05-06T22:57:59.575+08:00 level=INFO source=gpu_windows.go:183 msg="efficiency cores detected" maxEfficiencyClass=1
time=2025-05-06T22:57:59.575+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=16 efficiency=10 threads=16

2025-05-06 22:57:59.580 [info] time=2025-05-06T22:57:59.577+08:00 level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-05-06T22:57:59.577+08:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="31.4 GiB" available="20.6 GiB"

2025-05-06 22:58:39.081 [info] time=2025-05-06T22:58:39.081+08:00 level=INFO source=server.go:107 msg="system memory" total="31.4 GiB" free="20.7 GiB" free_swap="18.3 GiB"
time=2025-05-06T22:58:39.081+08:00 level=WARN source=ggml.go:149 msg="key not found" key=qwen3.vision.block_count default=0

2025-05-06 22:58:39.081 [info] time=2025-05-06T22:58:39.081+08:00 level=INFO source=server.go:154 msg=offload library=cpu layers.requested=-1 layers.model=37 layers.offload=0 layers.split="" memory.available="[20.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.6 GiB" memory.required.partial="0 B" memory.required.kv="1.1 GiB" memory.required.allocations="[6.6 GiB]" memory.weights.total="4.1 GiB" memory.weights.repeating="4.1 GiB" memory.weights.nonrepeating="486.9 MiB" memory.graph.full="768.0 MiB" memory.graph.partial="768.0 MiB"

2025-05-06 22:58:39.222 [info] get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory

2025-05-06 22:58:39.222 [info] llama_model_load_from_file_impl: using device SYCL0 (Intel(R) Arc(TM) Graphics) - 16762 MiB free
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
llama_model_load_from_file_impl: using device SYCL0 (Intel(R) Arc(TM) Graphics) - 16762 MiB free

2025-05-06 22:58:39.256 [info] llama_model_loader: loaded meta data with 28 key-value pairs and 399 tensors from C:\Users\zhu\.ollama\models\blobs\sha256-a3de86cd1c132c822487ededd47a324c50491393e6565cd14bafa40d0b8e686f (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3 8B
llama_model_loader: - kv   3:                           general.basename str              = Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 8B
llama_model_loader: - kv   5:                            general.license str              = apache-2.0
llama_model_loader: - kv   6:                          qwen3.block_count u32              = 36
llama_model_loader: - kv   7:                       qwen3.context_length u32              = 40960
llama_model_loader: - kv   8:                     qwen3.embedding_length u32              = 4096
llama_model_loader: - kv   9:                  qwen3.feed_forward_length u32              = 12288
llama_model_loader: - kv  10:                 qwen3.attention.head_count u32              = 32
llama_model_loader: - kv  11:              qwen3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  12:                       qwen3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                 qwen3.attention.key_length u32              = 128
llama_model_loader: - kv  15:               qwen3.attention.value_length u32              = 128
llama_model_loader: - kv  16:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  17:                         tokenizer.ggml.pre str              = qwen2

2025-05-06 22:58:39.273 [info] llama_model_loader: - kv  18:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...

2025-05-06 22:58:39.278 [info] llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...

2025-05-06 22:58:39.296 [info] llama_model_loader: - kv  20:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  22:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  26:               general.quantization_version u32              = 2
llama_model_loader: - kv  27:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  145 tensors
llama_model_loader: - type  f16:   36 tensors
llama_model_loader: - type q4_K:  199 tensors
llama_model_loader: - type q6_K:   19 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.86 GiB (5.10 BPW) 

2025-05-06 22:58:39.366 [info] load: special tokens cache size = 26

2025-05-06 22:58:39.385 [info] load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 8.19 B
print_info: general.name     = Qwen3 8B
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors

2025-05-06 22:58:39.392 [info] time=2025-05-06T22:58:39.392+08:00 level=INFO source=server.go:430 msg="starting llama server" cmd="C:\\ShengHuaBi\\ollama intel\\ollama-lib.exe runner --model C:\\Users\\zhu\\.ollama\\models\\blobs\\sha256-a3de86cd1c132c822487ededd47a324c50491393e6565cd14bafa40d0b8e686f --ctx-size 8192 --batch-size 512 --n-gpu-layers 999 --threads 6 --no-mmap --parallel 4 --port 61159"

2025-05-06 22:58:40.171 [info] time=2025-05-06T22:58:40.169+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1

2025-05-06 22:58:40.171 [info] time=2025-05-06T22:58:40.171+08:00 level=INFO source=server.go:605 msg="waiting for llama runner to start responding"

2025-05-06 22:58:40.172 [info] time=2025-05-06T22:58:40.172+08:00 level=INFO source=server.go:639 msg="waiting for server to become available" status="llm server error"

2025-05-06 22:58:40.219 [info] using override patterns: []

2025-05-06 22:58:40.224 [info] time=2025-05-06T22:58:40.218+08:00 level=INFO source=runner.go:874 msg="starting go runner"

2025-05-06 22:58:40.227 [info] ModelParams: {NumGpuLayers:999 MainGpu:0 UseMmap:false UseMlock:false TensorSplit:[] Progress:0x7ff61de2d8e0 VocabOnly:false OverrideTensors:[]}

2025-05-06 22:58:40.227 [info] time=2025-05-06T22:58:40.226+08:00 level=INFO source=ggml.go:109 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(clang)
time=2025-05-06T22:58:40.227+08:00 level=INFO source=runner.go:935 msg="Server listening on 127.0.0.1:61159"

2025-05-06 22:58:40.354 [info] get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
llama_model_load_from_file_impl: using device SYCL0 (Intel(R) Arc(TM) Graphics) - 16762 MiB free

2025-05-06 22:58:40.354 [info] get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
llama_model_load_from_file_impl: using device SYCL0 (Intel(R) Arc(TM) Graphics) - 16762 MiB free

2025-05-06 22:58:40.383 [info] llama_model_loader: loaded meta data with 28 key-value pairs and 399 tensors from C:\Users\zhu\.ollama\models\blobs\sha256-a3de86cd1c132c822487ededd47a324c50491393e6565cd14bafa40d0b8e686f (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3 8B
llama_model_loader: - kv   3:                           general.basename str              = Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 8B
llama_model_loader: - kv   5:                            general.license str              = apache-2.0
llama_model_loader: - kv   6:                          qwen3.block_count u32              = 36
llama_model_loader: - kv   7:                       qwen3.context_length u32              = 40960
llama_model_loader: - kv   8:                     qwen3.embedding_length u32              = 4096
llama_model_loader: - kv   9:                  qwen3.feed_forward_length u32              = 12288
llama_model_loader: - kv  10:                 qwen3.attention.head_count u32              = 32
llama_model_loader: - kv  11:              qwen3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  12:                       qwen3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                 qwen3.attention.key_length u32              = 128
llama_model_loader: - kv  15:               qwen3.attention.value_length u32              = 128
llama_model_loader: - kv  16:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  17:                         tokenizer.ggml.pre str              = qwen2

2025-05-06 22:58:40.399 [info] llama_model_loader: - kv  18:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...

2025-05-06 22:58:40.404 [info] llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...

2025-05-06 22:58:40.421 [info] llama_model_loader: - kv  20:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  22:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  26:               general.quantization_version u32              = 2
llama_model_loader: - kv  27:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  145 tensors
llama_model_loader: - type  f16:   36 tensors
llama_model_loader: - type q4_K:  199 tensors
llama_model_loader: - type q6_K:   19 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.86 GiB (5.10 BPW) 

2025-05-06 22:58:40.423 [info] time=2025-05-06T22:58:40.423+08:00 level=INFO source=server.go:639 msg="waiting for server to become available" status="llm server loading model"

2025-05-06 22:58:40.492 [info] load: special tokens cache size = 26

2025-05-06 22:58:40.511 [info] load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3
print_info: vocab_only       = 0
print_info: n_ctx_train      = 40960
print_info: n_embd           = 4096
print_info: n_layer          = 36
print_info: n_head           = 32
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 4
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06

2025-05-06 22:58:40.511 [info] print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: n_ff             = 12288
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 40960
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = ?B
print_info: model params     = 8.19 B
print_info: general.name     = Qwen3 8B
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory

2025-05-06 22:58:40.513 [info] get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
2025-05-06 22:58:40.513 [info] 

2025-05-06 22:58:40.759 [info] load_tensors: offloading 36 repeating layers to GPU

2025-05-06 22:58:40.759 [info] load_tensors: offloading output layer to GPU
load_tensors: offloaded 37/37 layers to GPU
load_tensors:        SYCL0 model buffer size =  4643.78 MiB
load_tensors:          CPU model buffer size =   333.84 MiB

2025-05-06 22:58:45.628 [info] llama_init_from_model: n_seq_max     = 4
llama_init_from_model: n_ctx         = 8192
llama_init_from_model: n_ctx_per_seq = 2048
llama_init_from_model: n_batch       = 2048
llama_init_from_model: n_ubatch      = 512
llama_init_from_model: flash_attn    = 0
llama_init_from_model: freq_base     = 1000000.0
llama_init_from_model: freq_scale    = 1
llama_init_from_model: n_ctx_per_seq (2048) < n_ctx_train (40960) -- the full capacity of the model will not be utilized
Running with Environment Variables:
  GGML_SYCL_DEBUG: 0
  GGML_SYCL_DISABLE_OPT: 1
Build with Macros:
  GGML_SYCL_FORCE_MMQ: no
  GGML_SYCL_F16: no
Found 1 SYCL devices:
|  |                   |                                       |       |Max    |        |Max  |Global |                     |
|  |                   |                                       |       |compute|Max work|sub  |mem    |                     |
|ID|        Device Type|                                   Name|Version|units  |group   |group|size   |       Driver version|
|--|-------------------|---------------------------------------|-------|-------|--------|-----|-------|---------------------|

2025-05-06 22:58:45.633 [info] | 0| [level_zero:gpu:0]|                     Intel Arc Graphics|  12.74|    128|    1024|   32| 17576M|            1.6.33184|
SYCL Optimization Feature:
|ID|        Device Type|Reorder|
|--|-------------------|-------|
| 0| [level_zero:gpu:0]|      Y|
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 36, can_shift = 1

2025-05-06 22:58:45.709 [info] llama_kv_cache_init:      SYCL0 KV buffer size =  1152.00 MiB
llama_init_from_model: KV self size  = 1152.00 MiB, K (f16):  576.00 MiB, V (f16):  576.00 MiB

2025-05-06 22:58:45.714 [info] llama_init_from_model:  SYCL_Host  output buffer size =     2.38 MiB

2025-05-06 22:58:45.715 [info] get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory

2025-05-06 22:58:45.717 [info] llama_init_from_model: pipeline parallelism enabled (n_copies=4)

2025-05-06 22:58:45.745 [info] llama_init_from_model:      SYCL0 compute buffer size =   432.77 MiB
llama_init_from_model:  SYCL_Host compute buffer size =    72.02 MiB
llama_init_from_model: graph nodes  = 1194
llama_init_from_model: graph splits = 3
time=2025-05-06T22:58:45.745+08:00 level=WARN source=runner.go:799 msg="%s: warming up the model with an empty run - please wait ... " !BADKEY=loadModel

2025-05-06 22:58:46.434 [info] time=2025-05-06T22:58:46.433+08:00 level=INFO source=server.go:644 msg="llama runner started in 6.26 seconds"

2025-05-06 22:59:11.524 [info] [GIN] 2025/05/06 - 22:59:11 | 200 |   32.4599713s |       127.0.0.1 | POST     "/v1/chat/completions"

2025-05-06 22:59:25.320 [info] [GIN] 2025/05/06 - 22:59:25 | 200 |   13.7902057s |       127.0.0.1 | POST     "/v1/chat/completions"

2025-05-06 22:59:47.054 [info] [GIN] 2025/05/06 - 22:59:47 | 200 |   21.7268569s |       127.0.0.1 | POST     "/v1/chat/completions"

2025-05-06 23:00:20.799 [info] [GIN] 2025/05/06 - 23:00:20 | 200 |   33.7410322s |       127.0.0.1 | POST     "/v1/chat/completions"

Is it running on pure CPU?
2025-05-06 22:57:59.580 [info] time=2025-05-06T22:57:59.577+08:00 level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-05-06T22:57:59.577+08:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="31.4 GiB" available="20.6 GiB"
If you run it directly, does it detect the GPU here?
Also, did you change the context length…
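
One quick way to check is ollama's /api/ps route (it appears in the route table above). A sketch, assuming the default port; the size_vram field is my assumption about the response shape:

```typescript
// While a chat is in flight, ask ollama where the loaded model lives.
// A non-zero size_vram would suggest the model is (at least partly)
// resident on the GPU rather than running purely on CPU.
const res = await fetch("http://127.0.0.1:11434/api/ps");
const { models } = await res.json();
for (const m of models ?? []) {
  console.log(m.name, "total:", m.size, "vram:", m.size_vram);
}
```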

time=2025-05-07T00:10:31.121+08:00 level=INFO source=server.go:107 msg="system memory" total="31.4 GiB" free="21.7 GiB" free_swap="18.5 GiB"
time=2025-05-07T00:10:31.121+08:00 level=WARN source=ggml.go:149 msg="key not found" key=qwen3.vision.block_count default=0
time=2025-05-07T00:10:31.122+08:00 level=INFO source=server.go:154 msg=offload library=cpu layers.requested=-1 layers.model=37 layers.offload=0 layers.split="" memory.available="[21.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.6 GiB" memory.required.partial="0 B" memory.required.kv="1.1 GiB" memory.required.allocations="[6.6 GiB]" memory.weights.total="4.1 GiB" memory.weights.repeating="4.1 GiB" memory.weights.nonrepeating="486.9 MiB" memory.graph.full="768.0 MiB" memory.graph.partial="768.0 MiB"
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
llama_model_load_from_file_impl: using device SYCL0 (Intel(R) Arc(TM) Graphics) - 16762 MiB free
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
llama_model_load_from_file_impl: using device SYCL0 (Intel(R) Arc(TM) Graphics) - 16762 MiB free
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
llama_model_load_from_file_impl: using device SYCL0 (Intel(R) Arc(TM) Graphics) - 16762 MiB free
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
llama_model_load_from_file_impl: using device SYCL0 (Intel(R) Arc(TM) Graphics) - 16762 MiB free
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
llama_model_load_from_file_impl: using device SYCL0 (Intel(R) Arc(TM) Graphics) - 16762 MiB free
llama_model_loader: loaded meta data with 28 key-value pairs and 399 tensors from C:\Users\zhu\.ollama\models\blobs\sha256-a3de86cd1c132c822487ededd47a324c50491393e6565cd14bafa40d0b8e686f (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3 8B
llama_model_loader: - kv   3:                           general.basename str              = Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 8B
llama_model_loader: - kv   5:                            general.license str              = apache-2.0
llama_model_loader: - kv   6:                          qwen3.block_count u32              = 36
llama_model_loader: - kv   7:                       qwen3.context_length u32              = 40960
llama_model_loader: - kv   8:                     qwen3.embedding_length u32              = 4096
llama_model_loader: - kv   9:                  qwen3.feed_forward_length u32              = 12288
llama_model_loader: - kv  10:                 qwen3.attention.head_count u32              = 32
llama_model_loader: - kv  11:              qwen3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  12:                       qwen3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                 qwen3.attention.key_length u32              = 128
llama_model_loader: - kv  15:               qwen3.attention.value_length u32              = 128
llama_model_loader: - kv  16:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  17:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  18:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  20:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  22:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  26:               general.quantization_version u32              = 2
llama_model_loader: - kv  27:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  145 tensors
llama_model_loader: - type  f16:   36 tensors
llama_model_loader: - type q4_K:  199 tensors
llama_model_loader: - type q6_K:   19 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.86 GiB (5.10 BPW)
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 8.19 B
print_info: general.name     = Qwen3 8B
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2025-05-07T00:10:31.580+08:00 level=INFO source=server.go:430 msg="starting llama server" cmd="C:\\ollama-intel-2.3.0b20250429-win\\ollama-lib.exe runner --model C:\\Users\\zhu\\.ollama\\models\\blobs\\sha256-a3de86cd1c132c822487ededd47a324c50491393e6565cd14bafa40d0b8e686f --ctx-size 8192 --batch-size 512 --n-gpu-layers 999 --threads 6 --no-mmap --parallel 4 --port 51970"
time=2025-05-07T00:10:32.357+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2025-05-07T00:10:32.357+08:00 level=INFO source=server.go:605 msg="waiting for llama runner to start responding"
time=2025-05-07T00:10:32.358+08:00 level=INFO source=server.go:639 msg="waiting for server to become available" status="llm server error"
using override patterns: []
time=2025-05-07T00:10:32.418+08:00 level=INFO source=runner.go:874 msg="starting go runner"
time=2025-05-07T00:10:32.424+08:00 level=INFO source=ggml.go:109 msg=system CPU.0.LLAMAFILE=1 compiler=cgo(clang)
ModelParams: {NumGpuLayers:999 MainGpu:0 UseMmap:false UseMlock:false TensorSplit:[] Progress:0x7ff70456d8e0 VocabOnly:false OverrideTensors:[]}
time=2025-05-07T00:10:32.426+08:00 level=INFO source=runner.go:935 msg="Server listening on 127.0.0.1:51970"
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
llama_model_load_from_file_impl: using device SYCL0 (Intel(R) Arc(TM) Graphics) - 16762 MiB free
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
llama_model_load_from_file_impl: using device SYCL0 (Intel(R) Arc(TM) Graphics) - 16762 MiB free
llama_model_loader: loaded meta data with 28 key-value pairs and 399 tensors from C:\Users\zhu\.ollama\models\blobs\sha256-a3de86cd1c132c822487ededd47a324c50491393e6565cd14bafa40d0b8e686f (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3 8B
llama_model_loader: - kv   3:                           general.basename str              = Qwen3
llama_model_loader: - kv   4:                         general.size_label str              = 8B
llama_model_loader: - kv   5:                            general.license str              = apache-2.0
llama_model_loader: - kv   6:                          qwen3.block_count u32              = 36
llama_model_loader: - kv   7:                       qwen3.context_length u32              = 40960
llama_model_loader: - kv   8:                     qwen3.embedding_length u32              = 4096
llama_model_loader: - kv   9:                  qwen3.feed_forward_length u32              = 12288
llama_model_loader: - kv  10:                 qwen3.attention.head_count u32              = 32
llama_model_loader: - kv  11:              qwen3.attention.head_count_kv u32              = 8
llama_model_loader: - kv  12:                       qwen3.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:     qwen3.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                 qwen3.attention.key_length u32              = 128
llama_model_loader: - kv  15:               qwen3.attention.value_length u32              = 128
llama_model_loader: - kv  16:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  17:                         tokenizer.ggml.pre str              = qwen2
time=2025-05-07T00:10:32.609+08:00 level=INFO source=server.go:639 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: - kv  18:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  19:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  20:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  22:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  24:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  25:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  26:               general.quantization_version u32              = 2
llama_model_loader: - kv  27:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  145 tensors
llama_model_loader: - type  f16:   36 tensors
llama_model_loader: - type q4_K:  199 tensors
llama_model_loader: - type q6_K:   19 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.86 GiB (5.10 BPW)
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3
print_info: vocab_only       = 0
print_info: n_ctx_train      = 40960
print_info: n_embd           = 4096
print_info: n_layer          = 36
print_info: n_head           = 32
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 4
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: n_ff             = 12288
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 40960
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = ?B
print_info: model params     = 8.19 B
print_info: general.name     = Qwen3 8B
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
load_tensors: offloading 36 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 37/37 layers to GPU
load_tensors:        SYCL0 model buffer size =  4643.78 MiB
load_tensors:          CPU model buffer size =   333.84 MiB
llama_init_from_model: n_seq_max     = 4
llama_init_from_model: n_ctx         = 8192
llama_init_from_model: n_ctx_per_seq = 2048
llama_init_from_model: n_batch       = 2048
llama_init_from_model: n_ubatch      = 512
llama_init_from_model: flash_attn    = 0
llama_init_from_model: freq_base     = 1000000.0
llama_init_from_model: freq_scale    = 1
llama_init_from_model: n_ctx_per_seq (2048) < n_ctx_train (40960) -- the full capacity of the model will not be utilized
Running with Environment Variables:
  GGML_SYCL_DEBUG: 0
  GGML_SYCL_DISABLE_OPT: 1
Build with Macros:
  GGML_SYCL_FORCE_MMQ: no
  GGML_SYCL_F16: no
Found 1 SYCL devices:
|  |                   |                                       |       |Max    |        |Max  |Global |                     |
|  |                   |                                       |       |compute|Max work|sub  |mem    |                     |
|ID|        Device Type|                                   Name|Version|units  |group   |group|size   |       Driver version|
|--|-------------------|---------------------------------------|-------|-------|--------|-----|-------|---------------------|
| 0| [level_zero:gpu:0]|                     Intel Arc Graphics|  12.74|    128|    1024|   32| 17576M|            1.6.33184|
SYCL Optimization Feature:
|ID|        Device Type|Reorder|
|--|-------------------|-------|
| 0| [level_zero:gpu:0]|      Y|
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 36, can_shift = 1
llama_kv_cache_init:      SYCL0 KV buffer size =  1152.00 MiB
llama_init_from_model: KV self size  = 1152.00 MiB, K (f16):  576.00 MiB, V (f16):  576.00 MiB
llama_init_from_model:  SYCL_Host  output buffer size =     2.38 MiB
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
llama_init_from_model: pipeline parallelism enabled (n_copies=4)
llama_init_from_model:      SYCL0 compute buffer size =   432.77 MiB
llama_init_from_model:  SYCL_Host compute buffer size =    72.02 MiB
llama_init_from_model: graph nodes  = 1194
llama_init_from_model: graph splits = 3
time=2025-05-07T00:10:39.317+08:00 level=WARN source=runner.go:799 msg="%s: warming up the model with an empty run - please wait ... " !BADKEY=loadModel
time=2025-05-07T00:10:40.130+08:00 level=INFO source=server.go:644 msg="llama runner started in 7.77 seconds"
[GIN] 2025/05/07 - 00:10:49 | 200 |   18.3922253s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2025/05/07 - 00:11:55 | 200 |   55.7583543s |       127.0.0.1 | POST     "/api/chat"

Not sure whether the GPU is being used; this is what it looks like from cmd. It's very fast.

In ShengHuaBi I reduced the context length to the value you suggested. In cmd I didn't change anything, and I wouldn't know how to anyway.

I tried two other apps calling qwen3:8b and both are perfectly smooth. Is something wrong in ShengHuaBi's settings? I seem to remember ShengHuaBi can call an offline LLM directly? How is that configured?

I checked, and the version is fine. The only difference is that I'm calling the OpenAI-compatible endpoint /v1/chat/completions, while the log pasted later uses /api/chat.

You can first disable the app's auto-start of ollama, then restart the app:

"shenghuabi.ollama.startup": false,

Then start ollama manually; the status in the app should now show as unknown.
If you haven't changed the port, ask the same question and check the speed.
The Ollama in use will then be your own instance, called by the app.

This helps narrow down whether the endpoint is the problem.
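
For instance, a rough timing harness like this (a sketch, assuming the default port and the qwen3:8b model; not the app's code) could send the same prompt to both endpoints and compare:

```typescript
// Time one non-streaming request against a locally running ollama.
async function timeEndpoint(url: string, body: object): Promise<number> {
  const t0 = Date.now();
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  await res.text(); // drain the full response before stopping the clock
  return Date.now() - t0;
}

const base = "http://127.0.0.1:11434";
const messages = [{ role: "user", content: "Why is the sky blue?" }];

// Native ollama endpoint vs. OpenAI-compatible endpoint, same prompt.
const tNative = await timeEndpoint(`${base}/api/chat`,
  { model: "qwen3:8b", messages, stream: false });
const tOpenAI = await timeEndpoint(`${base}/v1/chat/completions`,
  { model: "qwen3:8b", messages, stream: false });
console.log(`/api/chat: ${tNative} ms, /v1/chat/completions: ${tOpenAI} ms`);
```

Run it twice and compare the second round, so the model is already loaded for both endpoints.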

I tried what you said; it still shows /v1/chat/completions and it's still slow. Can this be improved?

However the app makes the call, it's always /v1/chat/completions (OpenAI-compatible); if that's the case, the problem may be with ollama itself.
Also, how slow is "slow", exactly? Do you have a concrete ratio? Since only the endpoint differs and the internal execution logic is identical, it shouldn't be slower to a visible degree.
My reading of "slow" would be that the CPU was used when the GPU should have been, or that VRAM overflowed; but judging from the logs, both runs look the same.
You also said it was still slow after switching, but by then Ollama was no longer started by the app, so it isn't a startup problem.
Lastly, I'll go verify whether these two endpoints actually differ in speed.
Could you also switch to the official Ollama (you were on the Intel build before, right?) and check whether the two endpoints run at the same speed on CPU only? If they do, perhaps the Intel fork changed too little and only modified one endpoint?

After all, the official build is CPU-only, so there's no further room to drop; if it's still slow there, device problems are ruled out.
The "slow" I mean here is relative slowness, i.e. comparing /api/chat against /v1/chat/completions.

It takes about 10x longer. When I call it from Obsidian and Cherry Studio, both are perfectly smooth. Does ShengHuaBi need special settings for the call, or further optimization? Or does ShengHuaBi only allow CPU mode? :man_facepalming:

  1. The app only calls ollama; it doesn't set any mode, so it behaves like any normal client.
  2. Following the steps above, please compare whether the two endpoints are equally fast under the official build (not the Intel fork); the timing sketch above can be reused for this.

Because the official build has no Intel adaptation, it uses the CPU. It's not that the app calling Ollama forces CPU mode; it's that the official build can only use the CPU on your machine. That's what makes the official build useful for isolating the problem: it rules out whether different endpoints give different speeds.
That is, as in the earlier reply: disable the app's auto-start of ollama and let the app call the external instance (once auto-start is off it automatically uses the external one, assuming you haven't changed the port), then compare against calling /api/chat (ollama run qwen3:8b).

  3. On every cold start the model must be loaded, so when measuring generation speed you should start counting from the first generated character, not from the instant you click to chat, because every model has a loading phase.

When ollama run qwen3:8b executes, it loads the model first and only then lets you type a message, which makes it easy to overlook the load time (since loading happens with no conversation, it feels like it doesn't exist; but when the app makes the call, first-response time = model load time + generation time, and the automatic load makes it feel slow). A sketch of separating the two phases follows below.
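
A sketch of how one might separate the two phases, using the streaming /api/chat endpoint (same assumptions as the sketch above): the time to the first streamed chunk includes any model load on a cold start, while the remainder is pure generation.

```typescript
// Measure time-to-first-chunk (load + prefill) separately from the
// rest of the generation, against a locally running ollama.
const t0 = Date.now();
const res = await fetch("http://127.0.0.1:11434/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "qwen3:8b",
    messages: [{ role: "user", content: "Hello" }],
    stream: true,
  }),
});

const reader = res.body!.getReader();
let firstChunkAt: number | null = null;
while (true) {
  const { done } = await reader.read();
  if (done) break;
  if (firstChunkAt === null) firstChunkAt = Date.now();
}
const end = Date.now();
if (firstChunkAt === null) firstChunkAt = end; // no output at all
console.log(`load + prefill: ${firstChunkAt - t0} ms, generation: ${end - firstChunkAt} ms`);
```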

  4. So two things currently need to be confirmed: whether the endpoints in point 2 affect speed, and whether the speed measurement in point 3 excludes model loading.

In short, although the app starts ollama on the user's behalf, that in no way restricts it to CPU; it simply starts it for you. If you suspect a problem, don't let the app start it, start it yourself, and run as normal.
If you still think something is wrong, there are only two possibilities: either ollama's two endpoints behave differently, one hitting the CPU and the other the GPU (I don't think that's possible in the official build, since I've used it for a long time and it always uses the GPU, so the Intel fork might have an issue), or the speed measurement is off, with one timing including model load and the other not, creating the impression of a difference. There are no other possibilities.

Thanks. I can't figure it out; I'll just stick with qwen2.5:7b in ShengHuaBi. :pray: