```
ramalama run gemma3:12b
:: initializing oneAPI environment ...
   entrypoint.sh: BASH_VERSION = 5.2.32(1)-release
   args: Using "$@" for setvars.sh arguments: llama-run -c 2048 --temp 0.8 --ngl 999 /mnt/models/model.file
:: compiler -- latest
:: mkl -- latest
:: tbb -- latest
:: umf -- latest
:: oneAPI environment initialized ::
Loading model
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gemma3'
llama_model_load_from_file_impl: failed to load model
initialize_model: error: unable to load model from file: /mnt/models/model.file
```