Add Gemma 4 MLX install-path support #19065
Conversation
```diff
-# Check if model uses sliding window attention
+# Check if model uses sliding window attention. Multimodal configs like
 sliding_window = getattr(model.config, "sliding_window", None)
```
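For context, a minimal sketch (not taken from the PR) of how such a lookup could handle nested configs; the `text_config` fallback is an assumption based on the truncated comment above:

```python
def get_sliding_window(config):
    # Top-level lookup, as in the diff above.
    sliding_window = getattr(config, "sliding_window", None)
    if sliding_window is None:
        # Assumption: multimodal HF configs may nest the language-model
        # settings under `text_config`, so fall back to it.
        text_config = getattr(config, "text_config", None)
        if text_config is not None:
            sliding_window = getattr(text_config, "sliding_window", None)
    return sliding_window
```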
Does this regress gemma3?
```python
logger = logging.getLogger(__name__)


def _iter_mlx_backend_candidates():
```
There was a problem hiding this comment.
This code should not be needed. Did you run

```
python install_executorch.py --editable
```

on a Mac with Xcode installed? If so, did you see a message in the install logs about MLX installation being skipped for some reason?
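For readers following the thread, a hypothetical sketch of what a candidate-iteration helper like this typically does; the module paths are assumptions for illustration, not the PR's actual list:

```python
import importlib

def _iter_mlx_backend_candidates():
    # Try each plausible location of the MLX backend in turn and yield
    # whichever modules import successfully.
    for module_name in (
        "executorch.backends.mlx",  # installed package (assumed path)
        "backends.mlx",             # in-tree source checkout (assumed path)
    ):
        try:
            yield importlib.import_module(module_name)
        except ImportError:
            continue
```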
```cpp
}

try {
  std::cerr << "MLX init: constructing handle" << std::endl;
```
Looks fantastic! A couple questions:
```sh
QEMBEDDING_ARGS="--qembedding ${QCONFIG}"
if [ "${MODEL_ID}" = "google/gemma-4-E2B-it" ]; then
  QEMBEDDING_ARGS=""
```
```diff
 logger.info(f"Loading model from {pte_path}...")
-et_runtime = Runtime.get()
+et_runtime = _ensure_mlx_backend_registered()
```
This shouldn't be needed, see comment on the install process.
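For reference, a hypothetical sketch of what a helper like `_ensure_mlx_backend_registered` plausibly does (import the backend package for its registration side effect, then return the runtime); the import path is an assumption:

```python
from executorch.runtime import Runtime

def _ensure_mlx_backend_registered() -> Runtime:
    try:
        # Assumption: importing the MLX backend package registers
        # MLXBackend as a side effect.
        import executorch.backends.mlx  # noqa: F401
    except ImportError as exc:
        raise RuntimeError("MLX backend not available in this install") from exc
    return Runtime.get()
```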
```diff
 # Decode only the newly generated tokens (not the input prompt)
 new_tokens = generated_tokens[seq_len:]
-generated_text = tokenizer.decode(new_tokens, skip_special_tokens=True)
+generated_text = text_processor.decode(new_tokens, skip_special_tokens=True)
```
Does this break the path where `uses_processor=False`?
Can we unify these two paths somehow?
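One possible shape for unifying the two paths, sketched under the assumption that HF processors delegate `decode` to their tokenizer; the helper name and signature are hypothetical:

```python
def decode_new_tokens(generated_tokens, seq_len, *, tokenizer=None, text_processor=None):
    # Resolve a single decoder object once so both paths share one call site.
    decoder = text_processor if text_processor is not None else tokenizer
    # Decode only the newly generated tokens (not the input prompt).
    new_tokens = generated_tokens[seq_len:]
    return decoder.decode(new_tokens, skip_special_tokens=True)
```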
Summary
Enable Gemma 4 on the MLX backend through the HuggingFace export/run path.
This PR:

- adds Gemma 4 support to `backends/mlx/examples/llm/export_llm_hf.py`
- adds Gemma 4 support to `backends/mlx/examples/llm/run_llm_hf.py`
- lets both scripts run from the installed package without setting `PYTHONPATH`

This PR does not add Gemma 4 support to the internal `export_llm` / `examples/models/gemma4/` path.

Test plan
Manual validation on Apple Silicon macOS using the installed package from `.venv/site-packages`:

```sh
python -m executorch.backends.mlx.examples.llm.export_llm_hf \
  --model-id google/gemma-4-E2B-it \
  --output /tmp/gemma4_custom_qlinear_only_installed.pte \
  --qlinear 4w \
  --use-custom-sdpa \
  --use-custom-kv-cache
```

```sh
python -m executorch.backends.mlx.examples.llm.run_llm_hf \
  --pte /tmp/gemma4_custom_qlinear_only_installed.pte \
  --model-id google/gemma-4-E2B-it \
  --prompt "What is the capital of France?" \
  --max-new-tokens 50
```

### Validation
- installed import path resolves from `.venv/lib/python3.12/site-packages/executorch/...`
- `MLXBackend` is registered from the installed package
- export succeeds for `google/gemma-4-E2B-it` with `--qlinear 4w --use-custom-sdpa --use-custom-kv-cache`
- runtime succeeds without `PYTHONPATH`
- generated output contains "Paris"
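A quick way to spot-check the first two items from a Python shell; `backend_registry.registered_backend_names` is assumed from the executorch Python runtime API:

```python
import executorch
from executorch.runtime import Runtime

# Expect a site-packages path rather than a source checkout.
print(executorch.__file__)

# Expect "MLXBackend" to appear in the registered backend list
# (attribute name assumed from the Python runtime API).
runtime = Runtime.get()
print(runtime.backend_registry.registered_backend_names)
```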