
Add Gemma 4 MLX install-path support #19065

Draft
zeel2104 wants to merge 1 commit into pytorch:main from zeel2104:gemma4-mlx-install-path

Conversation

zeel2104 commented Apr 23, 2026

Summary

Enable Gemma 4 on the MLX backend through the HuggingFace export/run path.

This PR:

  • adds Gemma 4 support for backends/mlx/examples/llm/export_llm_hf.py
  • adds Gemma 4 text-only support to backends/mlx/examples/llm/run_llm_hf.py
  • fixes Gemma 4 hybrid-cache handling for the shared KV layout and mixed sliding/full-attention cache types (see the sketch after this summary)
  • makes the normal installed package path work without PYTHONPATH
  • limits MLX docs and CI coverage to the exact Gemma 4 configuration that was validated

This PR does not add Gemma 4 support to the internal export_llm / examples/models/gemma4/ path.
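
As a rough illustration of the hybrid-cache handling mentioned above, here is a minimal sketch of how sliding-window and full-attention layers might be split when sizing the KV cache. It assumes the HuggingFace config exposes a per-layer layer_types list alongside sliding_window; the exact Gemma 4 field names are an assumption, not taken from this PR's diff.

# Sketch only, not the PR's implementation: classify decoder layers so the two
# cache types can be handled separately. `layer_types` and its
# "sliding_attention" values are assumed config fields.
def split_cache_layers(config):
    sliding_window = getattr(config, "sliding_window", None)
    layer_types = getattr(config, "layer_types", None)
    if sliding_window is None or layer_types is None:
        # No hybrid cache: treat every layer as full attention.
        return [], list(range(getattr(config, "num_hidden_layers", 0)))
    sliding = [i for i, t in enumerate(layer_types) if t == "sliding_attention"]
    full = [i for i, t in enumerate(layer_types) if t != "sliding_attention"]
    return sliding, full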

Test plan

Manual validation on Apple Silicon macOS using the installed package from .venv/site-packages:

python -m executorch.backends.mlx.examples.llm.export_llm_hf \
  --model-id google/gemma-4-E2B-it \
  --output /tmp/gemma4_custom_qlinear_only_installed.pte \
  --qlinear 4w \
  --use-custom-sdpa \
  --use-custom-kv-cache

python -m executorch.backends.mlx.examples.llm.run_llm_hf \
  --pte /tmp/gemma4_custom_qlinear_only_installed.pte \
  --model-id google/gemma-4-E2B-it \
  --prompt "What is the capital of France?" \
  --max-new-tokens 50

Validation

- installed import path resolves from .venv/lib/python3.12/site-packages/executorch/... (a quick check is sketched after this list)
- MLXBackend is registered from the installed package
- export succeeds for google/gemma-4-E2B-it with --qlinear 4w --use-custom-sdpa --use-custom-kv-cache
- runtime succeeds without PYTHONPATH
- generated output contains Paris
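
A quick way to reproduce the first two checks from a plain Python shell (a minimal sketch; backend_registry and registered_backend_names are assumptions about the ExecuTorch Python API and may differ by version):

# Verify the installed import path and MLX backend registration.
import executorch
from executorch.runtime import Runtime

print(executorch.__file__)  # expect .../.venv/lib/python3.12/site-packages/executorch/__init__.py
runtime = Runtime.get()
# Attribute names below are an assumption; expect "MLXBackend" in the list.
print(runtime.backend_registry.registered_backend_names)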

pytorch-bot Bot commented Apr 23, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19065

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

⚠️ 12 Awaiting Approval

As of commit 5f455a2 with merge base c391738:

AWAITING APPROVAL - The following workflows need approval before CI can run:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla Bot commented Apr 23, 2026

Hi @zeel2104!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!

@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

meta-cla Bot commented Apr 23, 2026

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

meta-cla Bot added the CLA Signed label Apr 23, 2026
zeel2104 marked this pull request as draft April 23, 2026 12:23

# Check if model uses sliding window attention
sliding_window = getattr(model.config, "sliding_window", None)
# Check if model uses sliding window attention. Multimodal configs like
Contributor
Does this regress gemma3?
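
For reference, a hedged sketch of the fallback that the truncated comment above appears to describe (the text_config attribute name is an assumption and is not confirmed by this diff):

# Sketch only: multimodal configs may nest the language-model settings, so fall
# back to a nested text config before reading sliding_window.
text_config = getattr(model.config, "text_config", model.config)
sliding_window = getattr(text_config, "sliding_window", None)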

logger = logging.getLogger(__name__)


def _iter_mlx_backend_candidates():
Contributor
This code should not be needed. Did you do:

python install_executorch.py --editable

on a Mac machine with Xcode installed? If so, did you see a comment in the install logs about MLX installation being skipped for some reason?

}

try {
std::cerr << "MLX init: constructing handle" << std::endl;
Contributor
Remove debug logging?

metascroy (Contributor) commented

Looks fantastic!

A couple questions:

  1. Are we regressing gemma3 support at all?
  2. Does it work without "--use-custom-sdpa --use-custom-kv-cache" flags? If not, why? (This PR can stay focused on the custom path, I'm just curious what went wrong)
  3. Did you try embedding quant? If so, did something go wrong?

Comment thread on .github/workflows/mlx.yml

QEMBEDDING_ARGS="--qembedding ${QCONFIG}"
if [ "${MODEL_ID}" = "google/gemma-4-E2B-it" ]; then
QEMBEDDING_ARGS=""
Contributor
Why no embedding?


logger.info(f"Loading model from {pte_path}...")
et_runtime = Runtime.get()
et_runtime = _ensure_mlx_backend_registered()
Contributor
This shouldn't be needed, see comment on the install process.

# Decode only the newly generated tokens (not the input prompt)
new_tokens = generated_tokens[seq_len:]
generated_text = tokenizer.decode(new_tokens, skip_special_tokens=True)
generated_text = text_processor.decode(new_tokens, skip_special_tokens=True)
Contributor
Does this break the path where uses_processor=False?

Can we unify these two paths somehow?
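
One possible shape for unifying the two decode calls above (a sketch that assumes a uses_processor flag already exists, as the review comment suggests, and that both objects expose decode with skip_special_tokens):

# Sketch of a single decode helper covering both paths; names mirror the
# surrounding diff and the assumed `uses_processor` flag.
def decode_new_tokens(generated_tokens, seq_len, uses_processor, text_processor, tokenizer):
    # Decode only the newly generated tokens (not the input prompt).
    new_tokens = generated_tokens[seq_len:]
    decoder = text_processor if uses_processor else tokenizer
    return decoder.decode(new_tokens, skip_special_tokens=True)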
