
Hexagon: Add Q4_1 matmul support #52

Open

max-krasnyansky wants to merge 4 commits into master from hexagon-q4_1-matmul-10953119451036652015

Conversation

@max-krasnyansky
Owner

Implemented Q4_1 support for the Hexagon backend's MUL_MAT operation.
This involves:

  1. Defining HTP_TYPE_Q4_1 and QK_Q4_1x4x2 (256-element block size).
  2. Implementing repack_q4_1_q4_1x4x2 to convert standard Q4_1 blocks to a Hexagon-friendly format.
  3. Implementing HVX kernels (vec_dot_q4_1x4x2_q8x4x2_...) to compute the dot product against a Q8_0 (or dynamically Q8_0-quantized) source.
  4. Updating the op_matmul dispatch logic to handle Q4_1.

The implementation follows the pattern of the existing Q4_0 support, but adds handling for the minimum value m that is inherent to Q4_1 quantization.
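
For context, Q4_1 differs from Q4_0 in that each block carries a minimum m in addition to the scale d, so an unsigned 4-bit quant q in [0,15] decodes as d*q + m (Q4_0 has no m and decodes signed quants as d*(q-8)). A minimal scalar sketch of that decode, assuming ggml's standard 32-element block_q4_1 layout; `fp16_to_fp32` is a stand-in for whatever half-precision conversion the backend provides:

```c
#include <stdint.h>

#define QK4_1 32

extern float fp16_to_fp32(uint16_t h); // assumed helper: fp16 -> fp32

// Standard ggml-style Q4_1 block: fp16 scale, fp16 min, 16 packed bytes.
typedef struct {
    uint16_t d;             // scale (fp16)
    uint16_t m;             // min   (fp16)
    uint8_t  qs[QK4_1 / 2]; // two 4-bit quants per byte
} block_q4_1;

// Each unsigned quant q decodes as d*q + m; low nibbles hold elements
// 0..15, high nibbles hold elements 16..31.
static void dequant_q4_1(const block_q4_1 *b, float out[QK4_1]) {
    const float d = fp16_to_fp32(b->d);
    const float m = fp16_to_fp32(b->m);
    for (int i = 0; i < QK4_1 / 2; i++) {
        out[i]             = d * (float)(b->qs[i] & 0x0F) + m;
        out[i + QK4_1 / 2] = d * (float)(b->qs[i] >> 4)   + m;
    }
}
```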


PR created automatically by Jules for task 10953119451036652015 started by @max-krasnyansky

- Add Q4_1 repacking logic (Q4_1x4x2) in ggml-hexagon.cpp
- Add Q4_1 vector dot product kernels in matmul-ops.c
- Enable Q4_1 support in HTP backend dispatch
- Update htp-msg.h with new type definitions

The Q4_1x4x2 format packs 256 elements into 160 bytes:
- 128 bytes of 4-bit quants (0-15)
- 16 bytes of scales (fp16)
- 16 bytes of mins (fp16)
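
As a picture, a hypothetical C struct with that layout (field names and ordering are illustrative; only the sizes come from the description above):

```c
#include <stdint.h>

#define QK_Q4_1x4x2 256

// Illustrative layout of one repacked super-block: 256 elements in 160 bytes.
// 256 elements = 8 sub-blocks of 32, hence 8 fp16 scales and 8 fp16 mins.
typedef struct {
    uint8_t  qs[QK_Q4_1x4x2 / 2]; // 128 bytes: unsigned 4-bit quants (0-15)
    uint16_t d[8];                // 16 bytes: per-sub-block scales (fp16)
    uint16_t m[8];                // 16 bytes: per-sub-block mins (fp16)
} block_q4_1x4x2; // 128 + 16 + 16 = 160 bytes
```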

Kernels compute dot product as: sum((d*q + m) * y) = d*sum(q*y) + m*sum(y).
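
A scalar reference of that decomposition for one 32-element sub-block (a sketch, not the HVX kernel; the source side is assumed Q8-quantized with scale dy and quants y8):

```c
#include <stdint.h>

// sum((d*q + m) * y) = d*sum(q*y) + m*sum(y): the q*y term stays in
// integer arithmetic, and the m term needs only the reduction of y.
static float vec_dot_q4_1_ref(const uint8_t qs[16], float d, float m,
                              const int8_t y8[32], float dy) {
    int32_t sum_qy = 0; // sum(q * y8)
    int32_t sum_y8 = 0; // sum(y8), for the m term
    for (int i = 0; i < 16; i++) {
        sum_qy += (qs[i] & 0x0F) * y8[i];      // low nibbles: elements 0..15
        sum_qy += (qs[i] >> 4)   * y8[i + 16]; // high nibbles: elements 16..31
        sum_y8 += y8[i] + y8[i + 16];
    }
    // y = dy * y8, so d*sum(q*y) + m*sum(y) = d*dy*sum_qy + m*dy*sum_y8.
    return d * dy * (float)sum_qy + m * dy * (float)sum_y8;
}
```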

Co-authored-by: max-krasnyansky <1380796+max-krasnyansky@users.noreply.github.com>
@google-labs-jules
Contributor

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly afterward. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

@github-actions github-actions bot added the ggml label Feb 20, 2026
google-labs-jules bot and others added 3 commits February 20, 2026 06:07
- Define HTP_TYPE_Q4_1 and QK_Q4_1x4x2 (256)
- Add Q4_1 repacking logic (Q4_1x4x2) in ggml-hexagon.cpp
- Add Q4_1 vector dot product kernels in matmul-ops.c
- Enable Q4_1 support in HTP backend dispatch

The Q4_1x4x2 format packs 256 elements into 160 bytes:
- 128 bytes of 4-bit quants (0-15)
- 16 bytes of scales (fp16)
- 16 bytes of mins (fp16)

Kernels compute dot product as: sum((d*q + m) * y) = d*sum(q*y) + m*sum(y).

Co-authored-by: max-krasnyansky <1380796+max-krasnyansky@users.noreply.github.com>
- Define HTP_TYPE_Q4_1 (weights) and HTP_TYPE_Q8_1 (dynamic activations)
- Define QK_Q4_1x4x2 and QK_Q8_1x4x2 (256 block size)
- Add Q4_1 repacking logic (Q4_1x4x2) in ggml-hexagon.cpp
- Add Q4_1 vector dot product kernels in matmul-ops.c
- Implement dynamic Q8_1 quantization for src1 to optimize Q4_1 dot product
- Enable Q4_1 support in HTP backend dispatch

The Q4_1x4x2 format packs 256 elements into 160 bytes:
- 128 bytes of 4-bit quants (0-15)
- 16 bytes of scales (fp16)
- 16 bytes of mins (fp16)

Kernels compute dot product as: sum((d*q + m) * y) = d*sum(q*y) + m*sum(y).
The Q8_1 dynamic quantization precomputes sum(y) to make this efficient.

Co-authored-by: max-krasnyansky <1380796+max-krasnyansky@users.noreply.github.com>
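
The dynamic-quantization point is the interesting one: with a plain Q8_0 source the kernel would have to reduce sum(y) itself, whereas Q8_1 can carry that sum alongside the quants, so the m term costs a single multiply-add per sub-block. A hedged sketch of what such a quantization step could look like (layout and names are assumptions; ggml's standard block_q8_1 similarly stores s = d * sum(qs)):

```c
#include <math.h>
#include <stdint.h>

#define QK8_1 32

// Hypothetical Q8_1 block with the precomputed partial sum.
typedef struct {
    float  d;         // scale
    float  s;         // d * sum(qs), i.e. sum(y), precomputed for the m term
    int8_t qs[QK8_1]; // quants
} block_q8_1_ref;

// Quantize 32 floats and precompute sum(y) so the Q4_1 kernel can apply
// m * sum(y) without a second reduction pass.
static void quantize_q8_1_ref(const float x[QK8_1], block_q8_1_ref *b) {
    float amax = 0.0f;
    for (int i = 0; i < QK8_1; i++) amax = fmaxf(amax, fabsf(x[i]));
    const float d  = amax / 127.0f;
    const float id = (d != 0.0f) ? 1.0f / d : 0.0f;
    int32_t sum = 0;
    for (int i = 0; i < QK8_1; i++) {
        b->qs[i] = (int8_t)roundf(x[i] * id);
        sum     += b->qs[i];
    }
    b->d = d;
    b->s = d * (float)sum; // == sum over the block of d*qs[i]
}
```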