Conversation
- Add Q4_1 repacking logic (Q4_1x4x2) in ggml-hexagon.cpp
- Add Q4_1 vector dot product kernels in matmul-ops.c
- Enable Q4_1 support in HTP backend dispatch
- Update htp-msg.h with new type definitions

The Q4_1x4x2 format packs 256 elements into 160 bytes:
- 128 bytes of 4-bit quants (0-15)
- 16 bytes of scales (fp16)
- 16 bytes of mins (fp16)

Kernels compute dot product as: sum((d*q + m) * y) = d*sum(q*y) + m*sum(y).

Co-authored-by: max-krasnyansky <1380796+max-krasnyansky@users.noreply.github.com>
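For reference, a minimal C sketch of how the 160-byte Q4_1x4x2 super-block described above could be laid out. The struct name, the fp16 typedef, and the field order (quants, then scales, then mins) are assumptions for illustration; the actual definitions live in htp-msg.h.

```c
#include <stdint.h>

typedef uint16_t ggml_half;    // fp16 storage type (name assumed for this sketch)

#define QK_Q4_1x4x2 256        // elements per super-block (from the commit message)

// Hypothetical layout: 256 elements -> 160 bytes.
// 128 bytes of packed 4-bit quants + 8 fp16 scales + 8 fp16 mins
// (one scale/min pair per 32-element sub-block, as in standard Q4_1).
typedef struct {
    uint8_t   qs[QK_Q4_1x4x2 / 2]; // 128 bytes, two 4-bit quants (0..15) per byte
    ggml_half d[8];                // 16 bytes of scales
    ggml_half m[8];                // 16 bytes of mins
} block_q4_1x4x2;

_Static_assert(sizeof(block_q4_1x4x2) == 160, "Q4_1x4x2 super-block must be 160 bytes");
```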
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me.

New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
- Define HTP_TYPE_Q4_1 and QK_Q4_1x4x2 (256)
- Add Q4_1 repacking logic (Q4_1x4x2) in ggml-hexagon.cpp
- Add Q4_1 vector dot product kernels in matmul-ops.c
- Enable Q4_1 support in HTP backend dispatch

The Q4_1x4x2 format packs 256 elements into 160 bytes:
- 128 bytes of 4-bit quants (0-15)
- 16 bytes of scales (fp16)
- 16 bytes of mins (fp16)

Kernels compute dot product as: sum((d*q + m) * y) = d*sum(q*y) + m*sum(y).

Co-authored-by: max-krasnyansky <1380796+max-krasnyansky@users.noreply.github.com>
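A scalar reference for the identity above, applied to one 32-element Q4_1 sub-block against float activations. The function name, the float (rather than fp16) scale/min parameters, and the low/high-nibble element ordering are assumptions; the real kernels in matmul-ops.c operate on the packed Q4_1x4x2 layout with HVX vectors.

```c
#include <stdint.h>

// Reference dot product for one 32-element Q4_1 sub-block against float y.
// Dequantized value is d*q + m, so:
//   sum((d*q + m) * y) == d*sum(q*y) + m*sum(y)
static float vec_dot_q4_1_ref(const uint8_t qs[16], float d, float m, const float y[32]) {
    float sum_qy = 0.0f; // accumulates sum(q*y)
    float sum_y  = 0.0f; // accumulates sum(y)
    for (int i = 0; i < 16; ++i) {
        const int q_lo = qs[i] & 0x0F; // low nibble  -> element i (ordering assumed)
        const int q_hi = qs[i] >> 4;   // high nibble -> element i + 16
        sum_qy += q_lo * y[i] + q_hi * y[i + 16];
        sum_y  += y[i] + y[i + 16];
    }
    return d * sum_qy + m * sum_y;
}
```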
- Define HTP_TYPE_Q4_1 (weights) and HTP_TYPE_Q8_1 (dynamic activations)
- Define QK_Q4_1x4x2 and QK_Q8_1x4x2 (256 block size)
- Add Q4_1 repacking logic (Q4_1x4x2) in ggml-hexagon.cpp
- Add Q4_1 vector dot product kernels in matmul-ops.c
- Implement dynamic Q8_1 quantization for src1 to optimize Q4_1 dot product
- Enable Q4_1 support in HTP backend dispatch

The Q4_1x4x2 format packs 256 elements into 160 bytes:
- 128 bytes of 4-bit quants (0-15)
- 16 bytes of scales (fp16)
- 16 bytes of mins (fp16)

Kernels compute dot product as: sum((d*q + m) * y) = d*sum(q*y) + m*sum(y).
The Q8_1 dynamic quantization precomputes sum(y) to make this efficient.

Co-authored-by: max-krasnyansky <1380796+max-krasnyansky@users.noreply.github.com>
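A hedged sketch of what the dynamic Q8_1 quantization of src1 could look like for one 32-element block, with the block sum stored alongside the scale so the Q4_1 kernel can reuse it for the m*sum(y) term. The struct layout, field names, and the choice to store the raw sum(y) (rather than a form derived from the quantized values) are assumptions, not the actual HTP implementation.

```c
#include <math.h>
#include <stdint.h>

// Illustrative Q8_1-style block: 32 activations stored as int8 plus a scale
// and the precomputed block sum.
typedef struct {
    float  d;       // scale: y ~= d * q
    float  s;       // precomputed sum(y) for the block
    int8_t qs[32];  // quantized activations
} block_q8_1_ref;

static void quantize_row_q8_1_ref(const float * y, block_q8_1_ref * b) {
    float amax = 0.0f;
    float sum  = 0.0f;
    for (int i = 0; i < 32; ++i) {
        sum += y[i];
        const float a = fabsf(y[i]);
        if (a > amax) amax = a;
    }
    const float d  = amax / 127.0f;
    const float id = d != 0.0f ? 1.0f / d : 0.0f;
    for (int i = 0; i < 32; ++i) {
        b->qs[i] = (int8_t) roundf(y[i] * id);
    }
    b->d = d;
    b->s = sum; // reused by the Q4_1 kernel as the m*sum(y) term
}
```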
Implemented Q4_1 support for Hexagon backend MUL_MAT operation.
This involves:
- Define `HTP_TYPE_Q4_1` and `QK_Q4_1x4x2` (256 block size).
- Add `repack_q4_1_q4_1x4x2` to convert standard `Q4_1` blocks to the Hexagon-friendly format (see the simplified sketch below).
- Add vector dot product kernels (`vec_dot_q4_1x4x2_q8x4x2_...`) to perform the dot product with a `Q8_0` (or `Q8_0`-quantized) source.
- Update the `op_matmul` dispatch logic to handle `Q4_1`.

The implementation follows the pattern of the existing `Q4_0` support but adds handling for the minimum value `m` inherent in `Q4_1` quantization.

PR created automatically by Jules for task 10953119451036652015 started by @max-krasnyansky