onert would like to run the Llama model.
Llama has an attention block, which is defined as LlamaAttention in modeling_llama.py from HF.
I would like to merge all the opcodes (including RoPE) from LlamaAttention into one opcode (attention).
(All I need is attention.)
(I am going to add an attention op to the circle schema.)
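For context, here is a rough sketch of the computation inside LlamaAttention that I would like to collapse into the single opcode. This is PyTorch-like illustration, not the exact modeling_llama.py code: the Q/K/V/O projections, GQA, KV cache, and attention mask are omitted, and the names and shapes are only illustrative.

```python
import math
import torch
import torch.nn.functional as F

def rotate_half(x):
    # split the head dim in half and rotate: (x1, x2) -> (-x2, x1)
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rope(x, cos, sin):
    # RoPE as in HF's apply_rotary_pos_emb; lowers to Mul/Add/Neg/Concat ops
    return x * cos + rotate_half(x) * sin

def llama_attention_core(q, k, v, cos, sin):
    # q, k, v: [batch, heads, seq, head_dim]; cos, sin broadcastable to q/k
    # --- everything below is what the single `attention` opcode would cover ---
    q = apply_rope(q, cos, sin)
    k = apply_rope(k, cos, sin)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1])  # MatMul + scale
    probs = F.softmax(scores, dim=-1)                          # Softmax
    return probs @ v                                           # MatMul
```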
It looks similar to #90, but it is different:
- The Llama model does not use torch.nn.functional.scaled_dot_product_attention to implement LlamaAttention.
- scaled_dot_product_attention implements standard transformer attention; it does not include RoPE (see the comparison sketch after this list).
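To make the difference concrete: scaled_dot_product_attention only covers the core MatMul/Softmax/MatMul subgraph, so the RoPE ops would sit outside an SDPA-based pattern. A quick self-contained check of that (shapes are illustrative):

```python
import math
import torch
import torch.nn.functional as F

# Random tensors with illustrative shapes: [batch, heads, seq, head_dim]
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 8, 16, 64)
v = torch.randn(1, 8, 16, 64)

# Core attention written as explicit ops (the subgraph an SDPA pattern would match)
core = F.softmax(q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1]), dim=-1) @ v

# SDPA computes the same thing -- but nothing in it accounts for RoPE,
# which Llama applies to q and k before this point.
assert torch.allclose(core, F.scaled_dot_product_attention(q, k, v), atol=1e-5)
```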
What would be the best way to do this?