
feature: Add support for the tuple datatype #173

Open
tmdeveloper007 wants to merge 4 commits into arxlang:main from tmdeveloper007:ISSUE-32

Conversation

Contributor

tmdeveloper007 commented Feb 27, 2026

Added support for the tuple datatype to the irx compiler.

  • Tuples can now be used in source code.
  • The compiler lowers tuples to LLVM structs, emitting a constant aggregate when every element is constant and falling back to stack (alloca) storage otherwise.
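The "constants when possible" path described above can be sketched standalone with llvmlite (assumed here as the IR library the compiler uses; the values and names are illustrative, not the PR's actual code):

```python
# Hedged sketch: when every element of a tuple such as (1, 2.0) is a
# compile-time constant, the whole tuple can be folded into a single
# LLVM constant struct -- no stack allocation is required.
from llvmlite import ir

i32 = ir.IntType(32)
f64 = ir.DoubleType()

# literal_struct infers the struct type from the element constants,
# mirroring the tuple's element types in order.
const_tuple = ir.Constant.literal_struct(
    [ir.Constant(i32, 1), ir.Constant(f64, 2.0)]
)

assert isinstance(const_tuple.type, ir.LiteralStructType)
print(const_tuple)  # a constant {i32, double} aggregate
```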

@github-actions

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

  • Correctness/semantics: Non-constant path pushes an alloca pointer while the constant path pushes a first-class struct value. This pointer/value mix on result_stack is a footgun and likely to break downstream code expecting uniform rvalues. It also leaks mutability/identity for tuples that should be immutable values. Suggest loading the aggregate and pushing the value instead of the pointer:
    (L.1709)
    loaded: ir.Value = self._llvm.ir_builder.load(alloca, name="tuple.val")
    self.result_stack.append(loaded)
    Also update the docstring to reflect value semantics instead of “pushes the pointer.” (L.1651)

  • Performance: The current alloca+store path allocates in the entry block and will frequently escape (since you return the pointer), preventing mem2reg and causing avoidable stack traffic. The above change (load then push the value) makes the alloca trivially promotable (or removable), mitigating this. For an even better approach, consider building the aggregate in SSA without any alloca:
    (near this class, add a helper)
    def _build_tuple_value(self, elem_vals: list[ir.Value]) -> ir.Value:
        """Build tuple aggregate value without alloca."""
        elem_tys = [v.type for v in elem_vals]
        struct_ty = ir.LiteralStructType(elem_tys)
        agg: ir.Value = ir.Constant(struct_ty, ir.Undefined)
        for idx, v in enumerate(elem_vals):
            agg = self._llvm.ir_builder.insert_value(agg, v, idx)
        return agg

  • Subtle lifetime/identity bug potential: Allocating in the entry block gives a single storage per function invocation; if the pointer identity escapes in loops, different tuple literals across iterations alias the same address. Returning a value (above) avoids this class of bugs.
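The SSA-aggregate alternative suggested in this review can be exercised as a self-contained llvmlite snippet (module and function names are illustrative assumptions, not project code):

```python
# Hedged sketch of building a tuple with insert_value instead of
# alloca+store: the literal never touches the stack, so no pointer
# identity can escape or alias across loop iterations.
from llvmlite import ir

i32 = ir.IntType(32)
mod = ir.Module(name="tuple_demo")
fn = ir.Function(mod, ir.FunctionType(ir.VoidType(), [i32]), name="demo")
builder = ir.IRBuilder(fn.append_basic_block("entry"))

struct_ty = ir.LiteralStructType([i32, i32])
# Start from an undef aggregate and fill each field in SSA form.
agg = ir.Constant(struct_ty, ir.Undefined)
agg = builder.insert_value(agg, fn.args[0], 0)
agg = builder.insert_value(agg, ir.Constant(i32, 7), 1)
builder.ret_void()

assert "insertvalue" in str(mod)  # aggregate built in SSA
assert "alloca" not in str(mod)   # no stack slot was emitted
```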


@github-actions

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

  • Major correctness: Non-constant path returns a pointer to a stack alloca while constant path returns a first-class struct value. This representation mismatch can cause UB if the pointer escapes (e.g., returned or stored) and will likely break downstream consumers expecting a uniform value. Always return a struct value. Replace the final append with a load of the aggregate (L.1712):
    """Return struct value instead of pointer to avoid escaping stack alloca"""
    val: ir.Value = self._llvm.ir_builder.load(alloca, name="tuple.lit.val")
    self.result_stack.append(val)

  • Docstring mismatch: It currently states that the non-constant path “pushes the pointer.” Update to “pushes the value” to reflect the fix above (L.1651).

  • Performance follow-up (optional but impactful): After fixing the above, consider constructing the tuple via insertvalue instead of alloca+store+load to stay in SSA and avoid stack traffic.


@github-actions

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

  • High risk: Tuple representation changes based on constness. Constant path returns a value struct, non-constant path returns a pointer (alloca). This makes the IR type of the same AST tuple unstable, will break downstream ops/ABI, and is a latent correctness bug (e.g., nested tuples become ptr-vs-value depending on folding). Make it uniform:

    • Prefer SSA aggregate: build an aggregate value with insert_value for non-constant path instead of alloca+store (L.1683-L.1698), and keep the constant path as-is. Example helper:
      def _build_tuple_aggregate(self, values: list[ir.Value]) -> ir.Value:
          """Build SSA aggregate for tuple."""
          struct_ty = ir.LiteralStructType([v.type for v in values])
          agg = ir.Constant(struct_ty, ir.Undefined)
          for i, v in enumerate(values):
              agg = self._llvm.ir_builder.insert_value(agg, v, i)
          return agg
    • If you must keep pointer semantics, then also allocate for the all-constant case and store constants, and always push the pointer (L.1677-L.1681), to keep the type stable.
  • UB risk: Returning/passing this tuple literal by value will currently leak a pointer to stack memory (alloca in entry block) if the non-constant path is taken. Either switch to the SSA aggregate approach above, or ensure all escape paths materialize a value aggregate before leaving the function (L.1698-L.1700).

  • Insertion point clobbering: Temporarily moving the shared builder to the entry block and then back to the end of cur_bb can reorder instructions unexpectedly. Use a dedicated IRBuilder for the entry block to emit the alloca without touching the current insertion point (L.1683-L.1689):

    """Alloca at entry without clobbering insertion point"""

    entry_builder = ir.IRBuilder(entry_bb)
    alloca = entry_builder.alloca(struct_ty, name="tuple.lit")


@github-actions

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

  • Correctness: Mixed representation. Constant path returns a by-value struct, non-constant path returns a pointer. This will cause type mismatches in downstream uses (e.g., PHI, function args) and non-deterministic ABI. Suggest: after filling the alloca, load the struct and push the value to keep a consistent by-value tuple representation.

    • Change (L.1698): replace
      self.result_stack.append(alloca)
      with
      v_loaded = self._llvm.ir_builder.load(alloca, name="tuple.val")
      self.result_stack.append(v_loaded)
    • Update docstring to reflect “pushes the value” instead of pointer (L.1659).
  • Safety of builder position: Temporarily moving the global builder to the entry block and back risks inserting into a terminated block. At minimum, guard and fall back to position_before the terminator.

    • Add before position_at_end(cur_bb) (L.1688):
      if cur_bb.terminator is not None:
          self._llvm.ir_builder.position_before(cur_bb.terminator)

Alternatively, avoid mutating the current builder by allocating with a dedicated entry builder:

  • Change (L.1683):
    entry_builder = ir.IRBuilder(entry_bb)
    alloca = entry_builder.alloca(struct_ty, name="tuple.lit")
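The dedicated-entry-builder idea from this review works as a standalone llvmlite snippet (function and block names are illustrative assumptions):

```python
# Hedged sketch: a second IRBuilder pinned to the entry block emits
# the alloca, so the main builder's insertion point is never moved
# and never needs to be restored.
from llvmlite import ir

mod = ir.Module()
fn = ir.Function(mod, ir.FunctionType(ir.VoidType(), []), name="f")
entry = fn.append_basic_block("entry")
body = fn.append_basic_block("body")

builder = ir.IRBuilder(entry)
builder.branch(body)           # entry now ends in a terminator
builder.position_at_end(body)  # main builder works in 'body'

entry_builder = ir.IRBuilder(entry)
entry_builder.position_before(entry.terminator)
alloca = entry_builder.alloca(ir.IntType(32), name="tuple.lit")
builder.ret_void()

assert alloca.parent is entry  # alloca landed in the entry block
assert builder.block is body   # main builder was never repositioned
```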

@github-actions

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

  • High risk: Mixed representation. For all-constant tuples you return a first-class struct value, but for non-constant you return an alloca pointer. This will break downstream code that expects a uniform aggregate representation (e.g., GEP+load vs extractvalue). Make the result uniform. Easiest fix: always materialize an alloca in the function entry block and store elements (including the all-constant and empty cases), and push the pointer. Replace the constant fast-path and empty-tuple return with the alloca path. (L.28-L.31, L.38-L.40)
    Suggested change:
    def always_materialize_tuple_pointer(self, struct_ty: ir.Type, llvm_vals: list[ir.Value]) -> ir.AllocaInstr:
        """Always return an alloca pointer for tuple literals."""
        builder = self._llvm.ir_builder
        entry_bb = builder.function.entry_basic_block
        cur_bb = builder.block
        builder.position_at_start(entry_bb)
        alloca = builder.alloca(struct_ty, name="tuple.lit")
        builder.position_at_end(cur_bb)
        i32 = ir.IntType(32)
        for idx, v in enumerate(llvm_vals):
            field_ptr = builder.gep(alloca, [ir.Constant(i32, 0), ir.Constant(i32, idx)], inbounds=True)
            builder.store(v, field_ptr)
        return alloca

    Then:

    • For n == 0: allocate ir.LiteralStructType([]) and push the alloca instead of a constant. (L.28-L.31)
    • Remove the all-constant fast path and use the same alloca+store path. (L.38-L.40)
  • Correctness: Crash if invoked outside a function. Accessing self._llvm.ir_builder.function.entry_basic_block will fail at global/module scope. Add an explicit guard to raise a clear compile-time error. (L.43)
    Suggested change:
    def ensure_function_context(self) -> None:
        """Ensure tuple lowering with non-constant elements occurs inside a function."""
        if self._llvm.ir_builder.function is None:
            raise RuntimeError("LiteralTuple with non-constant elements must be lowered inside a function")

    Call ensure_function_context() before using entry_basic_block. (L.43)


tests/test_literal_tuple.py

  • Add an explicit check that the lowered struct is not packed to avoid silent ABI/layout bugs.

    • Suggest adding a small helper and using it in all tests:
      • Insert after HAS_LITERAL_TUPLE (L.16):
        def _assert_unpacked_literal_struct(const: ir.Constant) -> None:
            """Assert literal struct is unpacked."""
            assert isinstance(const, ir.Constant)
            assert isinstance(const.type, ir.LiteralStructType)
            assert not const.type.packed
      • Then call _assert_unpacked_literal_struct(const) after popping const in each test (L.31, L.59, L.91, L.114).
  • Assert the evaluation stack is empty after consuming the result to catch stray pushes/leaks in the visitor:

    • Insert after each pop:
      def _assert_empty_stack(visitor: LLVMLiteIRVisitor) -> None:
          """Assert translator result stack is empty after evaluation."""
          assert len(visitor.result_stack) == 0
    • Then call _assert_empty_stack(visitor) after the existing assertions (e.g., after L.32, L.65, L.95, L.116).
  • Ensure the heterogeneous case is truly f32 (not accidentally f64). Add:

    • assert const.type.elements[1] == ir.FloatType() (L.95).

@yuvimittal
Member


@tmdeveloper007, please refer to the reviewer's first note above.

@github-actions

github-actions bot commented Mar 3, 2026

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

  • Empty tuple case likely breaks: llvmlite may not accept LiteralStructType([]) and alloca of a zero-sized struct is dubious. Either reject explicitly or special-case it. Suggest rejecting for now with a clear error. (L.1675)
    def _reject_empty_tuple(self, node: astx.LiteralTuple) -> None:
        """Reject empty tuple until supported."""
        if not node.elements:
            raise RuntimeError("Empty tuples are not supported yet")

  • Inconsistent exception type on element lowering failure. Use a consistent error type (you already use RuntimeError above). (L.1670)
    def _raise_tuple_element_lowering_error(self) -> None:
        """Signal tuple element lowering failure."""
        raise RuntimeError("LiteralTuple: failed to lower an element")

  • Builder position safety: builder.block can be None even inside a function (e.g., before a block is created/positioned). Guard before using it to restore insertion point. (L.1684)
    def _ensure_insertion_block(self, builder: ir.IRBuilder) -> None:
        """Ensure the IRBuilder has a current insertion block."""
        if builder.block is None:
            raise RuntimeError("No active insertion block for tuple lowering")

  • Optional but impactful perf: if all elements are constants, consider emitting a constant literal struct and pushing it (by value) instead of forcing an alloca. This avoids stack traffic and improves mem2reg/SROA in hot loops. (L.1696)


tests/test_literal_tuple.py

  • Tests hard-require that LiteralTuple lowers to an ir.AllocaInstr. This locks in an implementation detail and will break if tuples are later represented as SSA aggregates or constants. It also forces alloca of empty structs, which may fail LLVM verification on some versions. Prefer asserting the produced LLVM type (struct shape/unpacked) rather than the specific instruction kind. Suggest relaxing the helper to work with any LLVM value, not just allocas (L.17):

    def _assert_unpacked_literal_struct(value: ir.Value) -> None:
        """Assert value is or points to an unpacked literal struct."""
        ty = getattr(value.type, "pointee", value.type)
        assert isinstance(ty, ir.LiteralStructType)
        assert not ty.packed

    And derive the struct type similarly where you currently access result.type.pointee (e.g., use ty = getattr(result.type, "pointee", result.type) before element checks) (L.53, L.83, L.117, L.142).

  • _setup_function_context uses private internals (visitor._llvm.module / visitor._llvm.ir_builder) (L.33, L.35). This is brittle and will break on internal refactors. Consider exposing a tiny public helper on the visitor for creating a dummy function or for accessing the current IRBuilder, and use that in tests.

  • The dummy functions created in tests have no terminator, which can cause verifier failures if module verification is enabled later. Add a ret void after assertions in each test to keep the module well-formed (L.54, L.89, L.122, L.145):


    visitor._llvm.ir_builder.ret_void()


@github-actions

github-actions bot commented Mar 3, 2026

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

  • Correctness: empty tuples likely produce invalid IR with a zero-field struct on some LLVM/llvmlite versions. Add an explicit guard before building the struct type. (L.1673)
    Code:
    def _reject_empty_tuple(self) -> None:
        """Reject unsupported empty tuple lowering."""
        raise NotImplementedError("LiteralTuple: empty tuples are not supported yet")

  • Correctness/Safety: you push an alloca pointer onto the stack. If a tuple literal is returned or passed by-value, this can escape stack memory and cause UB. Ensure callers materialize a by-value aggregate when needed. Provide a helper and use it at return/by-value call sites. (L.1698)
    Code:
    def materialize_tuple_value(self, tuple_ptr: ir.Value) -> ir.Value:
        """Materialize a by-value tuple aggregate from a stack pointer."""
        return self._llvm.ir_builder.load(tuple_ptr)

  • Performance: always materializing an alloca + stores for constant-only tuples can regress hot paths. Consider emitting a constant aggregate (or a shared global) when all elements are constants. (L.1668)


tests/test_literal_tuple.py

  • The tests hardcode creating an alloca for tuples. This locks in a suboptimal lowering and may block future optimizations (e.g., emitting constant aggregate values in SSA form). Consider relaxing to assert the lowered struct type rather than the allocation mechanism.

  • Using private internals visitor._llvm.module and visitor._llvm.ir_builder makes the tests brittle against internal refactors. If you keep this, at least make the helper resilient.

  • Function name collision risk if the underlying module is reused across tests. Make the dummy function name unique.
    Suggested change (L.12): add
    from uuid import uuid4
    Suggested change (L.33): replace with
    fn = ir.Function(visitor._llvm.module, fn_ty, f"test_tuple_fn{uuid4().hex}")
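The unique-name suggestion is stdlib-only and easy to verify in isolation (the helper name below is an illustrative assumption):

```python
# Hedged sketch: suffixing each dummy test function with a uuid4 hex
# string guarantees no name collision when the underlying LLVM module
# is reused across tests.
from uuid import uuid4

def unique_fn_name(prefix: str = "test_tuple_fn") -> str:
    """Return a name that is unique per call."""
    return f"{prefix}{uuid4().hex}"

a, b = unique_fn_name(), unique_fn_name()
assert a != b                        # distinct on every call
assert a.startswith("test_tuple_fn")
```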


@tmdeveloper007
Contributor Author

tmdeveloper007 commented Mar 3, 2026

  • Unified the LiteralTuple representation to always use alloca pointers.
  • Also updated the test suite with strengthened assertions.

Review required.
@yuvimittal

@github-actions

github-actions bot commented Mar 4, 2026

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

  • Correctness: result_stack.pop() can raise IndexError if an element visitor didn’t push a value (before your None check). Guard it explicitly. Suggest adding a helper and using it here.

    • Change (L.1675):
      v = self._pop_result()
    • Add helper (near L.1652):
      def _pop_result(self) -> ir.Value:
          """Pop a non-null value from result_stack or raise a clear error."""
          try:
              v = self.result_stack.pop()
          except IndexError as e:
              raise RuntimeError("LiteralTuple: missing element result on stack") from e
          if v is None:
              raise RuntimeError("LiteralTuple: failed to lower an element.")
          return v
  • Correctness/Maintainability: Mutating the shared IRBuilder’s insertion point to the entry block and then restoring with position_at_end(cur_bb) can reorder instructions or lose the original precise insertion point if it wasn’t at block end. Use a dedicated builder for the entry-block alloca instead of moving the shared one.

    • Replace (L.1691–L.1693):
      alloca = self._alloca_in_entry(struct_ty, "tuple.lit")
    • Add helper (near L.1658):
      def _alloca_in_entry(self, ty: ir.Type, name: str) -> ir.instructions.AllocaInstr:
          """Alloca in function entry block without disturbing current insertion point."""
          fn = self._llvm.ir_builder.function
          if fn is None:
              raise RuntimeError("Alloca requires a function context")
          eb = ir.IRBuilder(fn.entry_basic_block)
          eb.position_at_start(fn.entry_basic_block)
          return eb.alloca(ty, name=name)
  • Edge case: Empty tuples. ir.LiteralStructType([]) and alloca-of-empty may be rejected on some LLVM/llvmlite versions. Consider an explicit check after struct_ty creation to either handle the empty case or raise a clear, actionable error to avoid opaque verifier/assert failures. (L.1682)


tests/test_literal_tuple.py

  • Material gap: the tests only assert the allocated struct type, not that the tuple literal’s values are actually initialized. A buggy lowering that only allocates the struct (no stores/insertvalue) would still pass. Add a helper to assert there are stores into the tuple fields and call it in each test. For robustness, accept either a single store of the whole struct to the alloca or per-element stores via GEPs.

    Suggested additions:

    • Add helper (L.35):
      def _assert_tuple_initialized(visitor: LLVMLiteIRVisitor, alloca: ir.AllocaInstr, expected_elem_count: int) -> None:
          """Assert the tuple alloca receives stores into its fields."""
          bb = visitor._llvm.ir_builder.block
          store_count = 0
          for inst in bb.instructions:
              if isinstance(inst, ir.StoreInstr):
                  ptr = inst.operands[1]
                  if ptr is alloca:
                      store_count += 1
                  elif isinstance(ptr, ir.GEPInstr) and ptr.operands and ptr.operands[0] is alloca:
                      store_count += 1
          assert store_count == expected_elem_count
    • Use it in tests:
      • After asserting elements count in empty tuple (L.49): _assert_tuple_initialized(visitor, result, 0)
      • Before _assert_empty_stack in homogeneous ints (L.81): _assert_tuple_initialized(visitor, result, elem_count)
      • Before _assert_empty_stack in heterogeneous (L.111): _assert_tuple_initialized(visitor, result, elem_count)
      • Before _assert_empty_stack in single-element (L.131): _assert_tuple_initialized(visitor, result, 1)
  • Fragility/break risk: tests rely on private internals (visitor._llvm, visitor.result_stack). If these internals change, tests will break even if behavior is correct. Consider exposing a tiny public test hook on the visitor to open a function context and to fetch the last result, e.g.:

    Suggested API to production code:

    • (No line in this file)
      def begin_test_function(self) -> None:
      """Create and position the builder at a fresh entry block for testing."""
    • (No line in this file)
      def pop_result(self) -> ir.Value:
      """Pop and return the last produced value from the translation stack."""
  • Minor but correctness/perf-relevant: ensure allocas are emitted in the entry block. Add an assertion to the tests to catch regressions in insertion point:

    • Add helper (L.35):
      def _assert_alloca_in_entry(visitor: LLVMLiteIRVisitor, alloca: ir.AllocaInstr) -> None:
          """Assert alloca is placed in the current entry block."""
          assert alloca.parent is visitor._llvm.ir_builder.block
    • Call it after _assert_unpacked_literal_struct in each test (L.47, L.73, L.104, L.127).
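The store-counting idea from this review can be tried end to end with llvmlite alone (the module, function, and tuple values below are illustrative assumptions, not the project's tests):

```python
# Hedged sketch: lower a two-element tuple via alloca + GEP + store,
# then count the store instructions in the block to confirm every
# field was actually initialized.
from llvmlite import ir

i32 = ir.IntType(32)
mod = ir.Module()
fn = ir.Function(mod, ir.FunctionType(ir.VoidType(), []), name="t")
builder = ir.IRBuilder(fn.append_basic_block("entry"))

struct_ty = ir.LiteralStructType([i32, i32])
alloca = builder.alloca(struct_ty, name="tuple.lit")
for idx, val in enumerate((4, 2)):
    field = builder.gep(
        alloca, [ir.Constant(i32, 0), ir.Constant(i32, idx)], inbounds=True
    )
    builder.store(ir.Constant(i32, val), field)
builder.ret_void()

stores = [i for i in builder.block.instructions if i.opname == "store"]
assert len(stores) == 2  # one store per tuple element
```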

@github-actions

github-actions bot commented Mar 5, 2026

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

ChatGPT was not able to review the file. Error: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}

tests/test_literal_tuple.py

ChatGPT was not able to review the file. Error: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}

