
Conversation

@nathaliafab
Member

This PR integrates a new test generation tool into SMAT, with the following main changes:

  • New generator:
    The main implementation of the new Code Llama-based generator is in
    nimrod/test_suite_generation/generators/codellama_test_suite_generator.py.

  • Updated input type (targets):
    The type was changed from Dict[str, List[str]] to
    Dict[str, Union[List[Dict[str, str]], List[str]]]
    so that methods can be accompanied by a summary of their changes. An entry can therefore have the format:

    {
      "method": "methodname()",
      "leftChangesSummary": "Left does...",
      "rightChangesSummary": "Right does..."
    }

    or simply "methodname()".
    To support both formats, the existing generators (EvoSuite and Randoop) were adjusted as well (a sketch of this normalization appears after this list).

  • Updated workflow:
    Minor changes to the GitHub workflow (no impact expected on current behavior).

  • Log generation:

    • In nimrod/test_suite_generation/generators/test_suite_generator.py, I added the creation and updating of a log file containing the compilation errors of the generated tests.
    • In nimrod/test_suites_execution/test_suite_executor.py, test execution is now logged as well, making it possible to inspect individual results.
      In addition, _parse_test_results_from_output was adjusted to handle the new generator, with no impact on the others.
  • JaCoCo failure handling:
    In nimrod/output_generation/semantic_conflicts_output_generator.py, a try-except-finally block was added so that tools that do not depend on JaCoCo are not interrupted when it fails (a sketch of this guard appears after this list).

  • Environment configuration:
    The file nimrod/tests/env-config.json was updated with new fields required by the Code Llama generator (the api_url field still needs to be configured correctly).

  • Centralized logging:
    The logging system was reworked to write its output to a file instead of only to the terminal (a sketch of this setup appears after this list).
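
A minimal, illustrative sketch of how a generator might normalize both targets formats into one shape; normalize_targets is a hypothetical helper, not code from this PR:

    from typing import Dict, List, Union

    def normalize_targets(
        targets: Dict[str, Union[List[Dict[str, str]], List[str]]]
    ) -> Dict[str, List[Dict[str, str]]]:
        # Turn every item into a dict so downstream code can always read
        # "method" and the optional change summaries.
        normalized: Dict[str, List[Dict[str, str]]] = {}
        for class_name, items in targets.items():
            normalized[class_name] = []
            for item in items:
                if isinstance(item, str):
                    # Plain "methodname()" entries get empty summaries.
                    entry = {
                        "method": item,
                        "leftChangesSummary": "",
                        "rightChangesSummary": "",
                    }
                else:
                    entry = item
                normalized[class_name].append(entry)
        return normalized

Generators that only need the method name (EvoSuite, Randoop) can then read entry["method"] regardless of which format the scenario file used.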
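
A minimal sketch of the try-except-finally guard around the JaCoCo coverage step; run_jacoco_coverage and cleanup are placeholder names for the real calls in semantic_conflicts_output_generator.py:

    import logging

    def collect_coverage_safely(run_jacoco_coverage, cleanup):
        # Both arguments are placeholder callables for this sketch.
        coverage = None
        try:
            coverage = run_jacoco_coverage()
        except Exception as error:
            # A JaCoCo failure is logged but does not abort the tools
            # that do not depend on coverage data.
            logging.warning("JaCoCo coverage failed: %s", error)
        finally:
            # Cleanup always runs, whether coverage succeeded or not.
            cleanup()
        return coverage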
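
A minimal sketch of logging configured to write to a file as well as to the terminal, in the spirit of the nimrod/tests/utils.py change; the log file name is an assumption:

    import logging

    def setup_logging(log_path: str = "smat.log") -> None:
        # "smat.log" is an assumed name; the path used by SMAT may differ.
        logging.basicConfig(
            level=logging.INFO,
            format="%(asctime)s %(levelname)s %(name)s: %(message)s",
            handlers=[
                logging.StreamHandler(),        # keep terminal output
                logging.FileHandler(log_path),  # also persist output to a file
            ],
        )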


Copilot AI left a comment


Pull Request Overview

This PR integrates a new Code Llama-based test generation tool into SMAT, enabling AI-assisted test creation alongside existing tools (EvoSuite and Randoop). The implementation leverages an LLM API to generate JUnit tests based on method context, class structure, and merge conflict summaries.

Key Changes:

  • Added CodellamaTestSuiteGenerator with AST-based code parsing, prompt generation, and LLM API integration (a rough sketch of such an API call appears after this list)
  • Extended targets type from Dict[str, List[str]] to Dict[str, Union[List[Dict[str, str]], List[str]]] to support method-level change summaries
  • Enhanced logging system with file-based output and added compilation/execution result tracking
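
As a rough illustration only, a generator of this kind might call the LLM endpoint configured in env-config.json along the lines of the sketch below; the payload fields and response shape are assumptions, not the actual API used by CodellamaTestSuiteGenerator:

    import requests

    def request_test_code(api_url: str, prompt: str, timeout: int = 120) -> str:
        # Hypothetical request/response shape; adjust to the real Code Llama API.
        response = requests.post(
            api_url,
            json={"prompt": prompt, "max_tokens": 1024},
            timeout=timeout,
        )
        response.raise_for_status()
        return response.json().get("generated_text", "")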

Reviewed Changes

Copilot reviewed 18 out of 20 changed files in this pull request and generated 15 comments.

Summary per file:

File | Description
setup.py | Updated pygithub and gitpython versions
nimrod/utils.py | Added JSON load/save utility functions
nimrod/tools/codellama.py | Added Code Llama tool integration class
nimrod/tests/utils.py | Refactored logging configuration with file handler support
nimrod/tests/example/pom.xml | Updated JUnit version
nimrod/tests/env-config.json | Added Code Llama API configuration parameters
nimrod/test_suites_execution/test_suite_executor.py | Added execution result logging and Code Llama-specific test result parsing
nimrod/test_suite_generation/generators/test_suite_generator.py | Added compilation error logging functionality
nimrod/test_suite_generation/generators/randoop_test_suite_generator.py | Updated to support new targets type with dict/string items
nimrod/test_suite_generation/generators/evosuite_test_suite_generator.py | Updated to support new targets type with dict/string items
nimrod/test_suite_generation/generators/codellama_test_suite_generator.py | New generator implementation with API client, AST parsing, and prompt management
nimrod/setup_tools/tools.py | Added CODELLAMA to tools enum
nimrod/output_generation/test_suites_output_generator.py | Added targets to test suite output report
nimrod/output_generation/semantic_conflicts_output_generator.py | Added error handling for coverage execution and updated targets type handling
nimrod/output_generation/output_generator.py | Modified to append data to reports instead of overwriting
nimrod/core/merge_scenario_under_analysis.py | Updated targets type signature
nimrod/main.py | Added Code Llama generator initialization and fixed f-string
.github/workflows/main.yml | Made workflow paths repository-agnostic


nathaliafab and others added 4 commits October 30, 2025 09:59
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
…enerator.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@pauloborba requested a review from heitorado October 30, 2025 23:56
@pauloborba
Member

@nathaliafab please check the failures
