To create ComplexCodeEval, the authors sourced Java and Python samples from GitHub repositories that rely on popular third-party libraries. Each sample was annotated with metadata such as API references, docstrings, and timestamps to simulate real-world coding tasks.
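To make the setup concrete, a single annotated sample might look roughly like the following. The field names here are illustrative assumptions for clarity, not the benchmark's published schema:

```python
# Illustrative sketch of one annotated benchmark sample; field names
# are assumptions, not ComplexCodeEval's actual schema.
sample = {
    "language": "java",
    "repo": "example-org/example-repo",  # hypothetical repository
    "function_signature": "public List<User> findActiveUsers(Duration window)",
    "docstring": "Returns users active within the given time window.",
    "api_references": ["java.time.Duration", "java.util.stream.Stream"],
    "timestamp": "2023-11-02T14:31:00Z",  # commit time, used for leakage checks
    "reference_solution": "...",          # ground-truth function body
    "test_cases": ["..."],                # associated unit tests
}
```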
Ten LCMs, including StarCoder2, CodeLlama, DeepSeek-Coder, and GPT-3.5-Turbo, were evaluated on four tasks: code generation, code completion, API recommendation, and test case generation. CodeLlama-34B achieved the highest CodeBLEU score for Java code generation at 34.08, while the top F1 score for Python API recommendation was 52.24.
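For reference, CodeBLEU can be computed with the open-source `codebleu` package (`pip install codebleu`). The snippet below is a hedged sketch of scoring a single prediction, not necessarily the paper's own evaluation harness; scores like 34.08 correspond to the package's [0, 1] output scaled to a percentage:

```python
# Sketch of scoring one model output with CodeBLEU, assuming the
# open-source `codebleu` PyPI package; the paper may use its own harness.
from codebleu import calc_codebleu

reference = "public int add(int a, int b) { return a + b; }"
prediction = "public int add(int x, int y) { return x + y; }"

result = calc_codebleu(
    references=[[reference]],  # one (or more) references per sample
    predictions=[prediction],
    lang="java",
    weights=(0.25, 0.25, 0.25, 0.25),  # n-gram, weighted n-gram, syntax, dataflow
)
print(result["codebleu"])  # aggregate score in [0, 1]
```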
The researchers also measured how the amount of input context affects LCM performance. Starting from bare function signatures and docstrings, they progressively added context (e.g., dependencies and library imports) and found that full context improved average CodeBLEU scores by 70.73% for Java and 31.90% for Python.
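A minimal sketch of what assembling prompts at increasing context levels could look like is shown below. The field names (`library_imports`, `dependent_functions`) are illustrative assumptions, not the paper's exact prompt templates:

```python
# Hypothetical prompt builder with three context levels:
# "signature" < "imports" < "full".
def build_prompt(sample: dict, level: str = "signature") -> str:
    parts = [sample["docstring"], sample["function_signature"]]
    if level in ("imports", "full"):
        parts = sample["library_imports"] + parts      # prepend import statements
    if level == "full":
        parts = sample["dependent_functions"] + parts  # prepend in-repo dependencies
    return "\n".join(parts)
```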
To assess data leakage, the team compared model performance on samples created before and after each model's knowledge cut-off date. Models performed better on potentially leaked (pre-cut-off) data, with average CodeBLEU scores 1.22 points higher in Java and 3.10 points higher in Python, underscoring the importance of guarding against data leakage in evaluations.
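One way to run such a check is to partition the benchmark around a model's cut-off date using the per-sample timestamps. This is a sketch under assumed field names, with an illustrative cut-off value:

```python
# Split samples around a model's knowledge cut-off date to probe leakage;
# the cut-off below is illustrative, not any specific model's actual date.
from datetime import datetime, timezone

CUTOFF = datetime(2023, 3, 1, tzinfo=timezone.utc)  # hypothetical cut-off

def partition_by_cutoff(samples: list[dict]) -> tuple[list[dict], list[dict]]:
    """Return (possibly_leaked, leakage_free) sample lists."""
    before, after = [], []
    for s in samples:
        ts = datetime.fromisoformat(s["timestamp"].replace("Z", "+00:00"))
        (before if ts < CUTOFF else after).append(s)
    return before, after

# Evaluate the model separately on each split; a consistent gap in average
# CodeBLEU between the two suggests memorized (leaked) training data.
```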
You can learn more by reading the entire paper and exploring the ComplexCodeEval GitHub repository.