

Spotlight Poster

Learning Performance-Improving Code Edits

Alexander Shypula · Aman Madaan · Yimeng Zeng · Uri Alon · Jacob Gardner · Yiming Yang · Milad Hashemi · Graham Neubig · Parthasarathy Ranganathan · Osbert Bastani · Amir Yazdanbakhsh

Halle B
Thu 9 May 1:45 a.m. PDT — 3:45 a.m. PDT

Abstract:

With the waning of Moore's law, optimizing program performance has become a major focus of software research. However, high-level optimizations such as API and algorithm changes remain elusive due to the difficulty of understanding the semantics of code. Simultaneously, pretrained large language models (LLMs) have demonstrated strong capabilities at solving a wide range of programming tasks. To that end, we introduce a framework for adapting LLMs to high-level program optimization. First, we curate a dataset of performance-improving edits made by human programmers, comprising over 77,000 pairs of competitive C++ programming submissions accompanied by extensive unit tests. A major challenge is the significant variability of measuring performance on commodity hardware, which can lead to spurious "improvements". To isolate and reliably evaluate the impact of program optimizations, we design an environment based on the gem5 full-system simulator, the de facto simulator used in academia and industry. Next, we propose a broad range of adaptation strategies for code optimization; for prompting, these include retrieval-based few-shot prompting and chain-of-thought, and for finetuning, these include performance-conditioned generation and synthetic data augmentation based on self-play. A combination of these techniques achieves an average speedup of 5.65 times with CodeLlama-13B and 6.86 times with GPT-3.5, surpassing the best human performance (4.06 times). We find that our proposed performance-conditioned generation is particularly effective at improving performance as well as increasing the fraction of optimized programs.
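To illustrate the performance-conditioned generation strategy mentioned in the abstract, the sketch below shows one plausible way to condition a model on a target speedup: training pairs are tagged with the speedup achieved by the human edit, and inference always requests the highest tag. The tag format, bin thresholds, and helper names here are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of performance-conditioned prompting.
# Tag names, bin edges, and helpers are hypothetical, chosen for clarity.

def speedup_tag(speedup: float) -> str:
    """Map a measured speedup onto a coarse performance tag."""
    if speedup >= 4.0:
        return "<perf: top-10%>"
    if speedup >= 2.0:
        return "<perf: top-25%>"
    return "<perf: baseline>"

def make_training_example(slow_code: str, fast_code: str, speedup: float) -> str:
    # During finetuning, the speedup achieved by the human edit is exposed
    # as a conditioning tag placed before the slow/fast program pair.
    return (
        f"{speedup_tag(speedup)}\n"
        f"// slower version:\n{slow_code}\n"
        f"// optimized version:\n{fast_code}"
    )

def make_inference_prompt(slow_code: str) -> str:
    # At inference time, always request the highest performance tag so the
    # model is steered toward its most aggressive optimizations.
    return (
        "<perf: top-10%>\n"
        f"// slower version:\n{slow_code}\n"
        "// optimized version:\n"
    )
```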
