Debugging code is a perennial headache for software developers, but scientists have announced a new technique that could make the process significantly easier.
Developed at MIT’s Computer Science and Artificial Intelligence Laboratory and the University of Maryland, the method essentially bridges the gap between the traditional technique of symbolic execution and modern software, making it possible to debug code far more efficiently.
Symbolic execution is a software-analysis technique that can be used to locate and repair bugs automatically by tracing out every path a program might take during execution. The problem is that the technique doesn’t tend to work well with applications written using today’s programming frameworks.
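To make the idea concrete, here is a minimal sketch of path exploration in the spirit of symbolic execution. It is purely illustrative and not Pasket's implementation: the "program" is a toy decision tree of comparisons on a single integer input, and path constraints are tracked as a simple interval rather than fed to a real constraint solver.

```python
# A minimal sketch of symbolic execution, assuming a toy program represented
# as a nested decision tree of comparisons on one symbolic integer x.
# Each node is ("if", (op, const), then_branch, else_branch), ("bug",), or ("ok",).

def explore(node, lo, hi, path, found):
    """Walk every feasible path, narrowing the interval [lo, hi] for x."""
    if lo > hi:
        return  # infeasible path: the accumulated constraints contradict
    kind = node[0]
    if kind == "bug":
        found.append((path, (lo, hi)))  # any x in [lo, hi] triggers the bug
        return
    if kind == "ok":
        return
    _, (op, c), then_b, else_b = node
    if op == ">":
        explore(then_b, max(lo, c + 1), hi, path + [f"x > {c}"], found)
        explore(else_b, lo, min(hi, c), path + [f"x <= {c}"], found)
    elif op == "<":
        explore(then_b, lo, min(hi, c - 1), path + [f"x < {c}"], found)
        explore(else_b, max(lo, c), hi, path + [f"x >= {c}"], found)

# Toy program: the bug is reachable only when 10 < x < 20.
program = ("if", (">", 10),
           ("if", ("<", 20), ("bug",), ("ok",)),
           ("ok",))

bugs = []
explore(program, -10**9, 10**9, [], bugs)
for path, (lo, hi) in bugs:
    print("bug reachable via", path, "e.g. x =", lo)
# -> bug reachable via ['x > 10', 'x < 20'] e.g. x = 11
```

Real symbolic-execution engines replace the interval with arbitrary logical constraints handed to an SMT solver, but the exhaustive path enumeration shown here is exactly what becomes prohibitive once framework code multiplies the number of paths.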
That’s because modern applications generally import functions from those frameworks, which include huge libraries of frequently reused code. Analyzing just the application itself might not be a problem, but the process becomes prohibitively time-consuming if the analyzer also has to evaluate every possible instruction for, say, adding a button to a window, including the position of the button on the screen, its movement when a user scrolls up and down, the way it changes appearance when it’s pressed, and so on.
“Forty years ago, if you wanted to write a program, you went in, you wrote the code, and basically all the code you wrote was the code that executed,” said Armando Solar-Lezama, an associate professor at MIT, whose group led the work. “Today, you go and bring in these huge frameworks and these huge pieces of functionality that you then glue together, and you write a little code to get them to interact with each other. If you don’t understand what that big framework is doing, you’re not even going to know where your program is going to start executing.”
To get around the problem, computer scientists often go through a time-consuming and error-prone process of creating models of the imported libraries that describe their interactions with new programs but don’t require their code to be evaluated line by line. In the new study, presented last week at the International Conference on Software Engineering, the researchers created a system that constructs those models automatically.
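The kind of hand-written model described above can be sketched as follows. This is an illustrative example only, assuming a hypothetical GUI framework whose real Button class handles rendering, layout, and event dispatch; the model keeps just the observable contract the application depends on, so the analyzer never has to evaluate the framework's own code. All names here are invented for illustration, not Pasket's output.

```python
# A minimal sketch of a hand-written framework model (names are hypothetical).
# The real framework Button would draw itself, track geometry, and run an
# event loop; the model preserves only click-handler registration and dispatch.

class ButtonModel:
    """Stand-in for a framework Button: no drawing, no geometry, just the
    behavior that application code can actually observe."""

    def __init__(self, label):
        self.label = label
        self._on_click = None

    def set_on_click(self, callback):
        # The application registers its handler exactly as it would
        # against the real framework class.
        self._on_click = callback

    def click(self):
        # An analyzer can drive a "click" without ever stepping through
        # the framework's rendering or event-dispatch code.
        if self._on_click is not None:
            self._on_click(self)

# Usage: application code glues in the model as it would the real class.
clicks = []
btn = ButtonModel("Save")
btn.set_on_click(lambda b: clicks.append(b.label))
btn.click()
print(clicks)  # -> ['Save']
```

Writing such models by hand for an entire framework is the time-consuming, error-prone step the researchers' system is designed to automate.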
Dubbed Pasket, the system produced promising results.
“The scalability of Pasket is impressive — in a few minutes, it synthesized nearly 2,700 lines of code,” said Rajiv Gupta, a professor of computer science and engineering at the University of California at Riverside. “Moreover, the generated models compare favorably with manually created ones.”