The Case for Evolvable Software
Summary
Stephanie Forrest believes in applying principles from evolutionary biology to the software process: creating evolvable software through automated bug repair, improving code, and creating new combinations of existing functionality.
Bio
Stephanie Forrest is Professor and Chairman of Computer Science at the University of New Mexico and a Research Professor at the Santa Fe Institute, where she is an external faculty member. Before that, she was a Director's Fellow at the Center for Nonlinear Studies, Los Alamos National Laboratory. She received her Ph.D. in Computer and Communication Sciences from the University of Michigan.
About the conference
SPLASH's mission is to engage software innovators from all walks of life -- developers, academics, and undeclared -- in conversations about bettering software. Bettering software involves new ideas about programming languages, tools, conceptual models, and methodologies that can cope with, evolve, and leverage the complex software-intensive socio-technical system of systems that has emerged in front of our eyes during the past decades. Bettering software requires a deep understanding of the nature of these systems -- an understanding that rides on the trends of the moment but goes well beyond them. These are the topics of SPLASH.
Community comments
measure of robustness flawed.
by Steven Soroka
Just because changing random parts of a program doesn't cause 5-10 tests to fail doesn't mean you haven't crippled the code in some way; this doesn't imply robustness. It's more than likely you just don't have a complete enough suite of tests to notice the change.
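Steven's objection is, in effect, the premise of mutation testing: a mutant that survives the test suite points to missing tests rather than to robust code. A minimal Python sketch of the idea (the leap_year example is invented here, not taken from the talk):

import unittest

def leap_year(year):
    # Gregorian rule: divisible by 4, except centuries not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def leap_year_mutant(year):
    # A "random" edit an automated tool might produce: the century
    # exception has been dropped.
    return year % 4 == 0

class WeakSuite(unittest.TestCase):
    # All three tests pass for the mutant as well as the original,
    # because none of them exercises a year like 1900. The surviving
    # mutant signals an incomplete suite, not robust code.
    def test_common_leap(self):
        self.assertTrue(leap_year_mutant(2012))

    def test_common_non_leap(self):
        self.assertFalse(leap_year_mutant(2013))

    def test_divisible_by_400(self):
        self.assertTrue(leap_year_mutant(2000))

if __name__ == "__main__":
    unittest.main()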
Re: measure of robustness flawed.
by Melle Koning
Well,
If production code evolves like this, you probably mean 'not readable' by 'crippled'. Correct?
The question is whether that matters. As long as you can understand the written tests and are able to write new test cases, knowing exactly what to change in the production code might be less relevant; the computer might be able to figure that out.
Cheers,
Re: measure of robustness flawed.
by Hans-Peter Störr
No, I suppose he means crippled in the sense of broken, i.e. the code has new bugs the tests did not check for. If you really start to make effectively random code changes that are not reviewed by humans, you would need 100% test coverage of the whole codebase -- that is, thousands of tests. Unfortunately, I have yet to see a project with that kind of test coverage. :-}
Re: measure of robustness flawed.
by Melle Koning
Hi Hans-Peter,
I understand what you mean. Now, just for the sake of following the proposal of evolving software: suppose the job of a software engineer were to add new tests only and -not- touch the production code anymore.
Would the software engineer be able to steer the evolution of the production code with newly written unit tests? Could we add new features by writing unit tests that do not pass (yet!)?
Personally, I think we can't, because when we add a new test, we are never sure it isn't a 180 on a previous test written by somebody else. In other words, evolving software would then never be able to pass all tests... simply because new ones could be the total opposite of existing tests.
Cheers,
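Melle's worry can be made concrete: two independently written tests can pin the same input to different expected outputs, so no evolved program could ever satisfy both. A hypothetical sketch (round_half is an invented example, not from the discussion):

import unittest

def round_half(x):
    # One candidate the evolutionary process might settle on;
    # Python 3's built-in round() uses banker's rounding.
    return round(x)

class AliceTests(unittest.TestCase):
    def test_rounds_half_up(self):
        # Alice expects arithmetic "round half up" -- this test fails.
        self.assertEqual(round_half(2.5), 3)

class BobTests(unittest.TestCase):
    def test_rounds_half_to_even(self):
        # Bob expects banker's rounding -- this test passes. No single
        # implementation can satisfy both tests for the input 2.5.
        self.assertEqual(round_half(2.5), 2)

if __name__ == "__main__":
    unittest.main()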
Re: measure of robustness flawed.
by Hans-Peter Störr
What you are describing is actually a well-known practice for writing high-quality software called test-driven development. First you write (so far failing) tests for new or modified functionality, and then adapt the code until the tests pass. You are right: if you have many tests, you will usually have to fix bugs in the tests as well.
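For readers unfamiliar with the practice, here is a minimal red-then-green sketch (slugify is a made-up example, not from this discussion): the tests are written first and fail, then just enough code is added to make them pass.

import unittest

def slugify(title):
    # Step 2 ("green"): the minimal implementation, written only after
    # the tests below existed and failed.
    return title.strip().lower().replace(" ", "-")

class SlugifyTests(unittest.TestCase):
    # Step 1 ("red"): written first, against a slugify that did not
    # yet exist.
    def test_basic(self):
        self.assertEqual(slugify("Evolvable Software"), "evolvable-software")

    def test_strips_whitespace(self):
        self.assertEqual(slugify("  Hello World "), "hello-world")

if __name__ == "__main__":
    unittest.main()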
When writing software, you usually have two somewhat orthogonal "safety nets" to ensure correctness: first, the developer thinks carefully about the code when changing it, and second, one writes tests to verify the whole thing. If you remove the first safety net by evolving the software automatically, you need to strengthen the second net considerably. It is not clear to me whether this actually saves human effort in the end.
Perhaps there are special areas where it is very easy to write and maintain many tests, or where there is some kind of "training area" for the evolving system, or where bugs are inconsequential and can be fixed automatically right away. Maybe the evolving system serves as a special kind of machine learning.