Survey Question for My FOSD Workshop Keynote

Here is a note that I emailed to colleagues. The responses to this note are posted below; I quoted selectively from them in my presentation. That I did not cite everyone should not be taken as disagreement -- I simply ran out of time. As you can see, all of the comments were carefully and thoughtfully raised: they all deserve our attention. Please note that I trimmed personal greetings from the responses; the technical core of the messages remains.

I have been invited to give a keynote at the 1st Workshop on Feature Oriented Software Development (FOSD) in October in conjunction with MODELS, GPCE, and SLE 2009. In this regard, I am contacting people who I believe have made significant contributions to the areas of software product lines, feature modeling, feature modularity, and/or feature interactions. And it is in this connection that I ask the following:

What do you think are the top 2 or 3 research or industrial problems that are central to this area? For each problem that you identify, please write a few (3-5) sentences explaining it and its importance.

With your answers, and those of others, I hope to present the current state of the art in FOSD and its future.

I look forward to hearing from you.

Sven Apel

1. How can we ensure that our feature implementations (e.g., feature
modules) behave as expected both in isolation and in all possible
combinations? The former case is important since FOSD can unfold its
full potential only if we can type check, verify, and compile feature
implementations in isolation. The latter case is important since two
feature implementations that behave correctly in isolation may lead to
undesired behavior in combination (feature interaction problem).

2. How can we conclude from the correctness of individual feature
implementations (and additional information such as the feature model
and behavioral specifications) that *all* their valid combinations are
correct too? The background is that the number of feature combinations
grows, in the worst case, exponentially with the number of features, so
we cannot type check and verify all combinations individually.

3. How can we automatically generate efficient software products from a
set of feature implementations based on domain knowledge and their
non-functional properties (e.g., effect on memory consumption)? The
question is how to gather, represent, and process domain knowledge and
information on non-functional properties properly in order to support
automated generation and optimization of software products. Answering
this question is a necessary step toward the vision of automatic
programming.

DeLesley Hutchins

(1) How do we type-check or otherwise verify features and feature
compositions? The answer to this question has two parts: (a) How
do we perform modular type-checking of features (i.e. compile
and verify features individually, before composition), and (b)
How do we verify compositions of features (i.e. ensure that the
requirements of each feature are satisfied, and there are no
conflicts among them)?

(2) What is the best way to implement feature composition and
linking? I envision a solution much like the Java class loader,
in which features are compiled separately, and then loaded and
linked at run-time. This has many advantages over source-level
code generation, but will require a solution to (1) above.
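One way to picture the run-time composition and linking Hutchins envisions is with mixin composition. The sketch below uses Python mixins rather than a Java-class-loader mechanism, and all feature and method names are invented for illustration:

```python
# Features as separately defined units, composed and linked at run time.
# This is a Python mixin sketch, not the Java-class-loader design
# Hutchins envisions, but it shows run-time composition of features.

class Base:
    def render(self):
        return "page"

class Compress:           # feature: wraps the result of render()
    def render(self):
        return f"gzip({super().render()})"

class Encrypt:            # feature: another optional refinement
    def render(self):
        return f"aes({super().render()})"

def compose(*features):
    """Link selected features into one product class at run time."""
    # Later arguments sit closer to Base in the method resolution order.
    return type("Product", tuple(features) + (Base,), {})

product_cls = compose(Encrypt, Compress)
print(product_cls().render())  # aes(gzip(page))
```

Because composition happens via `type()` at run time, feature selection can be deferred until load time, which is the analogy to class loading; what this sketch does not give is the modular type checking that point (1) asks for.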

(3) How do features interact with different kinds of artifacts,
e.g. Java, C++, Makefiles, XML, etc.? We would like to have a
uniform model of feature composition that works over all
artifacts, which is why tools like AHEAD that operate on text
files and directories are appealing. However, at the same time,
we would like to have modular type checking and separate
compilation of features for certain specific languages, like
Java, instead of a generic tool which pre-processes the source
files. We need some way of integrating both artifact-specific
and artifact-neutral tools in a coherent manner.

Peter Sestoft

I still believe that one central problem is:

How to ensure consistency of feature selections and correctness of
generated software in a way that supports evolution and maintenance

I presented this as the "library specialization problem" at the 2006
WG2.11 meeting in Portland, OR -- a few slides from my presentation
and from the discussion are attached.

There are many proposals for feature-oriented programming, but as
far as we know, none supports all of the following: (1) static
checking that feature combinations make sense, (2) static checking
that generated code will be type correct and have no dangling names,
(3) generation of test suites specific to the chosen features, and
(4) all of this in a manner that supports evolution and maintenance
of the software and all its features.

Or, in other words:

- Presumably some form of annotation of the source code is needed for
the tool to operate. Such annotation should not preclude
maintenance and evolution of the library source code.

- The consistency of such source code annotations should be checkable
prior to any user choices being made. For instance, if a field is
left out, it must be used nowhere in the specialized code; and if
the field is included, all uses must be consistent with its type.

- The library user's selection of properties should be checked for
consistency, so that the generated code compiles and does not fail
at run time.

- It should be possible to specialize (or cut down) the unit test
cases to suit a particular specialization of the library.
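The annotation-consistency check described in the bullets above can be sketched syntactically: a use of a feature-guarded field is safe only if the code containing the use is itself guarded by at least the features that guard the field's declaration. Field and feature names below are hypothetical:

```python
# Sketch of the annotation-consistency check: every use of a
# feature-guarded field must be guarded by (at least) the features
# that guard the field's declaration, so that leaving a feature out
# can never leave a dangling name in the specialized code.

# field -> features that must be selected for the field to exist
declarations = {
    "cache_size": {"Cache"},
    "key":        {"Encrypt"},
}

# (field, features guarding the code that uses it)
uses = [
    ("cache_size", {"Cache"}),            # ok: guard matches
    ("key",        {"Encrypt", "Net"}),   # ok: stronger guard
    ("key",        {"Log"}),              # dangling if Encrypt is off
]

def dangling_uses(declarations, uses):
    """Return uses that may refer to a field excluded by some choice."""
    return [(field, guard) for field, guard in uses
            if not declarations[field] <= guard]

print(dangling_uses(declarations, uses))  # [('key', {'Log'})]
```

This check runs prior to any user choice being made, which is Sestoft's second bullet; it is a conservative syntactic approximation, since a real tool would also consult the feature model's constraints.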

As you know, there are tools that support some of these desiderata
well. But until all are supported well, I think industry will be less
likely to use feature-oriented programming, except for very stable
domains where the software does not need to evolve.

William Cook

1) Feature oriented programming for models
2) FOP that can be adopted by industry (what's missing?)
3) Better/deeper overall theory of FOP?

Mark Grechanik

Problem 1.

When programmers develop, maintain, and evolve software to satisfy new requirements, they instinctively sense that there are existing features, implemented by other developers, that are relevant to these requirements. These features could be reused if found; however, three main problems inhibit effective software reuse: rudimentary and unsophisticated source code search engines, the lack of support for selecting retrieved code snippets from relevant applications, and the abstraction gap between low-level implementations of these code fragments and the pertinent high-level requirements that are given to developers.

Moreover, source code repositories are polluted with poorly functioning projects with ambiguous, inconsistent, incomplete, or even absent documentation. State-of-the-art code search engines, such as Google Code Search, simply match words from search queries to the names of identifiers or words in comments in the source code of projects, and these matches provide no guarantee that retrieved code snippets implement concepts or features that are described in requirements.

Even if relevant features are located, developers face another daunting task of moving these features into their own applications, since these features may exhibit completely different behavior in the contexts of different applications. Finally, synthesizing new code by composing selected code fragments that represent features requires sophisticated reasoning about the behavior of these fragments and of the resulting code. The result of this process is overwhelming complexity, a steep learning curve, and a significant cost of building customized software.

Problem 2

Features added, modified, or removed by the development organization are erroneously reported as faults by the test center.
It is critical for the development organization and the test center to stay synchronized with respect to the evolution of the functionality of the software to be tested. In response to modifications in functionality, test cases must similarly be updated. Unfortunately, testers are not always made aware of such modifications. As a result, testers frequently report faults that in reality represent actual, desired functionality on the part of the development organization. Especially during the rapid delivery of new revisions of the system in response to previously found faults, new functionality might be added that must now be tested, existing functionality may be modified and need different tests, or existing functionality may be dropped altogether and should no longer be tested for. Sometimes, indeed, quite drastic shifts in functionality happen, even at the last minute, when it is not uncommon to pull an entire feature out because it is too unstable.

Stan Jarzabek

A problem I am familiar with is how to manage the complex impact of
features on reusable (PLA) components. (I know, nothing new -- many have
observed this; the best papers describing these experiences from industry
that I came across are by Nokia people at ICSE 1997 and by Jan Bosch in
SPLC 2004.)

1. the impact of features spreads through many components

2. One feature may affect many components, at many variation points

3. One component may be affected by many features, at many variation
points
4. Explosion of look-alike component versions (wait a sec - who
discovered this? Wasn't it Don and Ted?)

5. Tracing and managing feature impact during reuse and evolution is
complex, manual, and time-consuming

6. Feature dependencies multiply difficulties

7. Same functionality implemented in variant forms, in many components

8. Complex, hidden dependencies among reusable components

Under the above circumstances, reuse and evolution are mostly manual and laborious.

Component technologies and architectural approaches to software reuse do
not provide means to manage features that have complex and multi-grained
impact on reusable components.

Many additional variation mechanisms are adopted, which is not easy to
manage.

Of course not all the reuse situations have the above characteristics.

Roberto Lopez-Herrejon

1) Lack of sufficient industry validation. Even though the problems that
FOSD has been applied to have been increasing in size, most of them are
still in the realm of academia. This, I believe, hinders credibility
outside academia, where I think FOSD should ultimately end up being
applied.

2) FOSD is still a technology not a methodology. We have an assortment of
theories and tools (technology) but not yet a cohesive way to approach SPL
development from requirements gathering to deployment (methodology).

Salva Trujillo

To my mind, there are several problems. I am starting to become aware of the "gap" between research problems and industrial problems. For instance, feature identification may be regarded from a research perspective as something easy and solved long ago. However, from a practical perspective the difficulty lies in eliciting features from people who are not used to product lines, or even to the idea of a whole family of programs. I am saying this from my own viewpoint and IMHO -- my 2 cents.


The complexity of systems is growing. I see it myself. A gas boiler, a wind turbine system, and an elevator system are not the same. However, in embedded systems functionality is shifting from hardware to software, and hence the need to deal with complexity and size. Design and models are one way to handle it. Domain Specific Languages are intended in a similar way. The same applies to model transformations.

Economies of Scale in customization

The whole product line is based on the premise that economies of scale can be achieved in the long term. However, this depends on the initial investment and how it pays off. More elaborate mechanisms and more foreseen variability imply a higher cost, which is probably only worthwhile in very large and very complex systems, where customization is demanded (and WELL paid for) by customers.

Dave Thomas

As it happened I was just reading your recent talks<g>. I'm interested in learning more about your work
and agree with you that metaprogramming is an important future direction.

The major problem I come across is feature interaction: whether due to the feature itself or to the state of the code base, features interact with each other at the specification or implementation level.

The other concern is that the entire concept of a feature is vague, and there are no tools or practices to help move wishes to features.

Mehmet Aksit

I have been consulting recently for several companies that aim at developing product lines and/or domain-driven architecture design methodologies.

One of the companies is a consumer electronics company developing TV sets (not Philips).

The other one is an aerospace company.

Based on these experiences, I am trying to set up an EU project, which targets the following problem definition:

" Many software development methods have been developed so far. Each
method possibly adopts different strategies and techniques, such as
model-driven engineering, product-line engineering, architecture-centered
design, use-case-driven design, waterfall and cyclic processes, and agile
software development. Regardless of the method(s) adopted, various
assumptions are made throughout the software development process. First
of all, the marketing department and requirements engineers make certain
assumptions about future demands, customer expectations, and the revenue
of the product. In the early phases of software development, architects
and software engineers make assumptions about the realization and
operational phase of the product as well. Each of these assumptions may
turn out to be false or only partially justified: (i) marketing
estimations may prove (partially) untrue in the operational phase when
the product is deployed; (ii) the end-user view can be overlooked for
some use cases during the requirements engineering process -- eventually,
customers may consider certain features undesired, or even faulty
behavior, even though the software functions according to its
specifications; (iii) the performance (or other quality attribute
measures) may turn out to be quite different from architecture-level and
detailed-design-level estimations when the software is deployed at the
customer site; (iv) despite software verification and testing efforts,
unexpected failures may appear and the software may not show the desired
behavior.
In current practice, each software development life-cycle phase is
optimized within its own context. Traceability links are mostly
syntactic, while the information flow among the phases is largely
informal. However, it is essential for companies to optimize software
development as a whole, from marketing to the operational phase, rather
than improving quality only within each individual phase. It is therefore
necessary to couple the software development life-cycle phases in a more
systematic way and to consider the whole life-cycle rather than
optimizing each phase separately. This is of crucial importance in
particular for highly dependable systems, since critical assumptions are
made throughout the life-cycle of the system. The design process of
dependable systems is especially based on strong assumptions about how
the system actually works and how it is being used (e.g. failure rates,
failure criticality, safety risks)."

Yannis Smaragdakis

I believe that whatever happens in this area won't really be a solid
advance until it is integrated in mainstream tools and programming
languages in a high-level way (i.e., not macros, or obscure
meta-programming, but a change in paradigm with full language support).
This is why I work on what I work on, namely cJ (for configuring classes
using type conditionals) and MorphJ (for morphing a class in the shape
of what it is composed with). In this work I emphasize modular type
safety, so that configurable components have a meaning on their own,
without having to consider the entire program they participate in.

Jeff Gray

One of the things that has influenced my thought over the summer relates to
several bank and insurance company mergers here in Birmingham (we used to
have several corporate headquarters here, but now down to just a handful due
to consolidation). As you may imagine, such mergers have a gripping
effect on the resulting IT infrastructure, such that chaos
ensues. From that perspective, the comments below are focused on the
potential for feature-oriented ideas to be applied to "systems of systems"
or legacy applications that may be forced to communicate with each other
without any planned design:

- Recovering the feature description and interaction within a
single legacy system: There has been work on using feature models to
understand and comprehend software architecture of existing legacy
applications (Pashov and Riebisch, among others), but there seems to be a
lot more research needed to mature the process. The key question is how to
recover the features and their constraints from existing legacy
applications, rather than using such diagrams and models from a first-effort
Greenfield attempt. In the context of "systems of systems" and integration
of IT systems from mergers, understanding the underlying features in a more
formal and analyzable way would seem to offer great benefit.

- Composition of applications from the same domain and the
subsequent conflicts among features and constraints: After the individual
feature models are understood for a collection of legacy applications in the
same domain, how can their merger and integration be achieved? There are
sure to be similar features in both systems - how are those merged or
removed for a single definition? There are also sure to be many constraint
violations when such applications are brought together. In a very large
organization, it could be utter chaos to manually analyze the constraint
violations and to determine resolution measures. Some science behind the
composition, supported by extensions to the ideas I see in AHEAD, could be
useful here.

- Dynamic adaptation of features among a federated collection of
services: This idea is similar to one expressed by Klaus recently in his
response to you. As there seems to be a push toward more dynamic and
adaptive systems, how do systems discover the features of other systems
that emerge in their purview? How are those features discovered,
understood in terms of their semantics, and composed to bring about new
capability in a dynamic manner that was perhaps not anticipated in the
static design? This idea has some similarity to the discovery of web
services and other such interfaces. I think that a formalization of the
semantics of features would help support discovery among systems that
have complementary assets and functionality (the whole is greater than
the sum of the parts).

I guess the summary of the theme of the above is how feature-oriented ideas
can assist in the merging of large systems to support both static and
dynamic integration.

I think these represent top-priority industrial problems that could benefit
from more research into the area.

Betty Cheng

1. Need better support for modeling of FOS and AOS -- there's
still no consensus or even basic agreement on how to model
cross-cutting concerns/features for software-based systems.
(along with the modeling, is the need for model transformations
that can be applied in the context of MDE from these models)

2. Assurance? how do we deal with property analysis, model-based
testing, dynamic analysis?

3. what is the role of FOS with the emerging area of DAS -- dynamically
adaptive systems (including autonomic systems). These systems
are being increasingly used to manage systems that need to be
long (continuous) running, encountering adverse environmental
conditions (e.g., network outages, sensor failures, etc.). Given the
complexity of DAS, feature-oriented software development is going to be
a necessity. All of the above problems already apply to nonadaptive
systems; how do we begin to deal with those challenges in the face
of dynamic adaptation at run time?

4. Finally, it would be helpful to identify some "killer apps" for FOS/AOS
-- it would help the rest of the community better understand and appreciate
the value of FOS/AOS.

Jan Bosch

As you know, I changed from academia to industry 5
years ago and although I still publish actively, I am living in the reality
where beautiful theories are murdered by gangs of ugly facts, to paraphrase
a quote from, I think, Robert Glass. From that perspective, let me share
three thoughts or research directions that I believe warrant significant
research investment.

1. Product line adoption
Both at Nokia and at Intuit, I have been up close and personal with the
difficulties of introducing a new product line and associated approach in an
organization that already has several existing platforms in place, with each
platform having a proprietary way of working associated with it, including
processes, resource allocation, feature selection, deployment approaches,
etc. Each of these "ways of working" has evolved organically and was
arrived at through a process of experimentation that included failures and
setbacks. Understandably, the organization is very reluctant to let go of
proven and tried methods in favor of approaches that offer great benefits on
paper, but are, at least in the context of the company, purely theoretical.
Although several of us, including yours truly, have published articles on
product line adoption, the practicalities remain staggeringly complicated.

2. End-to-end perspective
Software product lines represent the first successful approach to
intra-organizational reuse in four decades (IMHO), due in large part to
the broad perspective they took, taking business and business strategy,
architecture and technology, process and tools, and organizational
concerns into account. Over the last years, however, I have observed a
trend where SPL research increasingly focuses on narrower and, from my
now industry-influenced perspective, arcane problems. We
have lost the end-to-end perspective in two important areas, I believe:

2.1 business - technology alignment
The business side of the house and the technology/product development side
of the house tend to be at odds with each other in many companies that I
have worked with or for. Research into improving the alignment and enabling
decision making that optimally incorporates both sides is valuable.

2.2 core vs. context decisions
In several cases I have seen product line initiatives where the shared
software assets are not domain specific, highly value adding components that
codify the core of the company's added value, but rather generic, domain
independent components that could and in fact should be sourced from the
outside instead of developed in house. This still happens because (1)
several years ago, the functionality captured by the components was core for
the company, even though it has moved on since then, and (2) engineers are
engineers and like to build stuff instead of integrating it. We need tools
that objectively provide insight into the value of shared asset investments.

3. Software ecosystems
To make the 3rd point, I attached a paper. I believe the next step for
software product lines is beyond the borders of the company into the realm
of software ecosystems. Here in the valley, I see a host of companies
adopting that approach, including Facebook, Apple, Intuit, Google,
SalesForce, etc.
Several of the research challenges that apply to SPLs become reinforced in
the world of software ecosystems and I believe that, over the coming years,
we'll see SPLs being morphed into software ecosystem platforms and we need
the research to make sense of that new world.

Christian Kaestner

* Multidimensional separation of concerns (or, more strictly, the problem
of modularizing multiple optional features, including their interactions)
is still unsolved. There are several approaches (Hyper/J, aspects,
Origami, effective views, ...) but none of them seems practical or
scalable. Convincing examples on a larger scale are still missing. Is it
a language problem, a design problem, or a tool problem? Is modularity a
realistic goal at all?

* Safety & testing is a crucial issue: with a product line, we
essentially develop millions of potential products in parallel; can we
also test all of them in parallel? Can we scale product lines and
automated generation to a level where we do not have to test every
individual product but can ensure certain properties for the entire SPL?
Can we automatically detect feature interactions? Although there is
already a lot of work on product-line testing, much still needs to be
done, and specifications and formal methods in particular will play a
crucial role.
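One common answer to "we cannot test every product" is combinatorial sampling. The sketch below greedily builds a pairwise (2-wise) sample of configurations; it ignores feature-model constraints, which a real SPL sampler would have to respect, and its brute-force candidate search is suitable only for tiny feature sets:

```python
from itertools import combinations, product

# Pairwise (2-wise) sampling: instead of testing all 2^n products,
# pick a small set of configurations covering every value combination
# of every feature pair. A naive greedy sketch for illustration only.

def uncovered_pairs(features, configs):
    """All (feature, feature, value, value) combos not yet covered."""
    pairs = {(f, g, vf, vg)
             for f, g in combinations(features, 2)
             for vf, vg in product([False, True], repeat=2)}
    for cfg in configs:
        for f, g in combinations(features, 2):
            pairs.discard((f, g, cfg[f], cfg[g]))
    return pairs

def greedy_pairwise(features):
    """Repeatedly add the configuration covering the most new pairs."""
    configs = []
    while uncovered_pairs(features, configs):
        best, best_gain = None, -1
        for bits in product([False, True], repeat=len(features)):
            cfg = dict(zip(features, bits))
            gain = len(uncovered_pairs(features, configs)) - \
                   len(uncovered_pairs(features, configs + [cfg]))
            if gain > best_gain:
                best, best_gain = cfg, gain
        configs.append(best)
    return configs

sample = greedy_pairwise(["A", "B", "C", "D"])
print(len(sample))  # far fewer than the 2^4 = 16 possible products
```

The point of the sketch is the testing economics Kaestner raises: every pair of feature decisions is exercised by a sample much smaller than the full product space, at the cost of only guaranteeing detection of interactions involving at most two features.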

* Mainstream programming languages offer little explicit support for
implementing variability. Variability mechanisms are generally not well
supported in tools (IDEs, visualization techniques, type checkers, ...).
Here the important questions are: how can we convey variability to the
user? Can tools help developers understand the impact of variable
implementations, e.g., how features interact? What tool support can be
provided for product-line implementations?

Carlo Ghezzi

From a research standpoint (motivated by emerging practical
scenarios), I believe that most of the challenges arise from the fall
of the strict boundary wall between development time and run time.
Assumptions made at development time might be invalidated at run time
(for example, think of performance issues due to unanticipated user
profiles). Applications will need to adapt and evolve at run time,
while offering service. This will imply a serious rethinking of
traditional software engineering methods and practices, which are based
on the assumption that such a strict boundary exists. For example:

- verification and validation need to extend to run time. Monitored
behaviors will need to be checked against the expected behaviors.
- models, traditionally viewed as development time entities, will need
to be kept at run time, to support the necessary verification.
- detected changes and deviations with respect to the expected
behavior will need to trigger automatic adaptation at the architecture
and implementation level. This means that a feedback loop must be
established from run time observations back to changes in the software.

Klaus Ostermann

everybody thinks that the problems he or she is working on are the
"most important in research or industry", so let me just say
something about the problems I find most interesting :-)

I think a major open problem in FOP and related approaches is
the reconciliation with modularity. This may sound contradictory, since
the goal of FOP is to increase modularity, but on the other hand FOP
is also often anti-modular in that it presupposes a global view of the
software or software domain. For example, feature models are typically
monolithic and not decomposed into independent building blocks with a
clear interface.

If we take the terminology of FOP, I envision a form of software
development where feature models can be decomposed in multiple ways
(parallel, hierarchical) while still retaining the modularity of the
feature implementations themselves.

A second issue: I believe that FOP should be unified with
domain-specific languages. A feature model is the same thing as a DSL,
and an instance of a feature model is the same thing as a program
written in a DSL.

A third issue: I believe that, whatever technology is used for FOP, it
must be an integral part of the underlying programming language, rather
than being implemented via external toolchains.

Patrick Heymans

Here is my biased hit parade (including contributions from my team).
1. First, a very general but crucial one: tighten the links between industry and academia. In the end, we need to deliver techniques that make practitioners work better (easier, quicker, safer...). Collaborative projects will help researchers better understand the needs of practitioners, evaluate and improve research ideas and results, and help practitioners realize the capabilities of feature-based development techniques. Empirical questions that could be asked are: Do practitioners use feature modeling languages in the development of software product lines? If not, why not: what are the obstacles to using feature modeling languages in practice? If they do, what sorts of languages do they use, and what limitations are they encountering in using those languages? My experience so far has shown that two important problems are scalability and linking feature models to other (e.g. UML) models. See below.
2. Making feature models more scalable (when editing, reading, or analysing them, or when using them for configuration). This mainly involves developing and evaluating appropriate techniques that improve modularity and separation of concerns. A key issue is to provide formal semantics both locally (intra-module/concern semantics) and globally (inter-module/concern semantics).
3. Improving the integration of feature models with other kinds of models such as UML (class, activity, state...) models to automate model-driven PL engineering. This involves formalizing the links between those notations, i.e., providing integrated formal syntax and semantics, and defining syntactic and semantic analyses and automating them. Currently, research on this topic has focused too much on syntactic issues.

Paul Gruenbacher

here are two problems we are facing in our ongoing industry collaborations:

Product line evolution -- new customer requirements, technology changes, and internal enhancements lead to the continuous evolution of a product line. Maintenance and evolution are particularly critical due to the longevity of many systems. Evolution support becomes success-critical in model-based development to ensure consistency after changes to meta-models, models, and actual product line artifacts. Product line engineering should thus treat evolution as the normal case and not as the exception.

Structuring the modeling space -- no matter which modeling approach is followed, developing a single model of a product line is practically infeasible due to the size and complexity of today's systems. The high number of features and components in real-world systems means that modelers need strategies and mechanisms to organize the modeling space. A particular challenge lies in understanding and modeling the dependencies among multiple related product lines.

Andy Schurr

We are mainly interested in testing SPLs
and my impression is that we are just starting to understand how
to generate test cases systematically, how to define metrics for
SPLs etc.

Srinivas Nedunuri

1. How can we be sure of (or at least aware of) the unintended
consequences of adding a feature? In regular software development,
someone looks at the original code, thinks about what the code for the
feature would look like, and, if they are reasonably smart, can get some
idea of what the consequences of the change would be (though not always
-- think of concurrent software development). With feature development,
the internals of the feature are opaque. At most there will be something
that says which variables are affected. This is really not sufficient.
Something akin to a spec for a feature is needed. The problem with that
is that each feature will use its own terminology (and may even be
written in a different language). What one feature calls empId might
just be an alias for employeeId, etc.
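A minimal version of the "spec for a feature" idea might record only the variables each feature reads and writes, normalized through an alias map like the empId/employeeId example above. All feature and variable names below are hypothetical:

```python
# Sketch: give each feature a minimal "spec" (the variables it reads
# and writes) and normalize vocabulary through an alias map, so that
# unintended consequences of composing features can be flagged early.

ALIASES = {"empId": "employeeId"}  # one feature's name for another's variable

def normalize(names):
    """Map every variable name to its canonical form."""
    return {ALIASES.get(n, n) for n in names}

features = {
    "Payroll":  {"reads": {"employeeId", "salary"}, "writes": {"salary"}},
    "Auditing": {"reads": {"empId"},                "writes": {"salary"}},
}

def conflicts(features):
    """Pairs of features that write the same (normalized) variable."""
    out = []
    names = sorted(features)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = normalize(features[a]["writes"]) & \
                     normalize(features[b]["writes"])
            if shared:
                out.append((a, b, shared))
    return out

print(conflicts(features))  # [('Auditing', 'Payroll', {'salary'})]
```

Such read/write sets are far weaker than a behavioral specification, but even this level of declared interface would expose the variable-level interactions that the text notes are currently opaque.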

2. For feature-driven development to be practical there must be a library
of at least tens of thousands of features -- perhaps even hundreds of
thousands. How would developers even begin to know where to start looking
for what they have in mind? They may have some idea of the functionality
they want, but no idea of how the library designer may have sliced that
functionality into features -- presumably there is no one way. When they
do eventually find something that looks close, how can they tell how
close it really is?
3. Feature modification - having found a feature that's reasonably close to
what I'm trying to do, how do I modify it or wrap it? Am I back to custom
code again?

Markus Voelter

here are some of my thoughts:
* Integrating Feature Models and other DSLs
   There's a big gap between the conceptual and tooling
   space for describing variability with feature models
   and with DSLs. It is very hard to describe a system
   using a mix of feature models and DSLs. I cannot embed
   a "DSL program" in a feature model in order to provide
   detailed configuration for a feature. It is not
   mainstream to be able to express variability on "DSL
   programs". Combining the two in a practically useful
   way would help me quite a bit.
* Scaling Down PLE.
   PLE is still considered heavy-weight, un-agile, and
   BDUF (big design up front). Many potentially interesting
   users are turned off because they think they will first
   have to plan for four years before writing any code and
   creating a product. So, making PLE align better with
   agile methods etc. would be a very helpful contribution.

Jean-Marc Jézéquel

1) Convergence of the domains of Dynamically Adaptive Systems and
Dynamic SPLs. Klaus already mentioned it, and I believe working on this
convergence can be very interesting. For more details see our ICSE'09
paper on "Taming Dynamically Adaptive Systems Using Models and Aspects".

2) behavioral semantics issues in composing SPL features
We have done some initial work using HMSCs as a testbed, but much
remains to be done.

3) Handling variants of extra-functional features (such as security,
performance, etc.) when they cut across the rest of the SPL.
Clearly aspects have a role to play there. However, it is now clear that
AOP (as in AspectJ) is not up to the task, but we should not throw the
baby out with the bath water. We still believe that AO modeling and AO
design could have a role to play there.

Ulrich Eisenecker

There are several interesting SPL topics in research as well as in
industry. Just let me pick some that are of special interest to me.

1. Economics of software product lines.
How can and should systems developed on the basis of an SPL be priced,
licensed, and billed? Depending on the lifecycle of the SPL, it is
conceivable that pricing is based on the value features provide for a
customer. Furthermore, it is conceivable that some selected features are
provided and/or licensed differently. E.g., very infrequently used
features, perhaps 3D product visualization in a web shop, could reside as
a web service on a remote server, licensed and priced on a per-use basis
and billed/paid via micropayment. Thus, licensing and billing even become
an additional source of variability in an SPL.

2. Automating the creation of non-code artifacts in SPLs
Currently, the creation of most code-related artifacts in an SPL is
automated. But there are also many non-code artifacts related to a
product, most notably user manuals, but also technical references,
training material, etc. If a highly customer-specific product can be
created based on an SPL, it is highly desirable that it also be
accompanied by product-specific documentation and not by a manual
covering many variants, most of which the user's product does not have.
The effort required for creating such non-code artifacts is high, if not
prohibitive. It is thus very desirable to achieve progress in this area,
serving customers better and further reducing the cost and time for
creating product-specific documentation.

3. Integration of software product lines
Some complex products can and should be built on the basis of several
SPLs. E.g., an e-learning system comprises the functionality of discussion
forums, web space, video conferencing, chat, editing and presenting
learning material, editing, configuring, and executing tests, learner
administration, etc. Components realizing these pieces of functionality
could also be created on the basis of an SPL. Which methods support a
proper modularization and focusing of these sub-domains? How should the
relations and dependencies among these sub-domains be modeled? What is a
sound technical basis for integrating the configuration/creation of
complex products based on multiple product lines?

George Heineman

For me the sticking point is properly testing the many product line
members one can create. About ten years ago I was describing my research
on component adaptation to my brother, who was a product manager at
Fidelity. After hearing my ideas he stopped me short by saying that he
had no room for technology that would dynamically change the way the code
executed. If the code couldn't be tested thoroughly, he would not allow it
to go into production.

The challenge has to be how to properly specify individual product line
members. Consider two product line members with different features:

P1 = F1 * F2 * F3
P2 = F1 * F3 * F4

There are numerous technologies for ensuring the features are functionally
operative in the two product line members, but the deeper challenge is to
understand how to specify the interaction of features within the product
line members. It could be, for example, that F1 * F3 compose together
beautifully EXCEPT when in the presence of F4.

Testing depends on the ability to state IN ADVANCE the expected
output or behavior so a test case can confirm the actual output. It is
unlikely that poor humans would be able to properly specify these
interactions, which is why testing may be focused solely on the
accumulated test cases that simply ensure that feature Fi, when present,
is operating properly.

Wouldn't it be great to have the ability to generate all possible
combinations of features and pass these configurations into a tool that
would identify 'detectable interactions' and generate appropriate test
cases to ensure proper behavior was being managed? I feel nervous about
trusting any auto-generated test cases, but I keep coming across papers
whose titles suggest they have techniques to generate test cases, so
perhaps this is not as far-fetched as one might think.
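
The enumeration half of such a tool is easy to sketch as a thought experiment. Everything below is hypothetical: a tiny feature set, a validity predicate standing in for the feature model, and a hand-written table of suspect combinations modeling "F1 * F3 compose beautifully EXCEPT in the presence of F4". The sketch enumerates valid configurations and emits an interaction-test obligation whenever a suspect combination is present:

```python
from itertools import combinations

FEATURES = ["F1", "F2", "F3", "F4"]


def valid(config):
    """Stand-in for a feature-model check; here: F1 is mandatory."""
    return "F1" in config


# Known suspect combinations (hypothetical interaction table):
# F1 and F3 interact badly only in the presence of F4.
SUSPECT = [frozenset({"F1", "F3", "F4"})]


def test_obligations():
    """Enumerate valid configurations and flag suspect interactions."""
    obligations = []
    for r in range(1, len(FEATURES) + 1):
        for combo in combinations(FEATURES, r):
            config = frozenset(combo)
            if not valid(config):
                continue
            for suspect in SUSPECT:
                if suspect <= config:  # suspect combo fully present
                    obligations.append((config, suspect))
    return obligations


for config, suspect in test_obligations():
    print(sorted(config), "needs an interaction test for", sorted(suspect))
```

The hard part, of course, is exactly what the message says: populating the interaction table (or detecting interactions) automatically rather than by hand, and generating oracles for the flagged configurations.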

Ewen Denney

As an outsider to this area, what I would be interested in is
understanding the connections between the feature-oriented paradigm and
other approaches to modeling and code generation.

For example, even within product lines we have decision-oriented models.
Are these equivalent to, or subsumed by, feature models?

Egon Boerger

The first problem in importance seems to me to be the following one: (further) develop (and provide tool support for) PRACTICAL means for linking in a coherent and rigorous (thus verifiable) manner the different life-cycle phases of software product line development - in such a way that these links are easily adaptable to changes that may happen at any level in the lifecycle. This is a refinement issue and you guess correctly that I believe the ASM method could be helpful here as a means to bring together in a single framework declarative (logical or functional) and operational techniques. You know that by practical I mean usable by the software engineer (not a formal methods specialist) in his daily work, including the verification (documentation or justification of correctness) part.

The second problem I see is an old one, which I believe is largely still with us, despite a lot of research that has been done, and is related to the first one: namely to define more powerful rigorous interface description techniques which a) are not limited to purely functional descriptions, but include means to abstractly describe underlying assumptions or dependencies on state features, and b) can be manipulated algorithmically for checking purposes (run-time verification).

In both cases I believe the problem is a research and also an industrial problem, if we are looking for software that is not only running, but also well documented (so that changes can be made with ease and reliably by other people than the original designers) and inspectable for major properties of interest.

John McGregor

One issue that I did not see anyone else raise yet is the
configuration/change management practice.

Sharing assets among multiple products is a formidable task. It is our
experience that most new product line organizations underestimate the
effort required for this activity. Many SPLs dissolve into a tangle of
branches and merges. We are working to develop a discipline in this area.
Obviously the exact form of the SPL and its methods determine the CM
practices required.