WICSA 7 Workshop: Empirical Assessment in Software Architecture
Monday, 18 February, 2008, 9:00-12:30
Workshop Theme and Motivation
Software engineering researchers and practitioners increasingly emphasize the importance of gathering and disseminating empirical evidence: such evidence helps researchers assess current work and identify promising areas of research, and it helps practitioners make informed decisions when selecting a method, technique, or tool to support a particular software development activity. In practice, however, relatively little effort has been spent on gathering and using empirical evidence to support the claims of efficacy made for the methods, techniques, and tools proposed for developing software. To improve this situation, there is growing recognition of the need for community-based forums that debate the value of comparatively evaluating these technologies using evidence-based approaches. The evidence-based paradigm provides an objective and structured means of assembling and analysing the available data in order to answer research questions.
The aim of this workshop is to debate the importance, benefits, and limitations of rigorously assessing software architecture research outcomes using methods and approaches from the evidence-based paradigm. The workshop will give attendees an opportunity to critically discuss the suitability of different assessment mechanisms, techniques, and methods for the software architecture discipline.
Questions to stimulate discussion:
The workshop participants are expected to propose and debate several questions related to the assessment of software architecture technologies. Some of the questions to be discussed are:
- How are the software architecture technologies evaluated?
- How should software architecture research outcomes be assessed to support technology transfer?
- What are the most appropriate mechanisms and methods to assess and compare software architecture design and evaluation technologies (methods, techniques, and tools)?
- What is the role of empirical methods for software architecture research and practice?
- How to empirically assess the usability and usefulness of software architecture technologies (e.g., architectural description languages) in industrial settings and within cost bounds?
- How to support the quality assessment of software architecture technologies during the different phases of the software lifecycle?
- To what extent should software architects and project managers rely on existing software metrics and traditional quality indicators?
- What can and should be done to facilitate empirical studies of software architecture?
How do you use empirical assessment in software architecture?
Participants are asked to contribute a concrete example of how they use empirical assessment in practice.
- Systematic review of SA literature for discussions on / definitions of 'Architectural Knowledge' (Remco C. de Boer, Rik Farenhorst)
Abstract: The software architecture community puts more and more emphasis on ‘architectural knowledge’. However, there appears to be no commonly accepted definition of what architectural knowledge entails, which makes it a fuzzy concept. In order to obtain a better understanding of how different authors view ‘architectural knowledge’, we have conducted a systematic review to examine how architectural knowledge is defined and how the different definitions in use are related. From this review it became clear that many authors do not provide a concrete definition of what they think architectural knowledge entails. What is more intriguing, though, is that those who do give a definition seem to agree that architectural knowledge spans from problem domain through decision making to solution; an agreement that is not obvious from the definitions themselves, but which is only brought to light after careful systematic comparison of the different studies.
- Analysis of SA documentation, looking for patterns (Neil Harrison, Paris Avgeriou)
Abstract: We have evaluated architectures based on whatever architecture documentation is available, looking for architecture patterns. We have been able to identify the patterns used from the structural diagrams. From the patterns, we have identified strengths and weaknesses of the proposed or actual solutions with respect to fulfilling the quality requirements of the system. A related observation is that architecture documentation rarely follows any prescribed documentation methodology (4+1, Bass, etc.). People seem to capture what makes sense to them, ignoring documentation methodologies.
Participants
- Davide Falessi
- Rik Farenhorst
- Remco de Boer
- Neil Harrison
- Hasan Sozer
- Bedir Tekinerdogan
- Byron Williams
- Eltjo Poort
- Pieter Botman
Presentation Titles and Presenters
The workshop featured five invited talks:
- Conducting Systematic Literature Review for Defining Architectural Knowledge
- Remco de Boer
- An Experimental Approach to Evaluating Architectural Knowledge Management Tools
- Paris Avgeriou
- Empirical Study of Architectural Influence on Requirements Decisions
- Nazim Madhavji
- On Difficulties in Doing Empirical Research in Software Architecture
- Davide Falessi
- Empirical Research for Assessing Software Architecture Change Categories
- Byron Williams
Inspired by the presentations, the workshop participants identified a list of interesting topics in this area:
- general issues
- what type of evidence do we need? positive vs. negative evidence?
- to what extent is empirical research on SA different? more qualitative? more dependent on stakeholders? in need of historical data?
- what type of (historical and evolution) data do we need?
- what (standard) set of metrics are needed?
- appropriate methodologies/criteria for assessing SA research outcomes
- how to convince decision makers in industry?
- benefits/limitations of empirical approaches for assessing SA research
- what education and training are we missing for “empirical researchers”?
- specific issues
- systematic reviews: what are the challenges? what domain knowledge is needed to set up the protocol? what evidence do we need: quantitative or qualitative?
- industrial experiments: to what extent does tool support help architecting work? how to design such experiments?
- how to define and measure effects of SA on requirements decisions? what are the (relevant) architectural 'aspects'?
- what types of industrial projects, and how to characterize them?
- can we identify standard characterization schemas for empirical SA assessments (e.g. for SA changes)?
- overall aim
- can we aim at a reference model of data and metrics?
The participants decided to split up in three parallel working teams, as follows:
TEAM RED (Special Characteristics of Empirical Studies on SA)
The team focused its discussion on what is special about software architecture with respect to empirical studies. We came up with the following characteristics of software architecture that may play a role in designing and conducting empirical studies:
- It has a subjective nature (it is in the eye of the beholder). Examples of this subjectivity include:
- Documentation is directed towards the stakeholders that are supposed to consume it.
- It is difficult to measure the impact of SWA.
- The number and type of issues found in an architecture review
- It is abstract
- A model of the system we are building and not the system itself
- It is strongly human-oriented.
- Other software engineering fields are also human-oriented (e.g. requirements engineering) but in a different way
- The products of architecting are quite diverse
- E.g. Decisions, models, new requirements
- The process of architecting is a highly creative, ad-hoc, unstructured activity
- The number and diversity of stakeholders directly involved and interested in the process and the product.
- It attempts to bridge the problem and the solution space
- It has high impact on the quality attributes
Note that the stakeholders of the architecture are not the same as the stakeholders of the empirical research.
- The latter are mostly managers, architects, and other researchers.
Subject of study
The subject of empirical research in SWA can be one of the following:
- An architecting process
- An architecting product (i.e. a specific software architecture)
- An architecting tool
- The field of SWA (e.g. through a literature review)
TEAM PURPLE (Evidential Support for Decision Making)
TEAM GREEN (Reference Framework of Methods+Metrics+Data)
Team GREEN started with the general question: "what reference methods, metrics, data do we need?"
To this end, we roughly followed the Goal-Question-Metric approach.
The goal we would ultimately like to achieve by our empirical assessment efforts is: Improve the success of systems by architectural means.
Discussing the meaning of success, we arrived at the subgoals to improve the following (a sketch of this decomposition follows the list):
- success of projects (a project succeeds if and only if it fulfills its requirements, implicit or explicit)
- success of systems (broad, system life cycle)
- success of architects/stakeholders (e.g. domain knowledge, competences)
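To make the Goal-Question-Metric decomposition concrete, the following is a minimal sketch, in Python, of how such a goal tree might be represented. The class names and the example questions and metrics are illustrative assumptions of ours, not outcomes agreed by the team.

```python
# Hypothetical sketch of a Goal-Question-Metric tree; names and example
# metrics are illustrative assumptions, not agreed workshop outcomes.
from dataclasses import dataclass, field


@dataclass
class Metric:
    name: str
    unit: str


@dataclass
class Question:
    text: str
    metrics: list[Metric] = field(default_factory=list)


@dataclass
class Goal:
    statement: str
    questions: list[Question] = field(default_factory=list)


goal = Goal(
    statement="Improve the success of systems by architectural means",
    questions=[
        Question(
            "What quality attributes have been achieved in delivered systems?",
            metrics=[Metric("post-release defect density", "defects/KLOC")],
        ),
        Question(
            "What did realising the systems cost?",
            metrics=[Metric("total cost of ownership", "currency units")],
        ),
    ],
)

# Walk the tree: each question refines the goal, each metric answers a question.
for question in goal.questions:
    print(question.text, "->", [m.name for m in question.metrics])
```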
In order to be able to ask the right questions, we first need to establish a mental model of the systems, their architectures and other related aspects, like the projects they were realized in, the context they are running in (systems of systems? organizations?), etc.
It is important to make a distinction here between the attributes and requirements on the System S and the attributes and requirements on the system's Architecture A. Architecture here is seen as an abstract entity with its own attributes, things like the quality of the architectural description, its complexity, whether or not it was evaluated, etc. A's attributes are related to S's attributes, but they are distinct. In the same way, A has its own stakeholders, which may be different from S's stakeholders. One very important attribute of the Architecture is the extent to which the system fulfills its explicit and implicit requirements.
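One way to make this mental model tangible is to treat S and A as two related but distinct entities, each with its own attributes and stakeholders. The sketch below illustrates that separation; all field names are illustrative assumptions, not an agreed schema.

```python
# Illustrative data model for the System-versus-Architecture distinction;
# every field name here is an assumption made for illustration only.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Architecture:
    """A: an abstract entity with attributes of its own."""
    description_quality: str        # quality of the architectural description
    complexity: str
    evaluated: bool                 # was the architecture evaluated?
    requirements_fulfilment: float  # extent to which S fulfills its requirements
    stakeholders: list[str] = field(default_factory=list)  # may differ from S's


@dataclass
class System:
    """S: the system itself, with attributes related to, but distinct from, A's."""
    quality_attributes: dict[str, float] = field(default_factory=dict)
    stakeholders: list[str] = field(default_factory=list)
    architecture: Optional[Architecture] = None  # A is related to S, yet distinct
```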
Seen in this light, the goal set above leads to questions like the following:
- what quality attributes have been achieved in systems, and how do they relate to the quality attributes of the system's architectures and architectural descriptions?
- how were these systems realised (e.g. project aspects), what is the cost of developing the systems, what is their total cost of ownership?
- what are the critical attributes of the architecture that lead to a system's success?
Other points that came up:
- the questions (and the metrics) must be independent of the "current" definition of SA: e.g. a question like "was a three-tier architecture used?" is too volatile - the questions need to be on a more stable level of abstraction
- in current reference viewpoint models we are missing viewpoints that capture the architecting process, e.g. what support stakeholders need (e.g. AK sharing, communication, decision making), and a link to the 'product' viewpoints
- we may address different 'levels' of architecture: system, reference, general.
- metrics must be applied across the whole SLC of SA (across time) ==> what dimensions/viewpoints?
- object of measurement
- SA/product (relative versus absolute quality definition; is the latter feasible at all?)
- projects (project management goals): e.g. time (again!!), budget (IT portfolio?)
- tool support?
- metrics can be (a sketch of these two perspectives follows this list):
- architecture-centric (how good is the SA "as is"?)
- treating the architecture as instrumental (does the SA support the requirements? again, external or internal requirements)
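As an illustration of the difference between the two perspectives, the hypothetical functions below score an architecture each way. Both function names and their inputs are assumptions for illustration, not metrics proposed by the team.

```python
# Two hypothetical metric perspectives; inputs and formulas are assumptions.

def architecture_centric_score(pattern_violations: int, modules: int) -> float:
    """Architecture-centric: how good is the SA 'as is' (structural cleanliness)?"""
    return 1.0 - pattern_violations / max(modules, 1)


def instrumental_score(requirements_met: int, requirements_total: int) -> float:
    """Instrumental: how well does the SA support the system's requirements?"""
    return requirements_met / max(requirements_total, 1)


print(architecture_centric_score(pattern_violations=3, modules=20))    # 0.85
print(instrumental_score(requirements_met=18, requirements_total=24))  # 0.75
```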
Some Useful References
- V.R. Basili, R.W. Selby, and D.H. Hutchens, Experimentation in Software Engineering, IEEE Transactions on Software Engineering, 1986. 12(7): pp. 733-743.
- B.A. Kitchenham, et al., Preliminary guidelines for empirical research in software engineering, IEEE Transactions on Software Engineering, 2002. 28(8): pp. 721-734.
- A. Jedlitschka and D. Pfahl, Reporting Guidelines for Controlled Experiments in Software Engineering, Proceedings of the International Symposium on Empirical Software Engineering, 2005.
- B. Kitchenham, et al., Evaluating guidelines for reporting empirical software engineering studies, Empirical Software Engineering, 2008. 13(1): pp. 219-221.
- C.B. Seaman, Qualitative methods in empirical studies of software engineering, IEEE Transactions on Software Engineering, 1999. 25(4): pp. 557-572.
- B. Kitchenham and S. Charters, Guidelines for Performing Systematic Literature Reviews in Software Engineering, Tech Report EBSE-2007-1, Keele University, UK, 2007.
- B. Kitchenham and S.L. Pfleeger, Personal Opinion Surveys, in F. Shull, J. Singer & D. Sjøberg (Eds.), Guide to Advanced Empirical Software Engineering, Springer-Verlag, 2008, pp. 63-92.
- D. Perry, S.E. Sim, and S. Easterbrook, Case Studies for Software Engineers, Proc. of the 26th Int'l Conference on Software Engineering, 2004.
- B. Kitchenham, L. Pickard, and S.L. Pfleeger, Case Studies for Method and Tool Evaluation, IEEE Software, 1995. 12(4): pp. 52-62.
- J. Singer and N.G. Vinson, Ethical Issues in Empirical Studies of Software Engineering, IEEE Transactions on Software Engineering, 2002. 28(12): pp. 1171-1180.
- T.C. Lethbridge, S.E. Sim, and J. Singer, Studying Software Engineers: Data Collection Techniques for Software Field Studies, Empirical Software Engineering, 2005. 10: pp. 311-341.
- P. Brereton, et al., Lessons from applying the systematic literature review process within the software engineering domain, Journal of Systems and Software, 2007. 80: pp. 571-583.
- D. Sjøberg, T. Dybå and M. Jørgensen, The Future of Empirical Methods in Software Engineering Research, 29th International Conference on Software Engineering (ICSE'07), Minneapolis, Minnesota, USA, 20-26 May, Future of Software Engineering (FoSE’07), in L. Briand and A. Wolf (Eds.), IEEE Computer Society Press, 2007, pp. 358-378.
- T. Dybå, B. Kitchenham and M. Jørgensen, Evidence-based Software Engineering for Practitioners, IEEE Software, 2005, 22(1): pp. 58-65.
- B. Kitchenham and S.L. Pfleeger, Principles of Survey Research, Parts 1 to 6, Software Engineering Notes, 2001-2002.
- F. Shull, J. Singer and D. Sjøberg (Eds.), Guide to Advanced Empirical Software Engineering, Springer-Verlag, London, 2008.
- C. Wohlin, et al., Experimentation in Software Engineering: An Introduction, Kluwer Academic Publishers, 2000.
- B.J. Oates, Researching Information Systems and Computing, Sage Publications, London, 2006.