Model Driven Architecture – Test Methods and Tools

This thesis (“Model Driven Architecture – Test Methods and Tools”) describes methods and tools available for testing products developed with Model Driven Architecture (MDA) frameworks. The purpose of the research presented in the thesis is to identify methods and tools suitable for testing products developed in an MDA-compatible way in an industrial setting. To this end, a literature study and a case study were conducted at Ericsson. The case study shows that important selection criteria exist from both an MDA perspective and Ericsson’s own perspective. Based on these criteria, a set of tools was evaluated; the evaluation found Pathfinder PathMATE, used in conjunction with Rational Software Architect, to be the most appropriate tools for Ericsson when testing its MDA application.


Authors: Renas Reda, Yusuf Tözmal

Source: Blekinge Institute of Technology

Contents

1 INTRODUCTION
1.1 BACKGROUND
1.2 AIM
1.3 OBJECTIVES
1.4 RESEARCH CONTEXT
1.5 LIMITATIONS
1.6 RESEARCH METHODOLOGY
1.7 RESEARCH QUESTIONS
1.8 EXPECTED OUTCOMES
1.9 THESIS OUTLINE
2 MODEL DRIVEN ARCHITECTURE
2.1 UNIFIED MODELING LANGUAGE
2.1.1 Executable UML
2.1.2 UML profile
2.2 MDA MODELS
2.2.1 Computation Independent Model
2.2.2 Platform Independent Model
2.2.3 Platform Specific Model
2.3 TRANSFORMATIONS
2.4 METAMODEL
2.4.1 OMG’s 4-layer metamodeling architecture
2.4.1.1 Layer M0: Instances (User data)
2.4.1.2 Layer M1: The Model (User concepts)
2.4.1.3 Layer M2: Metamodel (UML concepts)
2.4.1.4 Layer M3: Meta-metamodel (Meta Object Facility)
2.4.2 Meta Object Facility (MOF)
2.5 EXISTING SOFTWARE DEVELOPMENT PROCESS MODELS
2.5.1 The waterfall model
2.5.2 Rational Unified Process
2.5.3 Extreme Programming
2.6 THE BENEFITS OF MDA
3 TESTING AT ERICSSON
3.1 TEST TOOLS
3.2 INTEGRATED TESTING PLATFORM
3.3 IMPORTANCE OF TTCN-3 AT ERICSSON
4 TESTING METHODS
4.1 TEST CASE GENERATION FROM STATECHART DIAGRAMS
4.1.1 Usage of the method
4.2 TEST CASE GENERATION FROM REQUIREMENTS
4.2.1 Usage of the method
4.3 MODEL SYNTAX VERIFICATION
4.3.1 Usage of the method
4.4 COMPARISON OF GENERATED AND EXPECTED SCENARIOS
4.4.1 Usage of the method
4.5 TEST CASES DEFINED WITH SEQUENCE DIAGRAMS
4.5.1 Usage of the method
4.6 TEST CASES DEFINED WITH UML 2 TESTING PROFILE
4.6.1 Usage of the method
4.7 TEST CASES GENERATED FROM UML MODELS AND C++ CODE
4.7.1 Usage of the method
4.8 TEST CASES GENERATED FROM USE CASE AND SEQUENCE DIAGRAMS
4.8.1 Usage of the method
4.9 TEST CASES GENERATED FROM ACTIVITY DIAGRAMS
4.9.1 Usage of the method
4.10 VISUAL RUNTIME ANALYSIS
4.10.1 Usage of the method
4.11 CODE-BASED TEST METHOD
4.11.1 Usage of the method
5 TOOLS
5.1 CONFORMIQ
5.2 COMPUWARE OPTIMALJ
5.3 COW SUITE
5.4 ECLIPSE TPTP
5.5 I-LOGIX RHAPSODY
5.6 PATHFINDER PATHMATE
5.7 RATIONAL ROSE REALTIME
5.8 RATIONAL SOFTWARE ARCHITECT
5.9 TELELOGIC TAU G2
5.10 TITAN
5.11 T-VEC REQUIREMENTS-BASED AUTOMATED VERIFICATION (RAVE)
5.12 SUMMARY
6 THE EVALUATION OF TEST TOOLS
6.1 CASE STUDY DESIGN
6.1.1 Case study operation
6.1.2 Selection of subjects
6.1.3 Questionnaires
6.1.3.1 Prioritization of criteria
6.1.3.1.1 The Hundred-Dollar Test
6.1.3.1.2 Analytic Hierarchy Process
6.1.3.1.3 Planning Game
6.1.3.2 Background information
6.1.4 Execution of case study
6.1.4.1 Pilot study
6.1.4.2 Main study
6.2 THREATS TO VALIDITY
6.2.1 Conclusion validity
6.2.2 Internal validity
6.2.3 Construct validity
6.2.4 External validity
6.3 ANALYSIS OF RESULTS
6.3.1 Criteria development
6.3.1.1 List of criteria
6.3.2 Overall result of criteria prioritization
6.3.2.1 The respondents’ criteria group prioritization
6.3.3 Result of the prioritization of the significant respondents
6.3.3.1 Comparison between the significant respondents and all respondents
6.3.3.2 Comparison between the significant project members and all project members
6.3.3.3 Comparison between significant designers and significant testers
6.3.4 Discussion on the criteria prioritization
6.3.5 Comparison of test tools
6.3.5.1 Comparison of test tools that validate models
6.3.5.2 Comparison of test tools that test code
6.3.5.3 Comparison of test tools that execute design models
6.3.5.4 Comparison of test tools that generate test cases from models
6.3.6 Tool comparison between significant and non-significant respondents
6.3.7 Comparison of test tools with future modifications
6.3.8 Discussion
6.3.9 Conclusions from the case study
7 THESIS CONCLUSION
8 FUTURE WORK
9 ABBREVIATIONS
10 REFERENCES
APPENDIX A – JUNIT
APPENDIX B – QUESTIONNAIRE
APPENDIX C – STATISTICS
APPENDIX D – DIAGRAMS
D.1 TEST CASE GENERATION
D.2 INTEGRATION GROUP
D.3 USAGE GROUP
D.4 MODELS GROUP
D.5 MODEL DRIVEN ARCHITECTURE GROUP
D.6 TESTING GROUP
D.7 CRITERIA PRIORITIZATION
APPENDIX E – CHI-SQUARE AND SIGNIFICANCE LEVEL
E.1 CHI-SQUARE AND SIGNIFICANCE LEVEL FOR EACH PAIR
E.2 SIGNIFICANT AND NON-SIGNIFICANT PAIRS
