Measuring Productivity and Quality in Model-Based Design


by Arvind Hosagrahara and Paul Smith

Accurate measurements of productivity and quality are essential for balancing workload, creating predictable schedules and budgets, and controlling quality. Traditional software development processes include well-established methods for measuring both, chief among them Lines of Code (LOC). A measure of the size of a software application, LOC is the foundation for productivity measurements (LOC per unit of work) and quality measurements such as defect densities (defects per LOC).
With the introduction of Model-Based Design, organizations require a different measure of the software development process. For example, Model-Based Design enables automatic code generation from graphical models, which means that the average engineer can produce far more LOC per unit time than is possible with hand coding, with virtually no coding defects. While these productivity gains are grounded in real process improvements, new metrics are required to properly instrument and measure those improvements.
The automatic capture of process metrics in a modern development process increases data accuracy and overall productivity. Practical experience has shown that an organization quickly learns to "manage" manually captured metrics to produce mandated improvements, often without improving the underlying process. Additionally, requiring developers to manually capture a comprehensive set of process metrics burdens them and distracts them from their primary work. Model-Based Design offers the capability to extract metrics automatically, minimizing cost and time to market while avoiding quality-related issues.
This article describes an automatic, noninvasive measurement technique for gathering accurate metrics. We describe specific measurements that should be captured when using Model-Based Design and introduce a tool that can capture these process metrics in the Simulink® and Stateflow® environment. The tool is available for download on MATLAB Central.
Overview of Model-Based Design

At the heart of Model-Based Design are Simulink models: graphical, hierarchical, executable block-diagram representations of the physical system, the environment, and the algorithm behavior (for example, a control, signal processing, or communications application).
The models provide:
  • A behavioral description of the embedded software and of the physical objects and environment on which it acts
  • An executable specification that can be tested using simulation to ensure that it meets all functional requirements
  • A tool that enables the design team to communicate information about the design
  • A specification from which real-time software code can be automatically generated for testing, prototyping and embedded implementation
  • Automatically generated code that can be used by itself or integrated with other code to form the complete embedded application
Model-Based Design enables design engineers to quickly evaluate multiple design options by testing and optimizing their algorithms in the modeling environment before they deploy them as an embedded system, reducing design time and development and implementation costs.
Because the design engineer is working in a simulation environment, design iterations are often much faster and more flexible. Design options are validated and tested by tracking the behavior of the design via simulated tests. Once the system is optimized and performance meets expectations, the model is used to build a hardware-based real-time prototyping system for testing.
When the design meets specifications, the Simulink model can be used to generate production-quality code using Real-Time Workshop® and Real-Time Workshop Embedded Coder. This stage often involves a rigorous test process before final system deployment.
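For readers new to the workflow, the following minimal sketch shows how code generation can be driven from the MATLAB command line using the documented rtwbuild command. The model name ctrl_model is a placeholder invented for this example, not a model shipped with any product.

    % Minimal sketch: generate code from a Simulink model with
    % Real-Time Workshop. 'ctrl_model' is a hypothetical model name.
    mdl = 'ctrl_model';
    load_system(mdl);   % load the model without opening the editor
    rtwbuild(mdl);      % generate and build code from the model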
The engineering process followed by many modern control system development organizations can be described through the “V” design and development diagram shown in Figure 1.

Figure 1. The design "V" process.
The “V” is a two-dimensional view of a development process in which the x-axis represents time and the y-axis, abstraction. As you move down the left-hand side of the V you add more detail to the design, with implementation at the vertex. As you move up the right-hand side of the V, you reach increasing levels of integration and test, culminating in a working product. Formal process measurements are required at each step of the “V” design cycle.
With Model-Based Design, a project that could take months to complete using a traditional development process iterates through the various phases of the “V” process in hours or days. This increase in efficiency enables the rapid evolution of designs and promotes a spiral development method, in which rapid iterations zero in on the final design solution.
The diagram in Figure 2 shows the major process elements used in Model-Based Design. These elements can be mapped onto the “V,” a waterfall process model [2, 3], or any other development process.

Figure 2. Major elements of Model-Based Design.
Traditional Embedded Software Metrics

Most organizations that work in a C or C++ embedded software development environment use a variety of tools to measure effort and productivity. These include LOC, used with the popular Constructive Cost Model (COCOMO) [4]. Other common metrics include the change in LOC from one design iteration to the next, computational complexity, and time spent per design task. These metrics are usually collected manually or semi-automatically.
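As a point of reference, the basic form of COCOMO estimates effort from code size alone. The sketch below implements the basic organic-mode equations; the coefficients are the published basic-COCOMO values, and the 32 KLOC figure is invented for illustration.

    % Basic COCOMO, organic mode: effort and schedule from code size.
    kloc     = 32;                      % hypothetical size, thousands of LOC
    effort   = 2.4 * kloc^1.05;         % estimated effort, person-months
    schedule = 2.5 * effort^0.38;       % estimated schedule, months
    fprintf('Effort: %.1f person-months, schedule: %.1f months\n', ...
            effort, schedule);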
An organization that uses a traditional design approach will typically measure the following:
  • System Design
    • Time required to produce each design iteration
    • Total number of design iterations required
    • Number of defects introduced
    • Time required to repair design defects
    • Number of functional requirements
    • Number of functional test scenarios
    • Objective measures of design size and complexity
  • Design Validation
    • Time required to produce each design iteration
    • Number of defects introduced
    • Time required to repair design defects
    • Number of test-scenario simulations run
    • Time required to run each test scenario simulation
    • Total number of design iterations required
  • Embedded Software
    • Number of source lines of code (SLOC, or LOC)
    • Time required to generate code or implement functionality (Time/LOC)
    • Defects introduced per LOC
    • Time required to repair each software defect
  • Unit Test
    • Number of test cases
    • Time required to prepare each test case
    • Number of defects detected in each test case
    • Time required to run each test case
    • Time required to repair each defect
    • Defect detection and removal efficiency
  • System Test
    • Time required to execute each test case
    • Number of defects detected in each test case
    • Time required to repair each defect
    • Defect detection and removal efficiency
  • Field Deployment
    • Number of defects detected
    • Time or cost to fix each defect
Measurements in Model-Based Design

With Model-Based Design, traditional metrics can result in misleading conclusions. For example, code is generated automatically, producing thousands of LOC in minutes. As a result, using LOC to measure design effort and time invested can produce a misleading result. To measure process efficiencies accurately, metrics systems must emphasize upstream design activities in the modeling environment.
For example, in Model-Based Design, a task such as switching the structure of a controller from a PID loop to a feed-forward network typically takes a few mouse clicks in a model. (In a traditional process, the software designer would need to rework large sections of code, a time-consuming and error-prone operation.) This small change in the modeling environment could result in the modification of several thousand lines of generated code, skewing productivity and quality metrics while failing to capture the actual time or effort invested in making the modification.
For certain processes, such as the Design for Six Sigma (DFSS), performance measurement depends on the ability to quantitatively track and manage projects (both individually and collectively) [5]. When Model-Based Design is employed in a DFSS project to meet productivity and quality objectives, it is essential to accurately measure the actual effort required [6].
Model-Based Design uses the following measurements of process efficiency and productivity:
  • Development time schedules and performance against those schedules
  • Modeling development resource requirements and associated costs
  • Model size, complexity, and functionality
  • Model quality
  • Size of the automatically generated code
  • Simulation speed and overall time spent simulating models
  • Structural coverage metrics collected directly from the model
Development Time Schedules

Model-Based Design requires specific measurements of each item in a development plan. Project managers set development schedules at the outset of a project and track them using one of several commercially available tools. Once the project is initiated, the developers must link schedule requirements to actual progress. In Model-Based Design, they can do this by linking specific model components to line items in a schedule and maintaining and reporting status in the form of percent complete or some other objective measure. Simulink provides documented and open APIs that enable users to develop interfaces and extensions for tracking these types of items.
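As a simple illustration of what such an extension might look like, the sketch below uses the documented get_param/set_param interface to store a percent-complete figure in each subsystem's Description field and then roll the figures up into a status report. The model name and the 'PercentComplete:' tagging convention are assumptions made for the example, not part of any product.

    % Sketch: track percent-complete per subsystem via the Description field.
    mdl = 'ctrl_model';                               % hypothetical model name
    load_system(mdl);
    subs = find_system(mdl, 'BlockType', 'SubSystem');
    % Tag one subsystem (normally done by the responsible engineer):
    set_param(subs{1}, 'Description', 'PercentComplete: 75');
    % Roll up a status report from the tags:
    for k = 1:numel(subs)
        d   = get_param(subs{k}, 'Description');
        tok = regexp(d, 'PercentComplete:\s*(\d+)', 'tokens', 'once');
        if ~isempty(tok)
            fprintf('%s is %s%% complete\n', subs{k}, tok{1});
        end
    end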
Modeling Development Resource Requirements

To predict modeling resource requirements, the project manager must estimate both the size of the model required and modeling productivity. Such estimates depend on a historical account of modeling efforts and rates of development. Project complexity also affects resource requirements and overall productivity. In Model-Based Design, two metrics are used to obtain these measurements:
  • Objective measure of model content—can include model block counts and the numbers of signals, layers in the hierarchy, block parameters, states, masked blocks and mask parameters, state transition diagrams, events, and specific types of blocks. A weighted sum of all these measures is useful in measuring overall content (see the sketch after this list).
  • Time required to produce content—must be estimated or measured by the engineers doing the work.
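The sketch below shows one way to compute such a weighted content score using the documented find_system interface. The particular counts and weights are illustrative assumptions; each organization should choose its own and then keep them stable.

    % Sketch: weighted model-content score from simple object counts.
    mdl = 'ctrl_model';                                 % hypothetical model name
    load_system(mdl);
    paths   = find_system(mdl, 'Type', 'block');        % all block paths
    nBlocks = numel(paths);
    nLines  = numel(find_system(mdl, 'FindAll', 'on', 'Type', 'line'));
    nSubs   = numel(find_system(mdl, 'BlockType', 'SubSystem'));
    % Hierarchy depth: deepest '/'-separated block path (ignores escaped
    % slashes in block names, which is acceptable for a rough sketch).
    depth = max(cellfun(@(p) sum(p == '/'), paths));
    % Illustrative weights -- pick once, then hold stable across the project.
    score = 1.0*nBlocks + 0.5*nLines + 2.0*nSubs + 3.0*depth;
    fprintf('Content score for %s: %.1f\n', mdl, score);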
Model Complexity

The key to making a practical measurement process work is to design a mechanism for computing and measuring complexity and then keep that metric stable throughout the project. It is tempting to continually refine a complexity measurement to get the “right” or “best” metric, but evolution of this measure will result in different measurements for the same model, leading to schedule and resource allocation errors. At the end of each project, it is appropriate to review and refine the measurement system, as long as one can go back into the historical record and make new measurements on previously archived blocks of functionality.
Model Quality

Model quality is measured by defect densities, which must be calculated by the engineers who track design performance and functionality. This can take the form of number of defects detected per unit time or defects per unit size of the model. Automation of this measure enables an organization to implement advanced automated testing procedures.
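As a minimal sketch of the calculation, defect density can be normalized either by engineering time or by a model-content measure such as the weighted score described above; all of the counts below are invented for illustration.

    % Sketch: model defect density by size and by time.
    defectsFound  = 14;      % hypothetical count from the defect tracker
    contentScore  = 1250;    % hypothetical weighted model-content score
    hoursTracked  = 320;     % hypothetical engineering hours in the period
    densityBySize = defectsFound / contentScore;  % defects per content unit
    densityByTime = defectsFound / hoursTracked;  % defects per hour
    fprintf('%.4f defects/content unit, %.3f defects/hour\n', ...
            densityBySize, densityByTime);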
Code Size

Traditional tools can be used to measure the size of automatically generated code. This metric matters chiefly during the transition to Model-Based Design, when it demonstrates the effectiveness of Model-Based Design to product development managers, and when targeting a memory-constrained embedded processor. Modeling effort can be linked to LOC to highlight the productivity gains achieved over traditional hand coding; code size and modeling effort measurements should be presented together to show productivity metrics.
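One simple way to measure generated code size from MATLAB itself is sketched below: it counts non-blank, non-comment-leading lines in the .c files of a build directory. The directory name is an assumption, and the comment filter is deliberately crude (it does not track block comments spanning multiple lines), so treat it as a rough LOC counter only.

    % Sketch: rough LOC count for generated .c files in a build directory.
    buildDir = 'ctrl_model_ert_rtw';          % hypothetical build folder
    files    = dir(fullfile(buildDir, '*.c'));
    totalLoc = 0;
    for k = 1:numel(files)
        fid = fopen(fullfile(buildDir, files(k).name), 'r');
        while true
            line = fgetl(fid);
            if ~ischar(line), break; end
            t = strtrim(line);
            % Skip blank lines and lines that open as comments (crude filter).
            if ~isempty(t) && ~strncmp(t, '//', 2) && ~strncmp(t, '/*', 2)
                totalLoc = totalLoc + 1;
            end
        end
        fclose(fid);
    end
    fprintf('Generated LOC in %s: %d\n', buildDir, totalLoc);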
Simulation Speed and Structural Coverage

To enable accurate objective measurements, simulation speed should be measured and reported as a ratio between model size or complexity (or both) and absolute speed, together with platform-specific parameters such as CPU type, clock speed, and RAM.
Additionally, an objective measure of the total time spent simulating models should be captured to ensure thorough model execution. Project managers will soon question why models are being created if they are not being used to simulate alternative designs, to run validation and verification scenarios, or to support advanced tuning and optimization of controls. If batch simulations are used for validation and verification, this must also be noted, because the engineers participate only in the setup and post-analysis of the results.
The Simulink Accelerator, a Simulink companion product for accelerating and optimizing model performance, can collect performance data while simulating the model. The resulting simulation profile report shows how much time Simulink takes to execute each simulation method, indicating where to focus efforts to optimize model simulation speed.
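A minimal way to capture both numbers is sketched below: wall-clock simulation time from tic/toc, normalized by a simple size measure and recorded alongside the platform, plus the documented Profile model parameter to turn on the Simulink profiler. The model name is again a placeholder.

    % Sketch: time a simulation and report speed relative to model size.
    mdl = 'ctrl_model';                              % hypothetical model name
    load_system(mdl);
    nBlocks = numel(find_system(mdl, 'Type', 'block'));
    tic;
    sim(mdl);                                        % run the simulation
    elapsed = toc;                                   % wall-clock seconds
    fprintf('%s: %.2f s for %d blocks (%.4f s/block) on %s\n', ...
            mdl, elapsed, nBlocks, elapsed/nBlocks, computer);
    % To collect a per-method profile instead, enable the Simulink profiler:
    set_param(mdl, 'Profile', 'on');
    sim(mdl);                                        % produces a profile report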
Structural coverage metrics collected at the model level are an important measure of the quality and completeness of the test cases being developed, and they provide ideas for algorithm optimization. Simulink Verification and Validation, another Simulink companion product, provides a cyclomatic complexity metric, a measure of the structural complexity of the model that approximates the McCabe complexity measure [2, 7] for code generated from the model.
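In the Simulink Verification and Validation environment, model coverage can also be collected programmatically with the documented cvtest, cvsim, and cvhtml commands, as sketched below for an assumed model name.

    % Sketch: collect model coverage data and publish an HTML report.
    mdl = 'ctrl_model';                     % hypothetical model name
    load_system(mdl);
    testObj = cvtest(mdl);                  % create a coverage test object
    covData = cvsim(testObj);               % simulate and record coverage
    cvhtml('coverage_report', covData);     % write an HTML coverage report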
The final section of this article describes a custom tool that was designed to measure the time and effort spent modeling with Simulink and Stateflow. The tool was built based on an aggregation of customer process measurement requirements, using MathWorks products.

