Model Based Product Support (MBPS) for Department of Defense Weapon Systems

Background

For eight years, we worked with the Maritime Electronic Warfare Systems Division (MEWSD) at NSWC Crane to design and implement the processes, data, and automation required to perform Total Life Cycle Systems Management (TLCSM) for weapon systems in a performance-based, data-driven manner. We followed model-based systems engineering to model over 400 acquisition and sustainment processes and captured the data using Department of Defense Architecture Framework (DoDAF) models/SysML and IBM's Rational Rhapsody modeling tool. The sustainable weapon system model defines acquisition and sustainment workflows, identifies the technical and logistics data required by those workflows, and defines the requirements for automated tools to implement the processes.


We defined and documented all functional, performance, and interface requirements for a sustainable weapon system. We conducted hundreds of information gathering sessions with acquisition and sustainment personnel to capture business processes and document system requirements. To gain further insight, we participated in various working groups and meetings to understand the data needed to monitor and analyze weapon system test events, sustainment support, and operational readiness. The goal of our model-based product support approach was to capture all these data, correlate them in one consistent model, and generate products that assist in the development and sustainment of the target system.

image146

Why a Model Based Approach?

  

We were drawn to model based product support for a number of reasons.


We wanted to make informed decisions based on data. We can make changes to the way we support a system during sustainment. For example, we can change maintenance procedures, training, spares, staffing, tools, suppliers, or packaging, or redesign parts. We need to understand the impact of these changes, which requires data and tools to evaluate. For example:

  • What is the effect on availability of adding more spares? What is the cost? (A simple sparing sketch follows this list.)
  • What is the effect on performance if we replace a part with a similar but different part?
  • How does a component change affect maintenance cost? Supply cost? Training?
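
As a concrete illustration of the first question, the sketch below estimates how supply availability for a single part changes with the number of spares stocked, assuming Poisson demand during the resupply pipeline. The demand rate, resupply time, and unit cost are invented for illustration; they do not come from any of our programs.

```python
import math

def supply_availability(annual_demand: float, resupply_days: float, spares: int) -> float:
    """Probability a demand is filled from stock, assuming Poisson demand
    during the resupply pipeline (a deliberately simplified sparing model)."""
    pipeline = annual_demand * resupply_days / 365.0  # expected demands while resupply is pending
    return sum(math.exp(-pipeline) * pipeline**k / math.factorial(k)
               for k in range(spares + 1))

# Invented part: 6 demands per year, 30-day resupply, $12,000 per spare
for spares in range(5):
    fill = supply_availability(annual_demand=6.0, resupply_days=30.0, spares=spares)
    print(f"{spares} spares: fill rate {fill:.3f}, investment ${spares * 12_000:,}")
```

In practice we rely on dedicated supportability tools for this kind of trade; the sketch only shows the shape of the question and why it needs authoritative data.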


We wanted to be able to discover problems we can fix. Our ultimate goal is to maximize operational availability (Ao, the percentage of time the system is up) and minimize cost. We needed data to predict Ao and cost during development and to compute actual values during operation. We needed analysis tools that help us isolate the factors impacting Ao and cost and help us evaluate options for addressing them.


We wanted everyone to draw from the same source of data. In the past, support products have contained conflicting information because their authors used different sources of information. We want all support products to be generated from a single, consistent source of data.


We wanted to automate the generation of support products. Support products like technical manuals, allowance lists, and installation drawings were often generated from scratch, ignoring very similar work performed during development. We hoped to save cost by automating their generation from model data.

We wanted to be able to evaluate sustainment goals like availability, reliability, and cost at any time in the acquisition life cycle. At design reviews we wanted to verify we were on track to meet these goals. During sustainment we wanted to compute actual values across the fleet. This all requires data.

image147

Benefits of a Model Based Approach

Creating a trustworthy model of correlated data sets requires effort and dedication. However, the resulting model satisfies our objectives better than traditional, poorly correlated, paper end-products.

The model allows us to generate new and useful analyses across multiple data sets.


We can predict and evaluate sustainment goals throughout the system life cycle. For example, using our authoritative data for maintenance processes and system design, we can model our maintenance strategy and predict Ao at any point in development. We can then use the same data to predict maintenance cost.
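
A minimal sketch of that prediction, assuming the familiar steady-state form of operational availability (uptime over uptime plus all downtime) and invented reliability and delay figures:

```python
def predicted_ao(mtbf_hours: float, mttr_hours: float, mldt_hours: float) -> float:
    """Operational availability: uptime divided by uptime plus downtime, where
    downtime is repair time (MTTR) plus mean logistics delay time (MLDT)."""
    return mtbf_hours / (mtbf_hours + mttr_hours + mldt_hours)

# Invented figures, not from any real design:
baseline = predicted_ao(mtbf_hours=2000, mttr_hours=4, mldt_hours=48)
improved = predicted_ao(mtbf_hours=2000, mttr_hours=4, mldt_hours=12)  # e.g., better on-board sparing
print(f"baseline Ao = {baseline:.3f}, improved Ao = {improved:.3f}")
```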


We can discover problems hindering our sustainment goals. Using captured operational data, we can evaluate the actual performance of our processes and systems in the field. When they fall short, we can evaluate changes by proposing changes to our data (e.g., the number and locations of spares, the use of more reliable parts) and re-analyzing the model.

image148

An Overview of Our Approach to MBPS

Model Data: Processes

  

We determined the types of data we needed by first modeling our processes. The DoD provides ample guidance for the acquisition of defense systems. From that guidance we modeled the processes we must perform during sustainment and the pre-sustainment processes that generate the data we need. In those models, we were careful to identify the data that were inputs and outputs to the processes.


We cataloged our processes using Department of the Navy Enterprise Architecture (DoN EA). DoN EA is a hierarchy of the capabilities implemented by the Navy. We created a subset of that hierarchy that included the capabilities for which we defined processes.

  

We make a point of naming our model a sustainable weapon system. We are not just developing a weapon system that meets its performance requirements; we are also developing the system of computers, data, and people that will allow it to be sustained after delivery (Force Support, Logistics, Corporate Management and Support). All contribute to meeting our supportability goals.


At the leaves of the hierarchy we defined activities – tasks we perform to implement the capabilities. Each activity is assigned a metric that will measure how well that activity is performed. All metrics derive from our primary goals of Ao or cost.


We define our processes by creating a set of activity diagrams for each activity. We identify the data required by, or generated by, the activity (process). Sub-activities can have their own activity diagrams, creating a hierarchy of activities. Swim lanes identify the person or organization that performs the sub-activities.
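
A minimal sketch of how such an activity hierarchy might be represented as data, with the performer (swim lane), the assigned metric, and the input/output data captured on each activity; all names are illustrative, not drawn from our actual model:

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    """One node in the activity hierarchy (all names below are illustrative)."""
    name: str
    performer: str                                      # swim lane: person or organization
    metric: str                                         # ties back to Ao or cost
    inputs: list[str] = field(default_factory=list)     # data required by the activity
    outputs: list[str] = field(default_factory=list)    # data generated by the activity
    sub_activities: list["Activity"] = field(default_factory=list)

corrective_maintenance = Activity(
    name="Perform Corrective Maintenance",
    performer="Ship's Force",
    metric="mean time to repair",
    inputs=["maintenance procedure", "fault isolation data"],
    outputs=["maintenance action record", "parts consumed"],
    sub_activities=[
        Activity("Isolate Fault", "Ship's Force", "fault isolation time",
                 inputs=["built-in test results"], outputs=["failed part identifier"]),
    ],
)
```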

  

These processes are part of our MBPS model. We identify the model data we need to implement the processes. These include data for architecture descriptions, requirements, analyses, design, verification, configuration control, manufacturing, installation and operation.

Model Architecture

  

The DoDAF defines a set of information that, by statute, must be captured and modeled early in an acquisition of a weapon system (before issuing a request for proposal). This information includes required capabilities, activities, exchanged data, metrics, a top-level system design and standards.

DoDAF data are correlated using “DoDAF-described models”. DoDAF is a good start to a life cycle management data model, because it correlates a diverse collection of data. As more types of data are generated later in acquisition we strive to correlate them into our model.

Model Data: Requirements

DoD acquisitions are pretty good at modeling requirements. Today, requirements are typically stored in a requirements modeling tool. The tool maintains derivation relationships between requirements. It also typically allocates requirements to the system components that satisfy them and to the test cases that verify them.


We have found other correlations useful. During sustainment, we often need to know the operational impact of a failure, or the operational impact of substituting one manufacturer’s part with another’s. For example, do we still meet our required detection range? We needed a correlation between a requirement (detection range) and the engineering analysis used to select the component.


To address this, for example, we modeled the relationship between detection range and the receiver sensitivity of our system. It links to the analysis, performed in another tool, that derived the required sensitivity. Later in development, we modeled a further decomposition of sensitivity into the required gain of the antenna and receiver components and linked the supporting analysis. Using this chain of correlations, we can use the analyses to map a failure or part change to its impact on sensitivity, then to its impact on detection range, and then to its impact on the mission.
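
The sketch below illustrates the idea with the one-way radar range equation, which ties receiver sensitivity to detection range against a given emitter. The transmitter and receiver values are invented, and the free-space, lossless form is a teaching simplification; the point is only that once the analysis is preserved, a change in sensitivity can be pushed through it to a change in detection range.

```python
import math

def one_way_detection_range_m(pt_w: float, gt: float, gr: float,
                              freq_hz: float, sensitivity_w: float) -> float:
    """Maximum one-way detection range: solve
    Pr = Pt * Gt * Gr * lambda^2 / ((4*pi)^2 * R^2) for R at Pr = receiver sensitivity."""
    lam = 3.0e8 / freq_hz
    return math.sqrt(pt_w * gt * gr * lam**2 / ((4 * math.pi) ** 2 * sensitivity_w))

# Invented values: 1 kW emitter, 30 dB emitter gain, 0 dB receiver gain,
# 9 GHz, -60 dBm receiver sensitivity.
baseline = one_way_detection_range_m(1e3, 1000, 1, 9e9, 1e-9)
# A substitute receiver front end that costs 3 dB of sensitivity:
degraded = one_way_detection_range_m(1e3, 1000, 1, 9e9, 2e-9)
print(f"detection range: {baseline / 1e3:.0f} km -> {degraded / 1e3:.0f} km")
```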


Model Data: Analyses

Requirements derivation is only one example of analyses that we would like to preserve in our model for sustainment.


Early in the development of a DoD weapon system, experts in military operations perform mission analyses. They identify capability gaps in the DoD's ability to perform its duties that could be filled by developing a new system. They analyze mission procedures, called Concepts of Operations (CONOPS), to address the gap. They perform mission analyses with names like Detect-To-Engage Timeline and Probability-to-Engage to identify top-level requirements for the new capability. They determine the sustainment goals of the new capability (Ao, reliability) by assessing the required Ao and reliability of the entire mission. We capture in our model the mission analyses created during development to help us determine the mission impact of system failures during sustainment.


Once it is determined that a new system is needed to fill the capability gap, system analyses are used to derive system requirements from mission requirements. Mission and system analyses are often abandoned after development. We capture mission analyses and system analyses in our model so that we can determine the mission impact of system failures or redesigns. For example, we can report to the crew the new lower detection range of a sensor that resulted from a failure.


There are a number of supportability analyses performed on a system during its development. A few are Failure Mode Effects and Criticality Analysis (FMECA), Level of Repair Analysis (LORA), Fault Tree Analysis (FTA), and Maintenance Task Analysis (MTA). These analyses help us design a system that is sustainable and help us design the infrastructure that will be used to support it. Although these analyses are performed using the requirements and design of the system, they are rarely linked to the models that contain these data. Supportability analyses are typically performed in a different tool than the one that contains the system design. We strive to link these tools so that changes in the system design are immediately reflected in all analyses, including supportability analyses.


Trade studies are a form of analysis used throughout development. Early on, they are used to compare potential solutions to a gap, some of which may not even involve developing a new system. Later trade studies compare alternative functional and physical designs. A carefully prepared life cycle management model can provide data for these trade analyses that are trustworthy and consistent with all other activities using the model.


We want to optimize Ao and cost, specifically total ownership cost (TOC). It follows that we need a way to determine Ao and TOC. Ao and cost, or Affordable System Operational Effectiveness (ASOE), involve more than just analyzing the weapon system, as shown in Figure 8 <<citation>>. Development of the weapon system affects Design Effectiveness. Our life cycle management model contains our processes and infrastructure for supporting the weapon system because Process Efficiency also contributes to ASOE. Our life cycle management model must contain enough developmental data that we can predict the ASOE of the proposed system during development. It must also contain enough operational data (like down time and spares consumption) to compute actual ASOE during operation.

Model Data: Design

In a typical DoD weapon system development, the design process is divided into functional design, preliminary [physical] design, and detailed [physical] design. It is important to capture these designs, and the analyses that justify them, for use during sustainment. DoD systems are often fielded for decades. Redesigns to insert new technology or replace obsolete parts are not unusual. Retaining designs and the analyses that explain them is critical to understanding the impacts of any change we make.

Designs become the authoritative source for the functional and physical structure of the system. All other data sets that rely on that structure (e.g., supportability analyses, integration and testing, manufacturing, installation, configuration management (CM), training, sparing) should be derived from a single consistent design.


After preliminary design, detailed designs are prepared over a number of disciplines (e.g., mechanical, electrical, software, human-machine interface). These should be derived from a single preliminary design.

Ideally, tools that require design information should not keep their own copy of that information but should pull the data from the authoritative tool. Also, ideally, any change to the design should be instantly reflected in the downstream tools. For instance, replacing a part in the design would automatically update the reliability block diagrams and automatically compute a new system reliability.
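
A minimal sketch of that recomputation for a simple series reliability block diagram, assuming exponential failure times and invented failure rates:

```python
import math

def series_reliability(failure_rates_per_hour: dict[str, float], mission_hours: float) -> float:
    """Series reliability block diagram: the system works only if every block works;
    with exponential failure times the block failure rates simply add."""
    total_rate = sum(failure_rates_per_hour.values())
    return math.exp(-total_rate * mission_hours)

# Invented block failure rates (failures per hour):
blocks = {"antenna": 5e-6, "receiver": 2e-5, "processor": 1e-5}
print("baseline 720-hour reliability:", round(series_reliability(blocks, 720), 4))

# Swap in a substitute receiver with a different rate and recompute immediately:
blocks["receiver"] = 4e-5
print("after part substitution:      ", round(series_reliability(blocks, 720), 4))
```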


We have found that this is not always practical. Sometimes one tool needs to import a data set (e.g., a design) from another tool before an analyst can work on it, creating two copies of the data. Some thoughtful effort is required to mitigate the risk of the downstream database becoming out of date. One strategy is to carefully version all data sets, perhaps in the CM system. When one data set relies on others, its display products all print the version of the source data used. Automated integrity checks could be developed that search for products developed from out-of-date data sets and flag warnings.
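
A minimal sketch of such an integrity check, assuming each derived product records the versions of the source data sets it was built from (data set names and versions are illustrative):

```python
# Versions currently held by the CM system (illustrative).
cm_versions = {"preliminary_design": "3.2", "fmeca": "2.0", "lpd": "5.1"}

# Each derived product records what it was built from.
derived_products = [
    {"name": "Maintenance Procedures",
     "built_from": {"lpd": "5.1", "preliminary_design": "3.2"}},
    {"name": "Reliability Prediction",
     "built_from": {"fmeca": "1.4"}},
]

for product in derived_products:
    stale = {src: ver for src, ver in product["built_from"].items()
             if ver != cm_versions.get(src)}
    if stale:
        print(f"WARNING: {product['name']} was built from out-of-date data: {stale}")
```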


During development we currently rely on our technical reviewers to detect data set inconsistencies in contractor designs. However, once we take over the data set after development is complete we use our CM system to carefully version all data sets.

Tools

No one tool models all life cycle management data. Our strategy has been to store each type of data in a tool specifically designed for that type. That tool becomes the single authoritative source for that data. We religiously avoid duplicated data sets because they inevitably become different. We create linkages between the tools to correlate the data.


We store requirements in a requirements management tool (e.g., DOORS). Architecture descriptions, processes, functional design, and preliminary design are modeled in a system architecture tool (e.g., Rhapsody, MagicDraw). Mission, mathematical, and engineering analyses are modeled using a wide variety of tools: there are custom DoD tools for mission analyses and commercial tools for math and engineering analyses (e.g., Matlab, Simulink). Our system architecture tools allow us to reference these analyses and link them to requirement derivations or design decisions. The analysis tools can be invoked directly from the system architecture tool; for example, double clicking on the <<rationale>> box in a part of our model invokes Matlab loaded with the referenced analysis. Supportability analyses are modeled in supportability analysis tools (e.g., Windchill Quality Solutions and the OPUS Suite). The authoritative source for functional and preliminary designs is the system architecture tool. The authoritative source for detailed physical design varies by engineering discipline, i.e., there are separate tools to model electrical, mechanical, human-machine interface, and software detailed designs. The supportability tool imports design information from these authoritative sources.


A technical data package contains a number of post-design products necessary to manufacture and install the system, such as bills of material and drawings. We strive to automatically generate this information from model data as much as we can, for example, generating bills of material and manufacturing drawings from our solid models.


Equipment configurations and engineering change history are stored in a configuration management (CM) tool. The initial "as-designed" configuration is imported from the design tools. However, the configuration status accounting of the manufactured "as-built" and fielded "as-maintained" configurations is managed by the CM tool. We use our CM tool to version control all model data associated with a particular configuration, except operational data. So for a particular version of a system we can reproduce the applicable requirements, processes, architecture description, design, and analyses, as well as the traditionally stored technical data package.


Operational data collected from fielded systems such as failure reports, down time, maintenance actions performed, readiness assessments performed and built-in measurements are stored in custom log files. We intend to standardize these log files across weapon systems so that one set of analyses can be developed for them. 
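
A minimal sketch of what one standardized log record might look like, written as one JSON object per line so the same shore-side analyses can parse logs from any weapon system; the field names are assumptions, not an established schema:

```python
import json
from datetime import datetime, timezone

# One illustrative record (JSON Lines format: one object per line).
record = {
    "timestamp": datetime(2024, 5, 1, 13, 45, tzinfo=timezone.utc).isoformat(),
    "system": "EW-SUITE",
    "hull": "DDG-XXX",
    "event_type": "failure_report",   # failure_report | maintenance_action | bit_result | downtime
    "component": "receiver_assembly_A1",
    "detail": {"bit_code": "RX-017", "downtime_hours": 3.5},
}

with open("ops_log.jsonl", "a") as log:
    log.write(json.dumps(record) + "\n")
```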

Model Linkages

To achieve our objectives, the diverse data sets in our model must be linked. To understand the impact of a design change, we need to be able to follow links from the design back to requirements and analyses. To achieve data consistency, we need all users of a particular type of data to link to a single authoritative source.


We find our architecture modeling tool can establish many of these linkages. The analysis (from an analysis tool) that helped derive a requirement (from the requirements modeling tool) is linked to that derivation step (in the architecture tool). In this case we did not have to import the requirements into the architecture tool; the architecture modeling tool could reference requirements in the requirements modeling tool. However, if the analysis changes, updating the related requirement requires manual data entry.


One solution to this manual-entry problem is the use of SysML parametric diagrams. For example, within our model, the "One Way Radar Range Equation" represents a computation performed in a separate equation-solving tool. The diagram shows how the characteristics of a missile and various EW systems (the SLQ's) on the left are used to derive a sensitivity requirement on the right. This provides a link between a requirement (sensitivity) and the analysis that derived it (the One Way Radar Range Equation).

  

Sometimes one tool analyzes data from another tool – the authoritative source. This requires a transfer of the data between tools. Often this is accomplished by exporting the source data in a well-known format, like an Excel spreadsheet or comma separated values (CSV) file, and then importing the data into the second tool. This creates a second copy of the data, so care must be taken to keep the copy aligned with the source.


The following are some linkages between data sets that we found useful; a minimal sketch of representing such linkages follows the list:

  • Capability to activities that implement it
  • Activity to person/organization that performs it
  • Activity to the manpower required to perform it
  • Requirement to requirement from which it was derived
  • Requirement to analysis that derived it
  • Requirement to test case that verifies it
  • Requirement to function that performs it
  • Function to physical component that implements it
  • Design decision to trade analysis that justified it
  • Failure to requirements lost
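
A minimal sketch of such linkages, represented as (source, relation, target) triples with a simple impact query over them; the element names are illustrative:

```python
# Cross-tool linkages as simple triples (illustrative element names).
links = [
    ("REQ-DetectionRange",      "derived_by",   "ANALYSIS-OneWayRangeEqn"),
    ("REQ-ReceiverSensitivity", "derived_from", "REQ-DetectionRange"),
    ("REQ-ReceiverSensitivity", "allocated_to", "COMP-Receiver"),
    ("REQ-ReceiverSensitivity", "verified_by",  "TEST-SensitivityMeasurement"),
]

def impact_of(component: str) -> list[str]:
    """Walk allocation and derivation links backwards from a component to the
    requirements that may be affected by a change to it."""
    reqs = [src for src, rel, tgt in links if rel == "allocated_to" and tgt == component]
    parents = [tgt for src, rel, tgt in links if rel == "derived_from" and src in reqs]
    return reqs + parents

print(impact_of("COMP-Receiver"))
# ['REQ-ReceiverSensitivity', 'REQ-DetectionRange']
```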


We recommend keeping the number of linkages small and the linkages themselves simple. We have seen models with many complex linkages and found them difficult to understand and impossible to maintain.

Generated Products

Many of the products generated during development and operation of a system can automatically be generated from the model data.


During development, requirements specifications can be generated from the requirements modeling tool, including required traceability matrices. Plans, such as System Engineering, Test and Evaluation, Configuration Management and Life Cycle Management plans can be generated from the process models. Design documents can be generated from the design model data. Manufacturing/installation diagrams and bills of material can be generated from solid models. Analyses can contribute to trade studies.


During sustainment, technical documentation can be generated. We currently generate an interactive electronic technical manual (IETM) from logistics product data (LPD) and solid models. The LPD contains the maintenance procedures. We correlate each maintenance step in the procedures with a view of the solid models, highlighting the parts that need to be manipulated.
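
A minimal sketch of that correlation, pairing each maintenance step with a solid-model view and the parts to highlight, then emitting a simple HTML page; the identifiers and file names are assumptions, not our actual IETM schema:

```python
# Illustrative maintenance steps correlated with solid-model views.
steps = [
    {"step": 1, "text": "Remove four mounting bolts from the receiver assembly.",
     "model_view": "views/receiver_bolts.png", "highlight": ["BOLT-1", "BOLT-2", "BOLT-3", "BOLT-4"]},
    {"step": 2, "text": "Disconnect RF cable W3 from connector J2.",
     "model_view": "views/receiver_j2.png", "highlight": ["W3", "J2"]},
]

with open("procedure.html", "w") as out:
    out.write("<h1>Remove Receiver Assembly</h1>\n")
    for s in steps:
        out.write(f"<h2>Step {s['step']}</h2>\n<p>{s['text']}</p>\n")
        highlights = ", ".join(s["highlight"])
        out.write(f"<img src='{s['model_view']}' alt='highlight: {highlights}'>\n")
```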


We plan to generate provisioning information like the allowance parts lists, failure detection data for built-in test software and specialized reliability block diagrams for computing system readiness. We plan to generate maintainer training from supportability analyses like maintenance task analysis (MTA) and operator training and manuals from operator task analyses.

MBPS Functions We've Prototyped

image149

Maintenance Procedure Viewer

Automatically generates maintenance procedures from LPD and solid models. Displays maintenance procedures in an easy-to-follow manner that interacts with the three-dimensional solid models.

LPD Evaluator

Verifies logistics data is sufficient for sustainment. 

Solid Model Evaluator

Verifies three-dimensional computer aided design solid models are sufficient for sustainment.

Fault Detection Fault Isolation Table Generator

Generates built-in test parameters from FMECA.

Maintenance Resource Generator

Captures the resources (parts, tools, personnel, skills, test equipment) required to perform weapon system maintenance.

Maintenance Procedure Scriptor

Develops maintenance procedures by combining narratives in LPD with solid model visualizations.

Technical Bulletin Editor

Automatically generates and posts maintenance bulletins to technicians across the fleet.

Readiness Assessment Data Generator

Generates and disseminates data required to compute weapon system readiness for its intended missions.

Logger

Captures operational data aboard ship and sends it back to shore.

Operational LPD Calculator

Computes actual component reliabilities, time-to-repair and other measures of effectiveness from operational data.

Configuration Reporter

Captures and reports as-maintained configurations to shore.

Fleet Feedback Recorder

Captures feedback from shipboard maintainers and displays it to analysts on shore. Captures analyst responses and displays them for maintainers.

OBRP Request Generator

Automatically submits changes to the OBRP allocated to each ship.