from IPython.display import Latex
from IPython.display import Image
from IPython.core.display import HTML
A stakeholder is any group or individual that is affected by or has a stake in the product or project. The key players for a project are called the key stakeholders. One of the key stakeholders for your project is always the customer. The customer can be different depending on which level in the systems hierarchy one is working at. For engineers working a few levels down the systems hierarchy, the customer may be the leader of a team that takes that engineer's product and integrates it into the larger system. At the highest level, the customer is the person or organization purchasing the product.
Other stakeholders may be more difficult to identify, and could include Congress, advisory planning teams, program managers, mission partners, the media, prime contractors, regulatory agencies, end users, etc. The table below shows some examples of stakeholders in a NASA science mission at various phases of its life cycle. For commercial missions, these stakeholders may be quite different.
Image("stake.png", width=800)
For your class projects, your TAs and I will play the role of the customer and/or principal investigator for each project. The success of a system depends entirely on satisfying stakeholders. It is not about maximizing performance or minimizing cost; it is about satisfying stakeholder needs.
Since the success of your mission depends on satisfying stakeholder needs, it is clearly very important to understand their expectations for the mission. This can be difficult, since the needs and expectations of many stakeholders may be qualitative and fuzzy. For the most part, these needs will be independent of the system itself; they will be some goal that the system you design will accomplish. Stakeholder expectations are organized into needs, goals, and objectives (NGOs), each of which is progressively more specific. Needs are defined in the answer to the question “What problem are we trying to solve?” Goals address what must be done to meet the needs, i.e., what the customer wants the system to do. Objectives expand on the goals and provide a means to document specific expectations. (Rationale should be provided where needed to explain why the need, goal, or objective exists, any assumptions made, and any other information useful in understanding or managing the NGO.)
Needs: A single statement that drives everything else. It should relate to the problem that the system is supposed to solve but not be the solution. The need statement is singular. Trying to satisfy more than one need requires a trade between the two, which could easily result in failing to meet at least one, and possibly several, stakeholder expectations.
Example from Landsat: Monitor changes in the Earth's surface.
Goals: An elaboration of the need, which constitutes a specific set of expectations for the system. Goals address the critical issues identified during the problem assessment. Goals need not be in a quantitative or measurable form, but they should allow us to assess whether the system has achieved them.
Example from Landsat Data Continuity Mission: The goal of the LDCM, consistent with U.S. law and government policy, is to continue the acquisition, archival, and distribution of multi-spectral imagery affording global, synoptic, and repetitive coverage of the Earth's land surfaces at a scale where natural and human-induced changes can be detected, differentiated, characterized, and monitored over time. - SMRD
Example from JWST: The primary goal of the JWST is to observe the early universe, at an age between 1 million and a few billion years. - JWST mission requirements doc
Objectives: Specific target levels of outputs the system must achieve. Each objective should relate to a particular goal. Generally, objectives should meet four criteria.
Examples from Landsat Data Continuity Mission (SMRD):
Objectives may be somewhat fuzzy/imprecise. They should specify what the system is supposed to do, without specifying how the system will do it. We derive requirements from these objectives. Requirements are not fuzzy at all. They are unambiguous, concise, measurable, unique, consistent, and isolated.
For your projects, you are given objectives. Your first task, for the SRR, is to derive requirements from these objectives.
Image("tasks.png", width=800)
Requirements definition is an iterative process through which vague stakeholder needs are progressively refined into specific, unambiguous, quantitative requirements. This is often done in parallel with mission concept definition, since the concept and the requirements inform one another. The ambiguity about various concepts is reduced during this process.
After SRR, the requirements are placed into configuration management.
There are high-level (or system-level) requirements, and there are lower level requirements. We typically begin with the high level requirements and use those to inform and derive lower level requirements. There are different types of requirements, and a very specific set of guidelines for properly writing them.
Requirements are how we specify the system that is to be built. For spacecraft, the systems are simply too complex and the cost of design changes is too high to take the engineering approach that you might take for something like commercial product development. It would be too expensive to build, test, iterate, build, test, iterate, etc. for the entire system (though we may do that for components of the system). Instead, we must all agree (engineers and stakeholders) very precisely on the specifications to which the system should be built, and then we build to those specifications.
Requirements specify the system in terms of:
Requirements specify the problem, not the solution, and they form the basis for the system's design, manufacture, verification, and operation.
The graphic below, from the NASA SEH, illustrates the requirements definition process and identifies typical inputs, outputs, and activities to consider in addressing technical requirements definition.
Image("requirements.png", width=800)
The first step in the technical requirements definition process is to establish the top-level requirements, which capture the technical problem to be solved, the scope of that problem, and the design boundary. This typically involves the following activities:
These top-level requirements come from stakeholder needs, the concept of operations, regulations, etc. With an overall understanding of the constraints, physical/functional interfaces, and functional/behavioral expectations, the requirements can be further defined by establishing performance and other technical criteria. The expected performance is expressed as a quantitative measure to indicate how well each product function needs to be accomplished.
Functional requirements: Functional requirements define what functions need to be performed to accomplish the objectives.
Performance requirements: Performance requirements define how well the system needs to perform the functions.
Technical requirements come from a number of sources, including functional, performance, interface, environmental, safety, human interfaces, standards, and in support of the "ilities" (reliability, sustainability, producibility, etc.). With the system-level requirements established, we then delegate and allocate requirements to successively lower subsystems. Each of these subsystems will also have functional and performance requirements, and a few other flavors of requirements.
We decompose/refine system requirements to successively lower level subsystems, to components, and to manufacturing processes, materials and tolerances, and integration back to the system. The figure below shows how this flowdown typically looks. This will generally involve the allocation of system budgets (mass, power, volume, $\Delta V$, data rate, reliability, etc.) to various subsystems. Deciding how much of each budget to allocate to each subsystem is an iterative process that can be informed by experience/rules of thumb, formal optimization methods, and guessing/iterating.
Image("flowdown.jpg", width=500)
These requirements come in a variety of flavors, each of which is explained below.
Functional requirements: Functional requirements define what functions need to be performed to accomplish the objectives. These are generally derived from system-level functional requirements.
Performance requirements: Performance requirements define how well the system needs to perform the functions. These are generally derived from system/subsystem level functional requirements.
Interface requirements: Requirements that specify the functional or structural interfaces among subsystems.
Customer requirements: These will include product expectations, mission objectives, operational concerns, and/or measures of effectiveness and suitability. It may require careful analysis to extract functions from these, and success criteria are generally provided.
Design requirements: These are requirements derived from process specifications (e.g. MIL STDs), or internal best practices (tolerances, trade-secret guidelines, design for manufacturability, etc.). These are often associated with "design for X."
Verification requirements: Requirements that specify the way in which verification must proceed—test requirements, analysis methodologies, etc. (We'll go over verification in some detail a bit later).
Any of the above types of requirements could flow down from a higher-level requirement. In that case, each will also fall into one or both of two categories for flowed-down requirements: derived and allocated.
Derived requirements: Any requirements flowed down from a higher level.
Allocated requirements: Any requirement established by dividing or allocating a higher-level requirement into more than one requirement at a lower level.
You'll have noticed that all of the example requirements that we've seen take a very particular form. There's a very specific way to write a valid requirement. A valid requirement is one which is unambiguous, isolated, concise, measurable, unique, and consistent. By following a set of rules, we can make certain that our requirements are valid ones.
1. Preferred verb is "shall."
Anything else ("should," "ought," etc.) implies a soft requirement, to which the system will not be held during verification.
In general, when you're putting together these requirements, you should imagine that you are sitting across from a lawyer. That lawyer is attempting to prove that your system does not meet requirements by exploiting any vague language, inconsistency, unmeasurable guarantee, etc. Write your requirements such that they stand up to this imaginary lawyer's scrutiny.
2. The grammar establishes the flow of the requirement.
A requirement should be a single sentence. The subject is a system, element, subsystem, component, etc., which establishes the functional level at which the requirement is relevant. The verb often implies the type of verification (test, inspect, analyze, etc.). The object of the verb is often a Technical Performance Measure (TPM).
Which of the below is a good example (bold), and which is a bad example?
3. Requirements are unambiguous.
Unambiguous requirements are free of words and phrases such as "reasonable," "acceptable," "minimize," and "where applicable." Unambiguous requirements are not a matter of opinion, and cannot be misinterpreted. Quantitative requirements are often unambiguous, but qualitative ones can also be valid. Remember, don't give the imaginary lawyer any room for interpretation.
Which of the below is good (bold), and which is bad?
4. Requirements are isolated.
Each "shall" statement belongs in a separate, unique requirement (i.e., no conjunctions). Constraining each paragraph to contain no more than one "shall" allows one to take full advantage of the viewing, reporting, and traceability functions of requirements-management tools. Isolation allows full traceability, discrete referencing, and one-to-one verification cross referencing.
6. Requirements are measurable.
Each requirement will be verified (by test, analysis, inspection, etc.). If a requirement cannot be measured, it cannot be verified; a measurable requirement is the only type that can be verified. (Yes/no is a type of measurement.)
6. Requirements are concise.
Don’t include explanations, definitions, or other information unrelated to the specification; use a glossary, a list of acronyms, etc. in the documentation instead.
7. Requirements are unique.
In long documents created by teams of people, it is easy to state the same requirement multiple times in slightly different forms. The work to be done is deciding which version of the requirement to retain and which to delete.
Summarized below, from the NASA SEH.
Image("benefits.png", width=500)
Before any requirement is accepted, it must be validated. This is different from being verified, which we will discuss momentarily. Requirements are validated against the stakeholder expectations, the mission objectives and constraints, the concept of operations, and the mission success criteria. Validating requirements can be broken into six steps:
Are the requirements written correctly?: Identify and correct requirements “shall” statement format errors and editorial errors. See above section.
Are the requirements technically correct?: A few trained reviewers from the technical team identify and remove as many technical errors as possible before having all the relevant stakeholders review the requirements. The reviewers should check that the requirement statements (a) have bidirectional traceability to the baselined stakeholder expectations; (b) were formed using valid assumptions; and (c) are essential to and consistent with designing and realizing the appropriate product solution form that will satisfy the applicable product life cycle phase success criteria.
Do the requirements satisfy stakeholders?: All relevant stakeholder groups identify and remove defects.
Are the requirements feasible?: All requirements should make technical sense and be possible to achieve.
Are the requirements verifiable?: All requirements should be stated in a fashion and with enough information that it will be possible to verify the requirement after the end product is implemented.
Are the requirements redundant or over-specified?: All requirements should be unique (not redundant to other requirements) and necessary to meet the required functions, performance, or behaviors.
Verification, which we'll discuss in a moment, answers the question "Did we build the system right?" Validation, by contrast, answers the question "Did we build the right system?"
For a complex system, maintaining traceability among all requirements is of critical importance. Typically, one uses a requirements management tool (e.g., DOORS or a SysML tool) to generate requirements statements, requirements traceability (a matrix or tree), verification cross-reference matrices, lists of TBRs and TBDs, etc. For each requirement, the metadata shown in the table below is stored.
Image("metadata.png", width=800)
In your SRR, it should be clear which requirements are derived/allocated from other requirements.
A requirement is only as useful as it is verifiable. If we can't prove that our system satisfies a particular requirement, then that requirement is useless. Tests without requirements are wasted effort. Requirements without tests will prevent the development cycle from being complete. Every requirement must be verified. This verification may take the form of:
Test: uses special equipment to measure quantitative characteristics.
Demonstration: a special kind of test that qualitatively demonstrates correct operation of the system without physical measurements (e.g., reliability)
Inspection: e.g., visual examination
Analysis: e.g., theory, simulation
Analogy/similarity: e.g., if flight software is the same as previous mission and context has not changed, some tests can be waived.
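As a small illustration of tracking which method verifies which requirement, the sketch below tallies assignments into a rudimentary verification cross-reference. The requirement IDs and method assignments are hypothetical.

# Sketch of a rudimentary verification cross-reference matrix.
# Requirement IDs and method assignments are hypothetical.
verification_assignments = {
    "SYS-1": "analysis",
    "COM-1": "test",
    "STR-4": "inspection",
    "FSW-2": "similarity",
}

methods = ["test", "demonstration", "inspection", "analysis", "similarity"]
for method in methods:
    covered = [rid for rid, m in verification_assignments.items() if m == method]
    print(f"{method:14s}: {', '.join(covered) if covered else '-'}")

# Flag any requirement without a recognized verification method
unassigned = [rid for rid, m in verification_assignments.items() if m not in methods]
print("requirements without a recognized method:", unassigned or "none")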
As shown in the diagram below, system requirements are allocated to lower-level subsystem requirements. Low-level requirements are verified first, then subsystems are verified, and finally the full system is verified against the system requirements.
Image("v.png", width=800)
With the functional and performance requirements for each subsystem established (the goal of your SRR assignment), we must then design an architecture that meets those requirements (the goal of your SDR assignment). For many of the subsystems, you may have multiple architecture options from which to choose. For attitude control, for example, perhaps you must decide whether you are going to use reaction wheels or CMGs. And perhaps you need to decide whether you are going to dump the accumulated momentum from those devices using torque coils or thrusters. For other subsystems too, you will have multiple options, and the "correct" choice may not be clear.
There is a field of study devoted to making these sorts of decisions, and you have many options at your disposal. I am only going to present a single method: the Pugh matrix.
Image("pugh.png", width=800)
Image("pugh2.png", width=800)
You'll note that one of the criteria in the above Pugh matrix example is "risk." This has a formal definition.
Risk: A measure of the probability and severity of adverse effects.
Reliability: The ability of a system or component to perform its required functions under stated conditions for a specified period of time.
Opportunity: A measure of the probability and the benefit of beneficial effects.
When performing a risk analysis, you're asking yourself the following three questions about the alternative under consideration: What can go wrong? How likely is it? What are the consequences?
The answer to "what are the consequences" place a particular failure into one of two categories: a hard failure or a soft failure. A hard failure results in complete loss in functionality. These sorts of failures are easy to analyze, and rarely encountered in practice. A soft failure, by contrast, results in partial loss of funcitonality and is much more common and difficult to analyze.
"Risk" is some function of probability and consequence. Does it make sense that our contours for equal risk will look something like those shown below?
Image("risk.png", width=500)
Alternatives with low probability and low consequence are low risk. Alternatives with high probability and high consequence are high risk. Furthermore, low-consequence alternatives are always low risk, regardless of their probability. High-consequence events, by contrast, are low risk if they are low probability and high risk if they are high probability. High-probability events should have a risk directly proportional to their consequence.
This is sufficient information to draw the approximate curves above. A common tool for evaluating risk is the stoplight chart, shown below, which approximates the above set of curves with a discrete number of blocks. Different stoplight charts may use different numbers of blocks, but they all represent a quantization of the continuous risk landscape above.
Image("stoplight.png", width=500)
By quantifying the probability of a particular failure, and the consequence of that particular failure (which may not be a straightforward task), we can classify each alternative as high risk, medium risk, or low risk. This is a method for coming up with a rank metric for use in the trade studies described above.
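A minimal sketch of that classification follows, assuming a hypothetical 1-5 likelihood/consequence scale and green/yellow/red cutoffs; a real chart's bins would come from the project's risk management plan, and the failure modes listed are invented for illustration.

# Sketch of placing failure modes on a discrete stoplight-style grid.
# The 1-5 scales, cutoffs, and failure modes are hypothetical.
def classify_risk(likelihood, consequence):
    """likelihood and consequence each scored 1 (low) to 5 (high)."""
    score = likelihood * consequence
    if score >= 15:
        return "high (red)"
    elif score >= 6:
        return "medium (yellow)"
    return "low (green)"

failure_modes = {
    "single thruster clogs (redundant)": (2, 2),
    "primary battery cell shorts":       (2, 5),
    "star tracker blinded near perigee": (4, 3),
}
for name, (p, c) in failure_modes.items():
    print(f"{name:35s} -> {classify_risk(p, c)}")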
Dr. Peck tells a story about a particularly interesting spacecraft failure, which was caused by the following chain of events:
The manufacturing process failed for the following reasons:
In this particular case, this was a soft failure which resulted in degraded (but not total loss of) functionality. The spacecraft was 8-for-7 thruster redundant, so there was an extra thruster which could pick up the slack for the failed one.
What is the risk of this particular failure? What is the consequence? Where would you place it in the stoplight chart?
These potential faults are organized into fault trees, as shown below. Failures at the system, subsystem, and component levels are linked by logic gates in order to visualize which combinations of hard/soft failures result in subsystem or system failure. This is an SDR-level tool, since it requires an established spacecraft architecture to construct, though tools like this may also be used to inform architecture decisions.
Image("tree.png", width=500)