Malicious alteration is problematic because it undermines the internal assumptions of systems. For example, protective approaches depend on assumptions such as hardware components executing commands properly and operating environment access controls actually controlling access. Under malicious alteration, these assumptions are violated, and everything that depends on them is potentially invalid. This is differentiated from supply chain issues in that alteration after acceptance is distinct from alteration in the supply chain: it may involve an internal subversion or a subversion during execution.
Ignore detection of malicious alteration and wait until consequences reveal attacks.
This is the common approach today. In essence, the system is assumed to operate properly after initial testing unless and until it appears to do the wrong thing from the standpoint of an operator or an externally observed event (e.g., something blows up). Testing tends to be limited to test conditions based on the model of how the system is supposed to work, not on arbitrary malicious alteration. Ignoring detection of alteration and waiting until an alteration is obvious from its consequences has two major problems:
Use antivirus, anti-malware, and similar methods. In low-surety environments, antivirus, anti-malware, and similar methods may be used to detect widely known subversions. However, they are of little or no use against most internal malicious alteration.
Examine log files for indicators of changes. This tends to be a laborious process with substantial false positives and negatives, specifically because log files are not normally designed to collect or reveal this sort of information. However, on systems with strong audit capabilities (e.g., recording which programs and users write which files, and when), there are useful indicators.
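As a minimal sketch of this kind of log examination, the following scans audit lines for writes to watched files by users outside an authorized set. The log line format here is an assumption for illustration, not a real audit standard; actual log parsing depends entirely on what the platform records.

```python
import re

# Hypothetical audit-log line format (an assumption, not a real standard):
#   2024-05-01T03:12:44 user=svc_backup op=WRITE path=/usr/bin/login
LINE_RE = re.compile(
    r"^(?P<ts>\S+)\s+user=(?P<user>\S+)\s+op=(?P<op>\S+)\s+path=(?P<path>\S+)$"
)

def suspicious_writes(log_lines, watched_paths, authorized_users):
    """Flag writes to watched files by users outside the authorized set."""
    hits = []
    for line in log_lines:
        m = LINE_RE.match(line)
        if not m:
            continue  # unparsable lines would need human review in practice
        if (m.group("op") == "WRITE"
                and m.group("path") in watched_paths
                and m.group("user") not in authorized_users):
            hits.append((m.group("ts"), m.group("user"), m.group("path")))
    return hits
```

Even a filter this simple illustrates the false-positive problem: any legitimate but unlisted user triggers an alert, and anything the log fails to record is invisible.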
Check file and record dates, times, and system information for changes. While malicious actors can relatively easily avoid such detection on many systems, they do not always do so, and in combination with inconsistency detection, such alterations may be very hard to carry out undetected.
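A minimal sketch of such a metadata check: record size and modification time for a set of files, then compare later. As noted above, an attacker can reset timestamps after tampering, so a clean result here is weak evidence on its own and is most useful combined with checksum or consistency checks.

```python
import os

def snapshot(paths):
    """Record easily checked metadata for each file: size and mtime."""
    return {p: (os.stat(p).st_size, os.stat(p).st_mtime_ns) for p in paths}

def metadata_changes(baseline):
    """Return files whose current metadata disagrees with the baseline.

    Attackers can forge mtime, so absence of a hit here proves little;
    presence of a hit is a cheap first indicator worth investigating.
    """
    changed = []
    for path, (size, mtime) in baseline.items():
        st = os.stat(path)
        if (st.st_size, st.st_mtime_ns) != (size, mtime):
            changed.append(path)
    return changed
```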
Detect changes with cryptographic checksums and/or integrity shells. These methods are very good at detecting changes (malicious or not), but they may be subverted on untrustworthy systems, and investigation is required to determine the legitimacy of changes unless strong change controls are also in place.
Use redundancy and consistency checking to detect subversions. This involves looking for hard-to-forge sets of independent indicators that should be consistent unless the system has been subverted.
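One way to make the redundancy idea concrete, as a hedged sketch: store two independent integrity indicators over a set of records, a per-record digest and a digest chained across all records, so that a forger must alter both consistently to pass the check. The specific scheme below is illustrative, not a prescribed design.

```python
import hashlib

def record_digest(record: str) -> str:
    """Digest of a single record."""
    return hashlib.sha256(record.encode()).hexdigest()

def build_ledger(records):
    """Store each record with its own digest, plus one digest chained over
    all records: two independent indicators that must stay consistent."""
    chain = hashlib.sha256()
    entries = []
    for r in records:
        chain.update(r.encode())
        entries.append((r, record_digest(r)))
    return entries, chain.hexdigest()

def consistent(entries, chain_digest):
    """Check both indicators; disagreement suggests subversion."""
    chain = hashlib.sha256()
    for r, d in entries:
        if record_digest(r) != d:
            return False
        chain.update(r.encode())
    return chain.hexdigest() == chain_digest
```

The point is that forging one record while also repairing its per-record digest still breaks the chained digest; the more independent the indicators, the harder a consistent forgery becomes.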
Use Trusted Platform Modules to detect changes in hardware and software. This method is a hardware analogue of integrity shells, cryptographic checksums, and related methods, applied at system startup. As such, it avoids many of the problems of untrusted systems, but it is rarely applied at adequate depth to be effective against insider abuse.