Episode 21 — Evaluate Security Process Automation Solutions Without Automating Bad Decisions

When people first hear the phrase security process automation, they often imagine a helpful machine that takes messy, slow work and turns it into something fast, neat, and reliable. That picture is partly true, and it is also where many teams get into trouble, because speed is not the same as quality and consistency is not the same as correctness. A beginner-friendly way to think about automation is that it is a multiplier, not a mind, and it multiplies whatever you feed it, including confusion, missing context, and weak judgment. The goal for this lesson is to build a mental habit that says, before we automate, we evaluate, and we evaluate the process as carefully as we evaluate the technology we want to use. If you can learn to spot bad decisions hiding inside normal-looking steps, you will avoid building a very efficient machine that produces the wrong outcomes at scale.

Before we continue, a quick note: this audio course is a companion to our two course books. The first book covers the exam and provides detailed guidance on how best to pass it. The second is a Kindle-only eBook containing 1,000 flashcards you can use on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.

A useful starting point is to define what a security process actually is, because many people confuse a process with a checklist or a tool. A process is the repeatable way decisions get made and actions get taken, including who is involved, what information they use, what rules guide them, what evidence they produce, and what happens when something goes wrong. Security processes cover things like approving access, responding to incidents, reviewing system changes, managing vulnerabilities, and proving compliance. Each of those has judgment points where a human decides whether something is acceptable, risky, urgent, or safe enough. Automation is the act of turning parts of that process into software-driven steps so the work happens faster, more consistently, or with fewer manual handoffs. The evaluation challenge is to separate steps that are truly mechanical from steps that depend on context, values, or tradeoffs that need careful reasoning.

To evaluate automation options well, you need to understand why the process exists in the first place, because the purpose drives what must not be compromised. Many security processes are designed to reduce uncertainty by forcing information to be collected and reviewed before something changes in a system. Others exist to create accountability, meaning it is always clear who approved what and based on what evidence. Some exist to prevent known failure patterns, such as granting broad permissions because it seems convenient or delaying patching until it becomes an emergency. If you automate a process without understanding its protective purpose, you may remove the friction that was intentionally slowing risky behavior down. Beginners sometimes assume friction is always bad, but in security, the right kind of friction is what keeps the organization from making fast mistakes. Evaluating automation therefore includes asking which parts of the friction are waste, and which parts are guardrails that protect people from their own shortcuts.

A common misconception is that if a process is slow, the best fix is to automate it, but slow processes can be slow for many different reasons. Some are slow because the steps are unclear, so everyone stops to interpret what the step means each time. Some are slow because inputs are missing, so work bounces back and forth as people ask for the same details repeatedly. Some are slow because the process is doing too much, like requiring approvals that do not add real value or requiring evidence that nobody uses. Others are slow because the decisions are high impact, and careful deliberation is appropriate. If you automate a slow process without diagnosing the true cause, you risk automating confusion, automating rework, or automating unnecessary approvals. A better approach is to look at the process as a flow of decisions and information and identify exactly where time is lost and why.

One simple way to detect a bad decision hidden in a process is to look for steps that reward the easiest answer instead of the safest or most accurate answer. For example, imagine a process where a request for access is approved quickly if the requestor says it is urgent, and nobody checks whether the request is truly urgent or whether a narrower permission would work. That is not a process designed to manage risk; it is a process designed to move tickets. If you automate that, you will create a fast path for over-permissioning, and the system will slowly fill with people who have access they no longer need. Another sign is a step that relies on a label rather than evidence, such as trusting a system because someone marked it as compliant without verifying controls. Automation cannot fix a lack of evidence, and it often makes label-based decisions harder to challenge later.
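To make that failure mode concrete, here is a minimal Python sketch, using hypothetical field names, that contrasts a rule rewarding the easiest answer with one that demands evidence before fast-tracking a request.

# Hypothetical access request represented as a plain dictionary.
request = {
    "requester": "jsmith",
    "resource": "payroll-db",
    "urgent": True,           # self-declared by the requester
    "justification": "",      # no evidence attached
    "least_privilege_checked": False,
}

def naive_approve(req):
    # Rewards the easiest answer: anyone who says "urgent" gets fast-tracked.
    return req["urgent"]

def evidence_based_approve(req):
    # Fast-tracks only when the urgency claim is backed by evidence
    # and a narrower permission has been considered.
    return (
        req["urgent"]
        and len(req["justification"]) > 0
        and req["least_privilege_checked"]
    )

print(naive_approve(request))           # True  -> automates over-permissioning
print(evidence_based_approve(request))  # False -> sends the request back for evidence

The point of the sketch is not the syntax; it is that the first rule encodes the label while the second encodes the evidence the label is supposed to represent.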

It also helps to distinguish between automation that gathers facts and automation that makes decisions, because the two carry very different levels of risk. Gathering facts is often safer to automate, such as collecting logs, checking whether a device has a required configuration, or pulling a list of vulnerabilities from a scanner. Making decisions is riskier, such as automatically approving a change, automatically granting access, or automatically closing an incident because it matches a pattern. Decision automation can still be valid, but it must be based on clear criteria and strong confidence that the criteria represent the organization’s intent. If the criteria are vague, incomplete, or outdated, the automation will lock in the wrong logic. A strong evaluation habit is to ask, for each proposed automated decision, what could go wrong if the decision is wrong, and how quickly would we notice and correct it.
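One way to keep that distinction visible is to separate fact-gathering code from decision code, as in this illustrative Python sketch; the data sources, field names, and criteria here are invented for the example, not drawn from any specific tool.

# Fact gathering: safer to automate, because a wrong fact can be re-checked.
def gather_facts(host):
    # In a real pipeline these would come from an inventory, a scanner, and config checks.
    return {
        "host": host,
        "has_required_config": True,
        "open_critical_vulns": 0,
    }

# Decision making: riskier to automate, so the criteria are explicit and the
# function says when it is not confident enough to decide on its own.
def decide_patch_exception(facts):
    if facts["open_critical_vulns"] > 0:
        return ("deny", "critical vulnerabilities are open")
    if not facts["has_required_config"]:
        return ("escalate", "configuration state unknown or non-compliant")
    return ("approve", "meets all written criteria")

decision, reason = decide_patch_exception(gather_facts("web-01"))
print(decision, "-", reason)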

Because security decisions involve tradeoffs, you should learn to evaluate whether the process contains explicit decision criteria or hidden, informal judgment. Explicit criteria are written rules, thresholds, and definitions that different people would apply in similar ways. Hidden judgment is when the rule is not really written down and the decision depends on who happens to be working that day. Hidden judgment is a warning sign for automation, because software needs the criteria made explicit to behave predictably. If you attempt to automate a step that currently depends on informal judgment, you will be forced to encode that judgment, and if you encode it poorly, you will create consistent but incorrect outcomes. The best move is often to stabilize the decision first, by defining what information is required and what standards must be met, then automate only after the rule is understood and agreed upon.
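Once the judgment has been stabilized, the criteria can be written down as data that humans and software apply the same way. A rough sketch, with made-up thresholds and field names:

# Explicit, written-down criteria expressed as data rather than tribal knowledge.
ACCESS_CRITERIA = {
    "max_privilege_level": 2,       # anything above this needs human review
    "require_manager_approval": True,
    "allowed_durations_days": [1, 7, 30],
}

def meets_criteria(request, criteria=ACCESS_CRITERIA):
    # Returns the list of reasons the request fails, so the outcome is explainable.
    failures = []
    if request["privilege_level"] > criteria["max_privilege_level"]:
        failures.append("privilege level exceeds automatic-approval threshold")
    if criteria["require_manager_approval"] and not request["manager_approved"]:
        failures.append("manager approval missing")
    if request["duration_days"] not in criteria["allowed_durations_days"]:
        failures.append("non-standard access duration")
    return failures

print(meets_criteria({"privilege_level": 3, "manager_approved": True, "duration_days": 7}))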

Another practical evaluation method is to look at the quality of inputs and the reliability of data sources, because automation is only as trustworthy as the information it consumes. If a process pulls asset information from an inventory that is incomplete, an automated policy that depends on that inventory will fail in unpredictable ways. If a process relies on people to tag systems correctly, any automation based on tags will reflect human inconsistency and errors. This is not a reason to avoid automation, but it is a reason to build checks and feedback loops that detect bad input early. Good automation designs often include validation steps that prevent the system from acting when data is missing or contradictory. When evaluating solutions, you should ask how the tool handles uncertainty, such as whether it can pause for review, request missing context, or provide a clear explanation of why it made a choice.
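Here is one way such a validation step might look in Python; the required fields and the idea of pausing for review are illustrative assumptions, not a particular product's behavior.

REQUIRED_FIELDS = ["asset_id", "owner", "environment", "criticality"]

def validate_asset_record(record):
    # Refuse to act on missing or contradictory data; route to a human instead.
    problems = [f for f in REQUIRED_FIELDS if not record.get(f)]
    if record.get("environment") == "production" and record.get("criticality") == "none":
        problems.append("production asset marked with no criticality")
    return problems

record = {"asset_id": "srv-104", "owner": "", "environment": "production", "criticality": "none"}
problems = validate_asset_record(record)

if problems:
    print("PAUSE FOR REVIEW:", "; ".join(problems))
else:
    print("Data is complete and consistent; safe to proceed automatically.")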

A key principle for avoiding automated bad decisions is to keep humans responsible for the riskiest judgment calls, at least until the organization has strong evidence that automation is performing safely. This does not mean humans must do all work manually, because that can be slow and error-prone, but it means automation should support human decision-making rather than replace it by default. For example, automation can prepare a recommended approval with supporting evidence, highlight anomalies, and enforce that the reviewer checks certain facts before approving. Over time, if the team learns that a certain category of requests is consistently low risk and well understood, that category can be moved toward more automation. This is a staged approach, where you start with assistive automation, then gradually increase autonomy only where outcomes are predictable and reversible. Evaluating a solution includes checking whether it supports that staged adoption rather than forcing a one-time leap to fully automated decisions.
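A staged approach can be expressed as a simple policy in code: automation drafts a recommendation with evidence, and only categories the team has already judged low risk and reversible skip the human step. The categories and fields below are invented for illustration.

# Categories the organization has agreed are low risk, well understood, and reversible.
AUTO_APPROVE_CATEGORIES = {"read_only_dashboard", "shared_calendar"}

def handle_request(category, evidence):
    recommendation = {
        "category": category,
        "evidence": evidence,
        "recommended_action": "approve" if evidence["policy_checks_passed"] else "deny",
    }
    if category in AUTO_APPROVE_CATEGORIES and evidence["policy_checks_passed"]:
        recommendation["decided_by"] = "automation"
    else:
        # Assistive mode: the tool prepares the decision, a human remains accountable.
        recommendation["decided_by"] = "human reviewer"
    return recommendation

print(handle_request("read_only_dashboard", {"policy_checks_passed": True}))
print(handle_request("database_admin", {"policy_checks_passed": True}))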

It is also important to examine how automation changes incentives and behavior, because the process is not just steps, it is how people react to steps. If automation makes it easier to submit risky requests, people may submit more risky requests. If automation makes approvals faster, reviewers may stop reading carefully because they assume the tool handled it. If automation automatically closes tickets, people may stop reporting small issues because they think nothing will happen. These behavior shifts can create security gaps that did not exist in the manual process. A good evaluation includes thinking about second-order effects, like whether the automation encourages thoughtful inputs, discourages gaming of the process, and keeps accountability clear. You want automation to make good behavior easier and bad behavior harder, not the other way around.

When comparing automation solutions, you should pay attention to transparency, because opaque automation is where bad decisions can hide and persist. Transparency means the tool can explain what it did, what rule it applied, what evidence it used, and what it could not determine. For a beginner, a helpful analogy is a math teacher showing work rather than only giving the answer, because you can inspect the reasoning and spot mistakes. If a tool produces an outcome without explanation, you will struggle to debug failures, and people will either trust it blindly or reject it entirely. A transparent system supports review, audits, and continuous improvement, which are essential in security where conditions change. Evaluating transparency includes looking at logging, decision traces, and whether you can reproduce why a decision occurred after the fact.
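In practice, transparency often comes down to recording a decision trace alongside the outcome. A minimal sketch of what such a record could contain, with assumed field names:

import json
from datetime import datetime, timezone

def record_decision(outcome, rule_id, evidence, unknowns):
    # A decision trace that lets someone reproduce why the outcome happened.
    trace = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "outcome": outcome,                 # what the automation did
        "rule_applied": rule_id,            # which written rule it used
        "evidence_used": evidence,          # the facts it consulted
        "could_not_determine": unknowns,    # what it admits it did not know
    }
    print(json.dumps(trace, indent=2))      # in a real system, write to an audit log
    return trace

record_decision(
    outcome="ticket closed as false positive",
    rule_id="alert-triage-rule-12",
    evidence={"alert_source": "edr", "matched_known_benign_hash": True},
    unknowns=["user intent"],
)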

A related issue is control and governance, meaning who can change the automation rules and how those changes are reviewed. Automation rules are powerful because they steer decisions across many systems, so rule changes are essentially changes to security policy. If anyone can adjust a rule quickly without review, the organization may accidentally weaken security or introduce inconsistent behavior. If changes are too difficult, the rules may become outdated and keep enforcing yesterday’s assumptions. A balanced approach includes version control for rules, peer review for significant changes, testing in a safe environment, and a clear rollback path when something goes wrong. Evaluating an automation solution should include checking whether it supports safe change management for the automation logic itself, not just for the systems the automation touches.
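Treating the automation logic itself as something that goes through change control might look roughly like this; the metadata fields are assumptions for the sketch, not a specific product's schema.

# Each rule version carries the governance metadata needed to audit the change.
rule_versions = [
    {"version": 3, "rule": "auto-close informational alerts", "reviewed_by": "alice", "status": "active"},
    {"version": 4, "rule": "auto-close informational and low alerts", "reviewed_by": None, "status": "proposed"},
]

def activate(version, versions=rule_versions):
    candidate = next(v for v in versions if v["version"] == version)
    if candidate["reviewed_by"] is None:
        return "blocked: rule change has not been peer reviewed"
    for v in versions:
        # Keep the previous version marked so there is a clear rollback path.
        if v["status"] == "active":
            v["status"] = "rollback_candidate"
    candidate["status"] = "active"
    return f"version {version} is now active"

print(activate(4))  # blocked until someone reviews the change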

Finally, you should build the habit of evaluating automation by imagining failure modes and designing guardrails, because security engineering assumes things will fail eventually. Ask what happens if the automation is wrong, unavailable, or compromised, and whether the process can continue safely. Ask whether the automation can be bypassed and under what conditions, because sometimes bypass is needed for emergencies, but bypass is also where abuse happens. Ask whether the automation creates a single point of failure by concentrating decision-making in one system that everyone depends on. Strong solutions provide graceful degradation, meaning they fail in a safe way, such as requiring manual review rather than silently approving. When you can describe failure modes and safeguards clearly, you are demonstrating the kind of disciplined thinking this certification expects.
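The fail-safe idea can be captured in a few lines: when the automation cannot decide, is unavailable, or hits an error, the default is manual review, never silent approval. A sketch under those assumptions, with invented request fields:

def automated_decision(request):
    # Stand-in for the real decision engine; may raise or return None when unsure.
    raise TimeoutError("decision service unavailable")

def decide_with_guardrails(request):
    try:
        result = automated_decision(request)
    except Exception as error:
        # Graceful degradation: failure routes to a human, it never silently approves.
        return {"action": "manual_review", "reason": f"automation failed: {error}"}
    if result is None:
        return {"action": "manual_review", "reason": "automation could not decide"}
    return result

print(decide_with_guardrails({"type": "firewall_change", "id": 481}))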

In the end, the real skill is not choosing an automation tool, but learning how to protect decision quality while improving speed and consistency. Automation works best when it is built on a process that has clear intent, reliable inputs, explicit criteria, and accountability that survives scrutiny. It fails when it hides judgment inside vague rules, trusts weak data, or removes friction that was preventing risky shortcuts. As you move forward, keep the mindset that automation should make the right decision easier to reach and easier to defend, not merely faster to execute. If you can evaluate security process automation through that lens, you will help organizations avoid the painful lesson of discovering that they automated the very mistakes they were trying to prevent.
