Our commitment to safe, transparent, and responsible AI-assisted security operations.
AI can enhance cyber defence, but only when applied with discipline, transparency, and human oversight. At Cyber Defence, we use artificial intelligence to support analysts, not replace them. EmilyAI — our internal SOC assistant — accelerates triage, enrichment, correlation, and documentation, while all critical judgement and decision-making remain firmly in human hands.
Our ethical AI framework is grounded in principles inspired by Alan Turing’s early guidance on computational reasoning, blended with modern standards of safety, privacy, auditability, and operational accountability.
Principles

We follow a strict set of principles to ensure AI is used responsibly and safely within Cyber Defence:
• Human-led, AI-supported: analysts remain fully responsible for investigative decisions, containment actions, and escalation. EmilyAI provides context, not conclusions.
• Privacy and data minimisation: EmilyAI only processes the minimum metadata required to support triage and correlation; no raw client data is exposed to AI workflows (a minimal sketch follows this list).
• Transparency of capability: EmilyAI’s functions are clearly scoped. She does not predict outcomes, make security decisions, or take actions on client systems.
• Auditability: all EmilyAI interactions are logged, reviewed, and attributable.
• Security by design: EmilyAI runs entirely within Cyber Defence’s secure internal environment with strict access controls, never as a public tool.
• Continuous oversight: updates are reviewed for correctness, safety, and operational value before deployment.
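As an illustration of the data-minimisation principle, the sketch below shows how an allowlist can strip an alert down to structured metadata before anything reaches an AI workflow. The field names and the strip_alert helper are hypothetical assumptions for illustration, not EmilyAI's actual schema.

```python
# Hypothetical sketch of data minimisation: only an allowlisted set of
# metadata fields ever reaches the AI workflow. Field names are
# illustrative, not EmilyAI's real schema.

ALLOWED_FIELDS = {
    "alert_id",          # internal reference, contains no client data
    "rule_name",         # detection rule that fired
    "severity",          # normalised severity level
    "source_type",       # e.g. "edr", "firewall", "email_gateway"
    "first_seen",        # timestamps used for correlation
    "last_seen",
    "indicator_hashes",  # hashed indicators suitable for TI lookups
}

def strip_alert(raw_alert: dict) -> dict:
    """Return only the minimal metadata an analyst chose to share.

    Anything outside the allowlist (raw payloads, usernames, document
    contents, credentials) is dropped before the alert context is
    handed to the assistant.
    """
    return {k: v for k, v in raw_alert.items() if k in ALLOWED_FIELDS}
```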
Scope
Clarity of scope is central to our ethical use of AI. EmilyAI is an assistant — not an autonomous actor.
• EmilyAI can: summarise alerts, extract indicators, perform TI lookups, identify historical patterns, draft case notes, highlight anomalies, and reduce noise.
• EmilyAI cannot: take containment actions, modify client systems, create or deploy detections, change configurations, suppress alerts, or operate without analyst supervision (a sketch of how this boundary can be enforced follows this list).
• EmilyAI supports: detection engineering teams, by surfacing repeated patterns or false positives and helping prioritise tuning work.
• EmilyAI never: accesses raw data, customer documents, credentials, or sensitive material. She only analyses structured metadata intentionally provided by analysts.
• EmilyAI operates: under strict RBAC, network segmentation, monitoring, and internal audit, just like any other critical SOC system.
• EmilyAI is: a tool for augmenting human judgement, never a substitute for qualified SOC analysts or incident responders.
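To illustrate how a scope like this can be enforced in code rather than by convention alone, here is a minimal, hypothetical dispatcher that fails closed: any action outside a documented allowlist is refused. The function names are assumptions for illustration and do not describe EmilyAI's real interface.

```python
# Illustrative only: a hard allowlist of assistant functions. Anything
# not listed here (containment, configuration changes, detection
# deployment, alert suppression) simply has no code path.

from typing import Callable

PERMITTED_ACTIONS: dict[str, str] = {
    "summarise_alert":    "Condense an alert into analyst-readable notes",
    "extract_indicators": "Pull IoCs from supplied metadata",
    "ti_lookup":          "Query internal threat-intelligence sources",
    "find_similar_cases": "Surface historical patterns",
    "draft_case_notes":   "Produce a draft for analyst review",
}

def dispatch(action: str, handler: Callable, *args, **kwargs):
    """Run a handler only if the action is within documented scope.

    Requests for unknown actions fail closed with an error rather
    than being attempted.
    """
    if action not in PERMITTED_ACTIONS:
        raise PermissionError(f"'{action}' is outside EmilyAI's scope")
    return handler(*args, **kwargs)
```

The design point is that prohibited behaviour is not merely discouraged: it cannot be reached, because no handler for it exists and unknown actions are rejected before execution.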
Turing-inspired principles

Alan Turing proposed that computational systems should be evaluated not only by the tasks they perform, but by the reliability, predictability, and controllability of their behaviour. We apply these principles directly to EmilyAI:
• Reliability: EmilyAI is expected to behave consistently, repeat investigative steps accurately, and provide stable outputs for analysts.
• Predictability: no hidden logic. EmilyAI’s functions are known, documented, and deliberately narrow in scope.
• Controllability: analysts determine when EmilyAI is used and validate all AI-supported suggestions.
• Safety: AI must not create new attack paths, operational risks, or ambiguity in responsibility.
This approach ensures that AI remains a safe, predictable, and beneficial contributor to SOC operations — never an unpredictable or autonomous actor.
Safeguards
We apply multiple safeguards to ensure AI is used responsibly and safely across security operations.
• Every EmilyAI-supported investigation step must be initiated and validated by a human analyst.
• EmilyAI runs entirely on Cyber Defence infrastructure, never transmitting data externally.
• EmilyAI only receives the structured metadata necessary for context and correlation.
• All EmilyAI updates follow strict review, testing, and approval workflows.
• All AI interactions, prompts, and outputs are logged for assurance and forensic traceability (a sketch of one such record follows this list).
• Only authorised SOC staff may access EmilyAI functions, and permissions are continually reviewed.
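As a concrete illustration of the logging safeguard, the sketch below shows one possible shape for an attributable, tamper-evident audit record. The field names and audit_record helper are assumptions for illustration, not EmilyAI's actual log format.

```python
# Hedged sketch of "every interaction is logged and attributable" as a
# structured, append-only audit record. Fields are illustrative.

import hashlib
import json
from datetime import datetime, timezone

def audit_record(analyst_id: str, case_id: str, prompt: str, output: str) -> str:
    """Build one audit entry for a single assistant interaction.

    The prompt and output are stored verbatim and also hashed, so a
    later review can detect tampering with the logged text.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "analyst_id": analyst_id,   # who initiated the step
        "case_id": case_id,         # which investigation it belongs to
        "prompt": prompt,
        "output": output,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    return json.dumps(entry, sort_keys=True)
```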
Risks

Responsible AI requires recognising and mitigating risks proactively. Cyber Defence monitors:
• Potential model drift or unexpected responses
• Incorrect or misleading summarisation
• Over-reliance by inexperienced analysts
• Emerging regulatory or ethical requirements
Mitigations include continuous training, oversight, model validation, and periodic manual review of AI-supported investigations.
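One way the model-validation mitigation can work in practice is a regression harness that replays a fixed set of approved prompts after each update and flags outputs that drift from a reviewed baseline. The sketch below assumes a generic model callable and a simple similarity threshold; it is illustrative, not Cyber Defence's actual validation pipeline.

```python
# Illustrative drift check: replay approved prompts after an update
# and flag responses that diverge from the reviewed baseline.

import difflib

def validate_release(model, baseline: dict[str, str], threshold: float = 0.9) -> list[str]:
    """Return the prompts whose new output drifts from the baseline.

    `model` is any callable mapping a prompt to a response; `baseline`
    maps regression prompts to previously approved answers.
    """
    failures = []
    for prompt, approved in baseline.items():
        candidate = model(prompt)
        similarity = difflib.SequenceMatcher(None, approved, candidate).ratio()
        if similarity < threshold:
            failures.append(prompt)
    return failures
```

A failed check would route the update back through human review before deployment, in line with the change-control safeguard above.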
Our clients operate in sectors where confidentiality, safety, and trust are critical. Misuse or misconfiguration of AI could have regulatory, operational, or reputational impact. EmilyAI is deliberately designed to deliver value without introducing risk.
By keeping AI responsibly constrained and human-directed, we ensure that AI becomes a force multiplier — not a liability.
To understand how AI enhances SOC365 without compromising safety, integrity, or control, visit our dedicated page about EmilyAI.