The Security Engineer will join a team whose role is to assure the enterprise security architecture, with a focus on reviewing and authoring Architecture Decision Records (ADRs) and Security Architecture Review Board (SARB) submissions. The role blends deep technical acumen with emerging expertise in Generative AI (GenAI) and agentic systems, ensuring secure design, governance, and responsible adoption of intelligent automation within the enterprise.
Key Responsibilities
Review & Advisory
- Lead security reviews of solution and domain architectures, ADRs, and AI-enabled platforms.
- Assess GenAI and agentic solution designs for model security, data protection, prompt integrity, provenance, and safe orchestration of agents.
- Evaluate proposals for alignment with enterprise standards, regulatory expectations, and risk tolerance.
- Produce actionable review comments with traceable recommendations, covering both traditional and AI-driven architectures.
Authoring & Governance
- Author and maintain ADRs, patterns, and reference architectures, including those covering GenAI system integration, LLM usage, and multi-agent frameworks.
- Ensure architectural documentation expresses the problem space, options, controls, and trade-offs clearly and defensibly.
- Promote structured architectural reasoning supported by both human and GenAI-assisted analysis workflows.
GenAI & Agentic Security
- Define and assess controls for GenAI systems, including:
  - Model access, data boundary, and prompt injection defenses.
  - Guardrails for AI agents performing autonomous actions or multi-step reasoning.
  - Secure orchestration, isolation, and human oversight mechanisms.
- Evaluate the security of agent frameworks, LLM pipelines, and model-hosting platforms (e.g., Vertex AI, Azure OpenAI).
- Contribute to enterprise policy for responsible AI use and GenAI-assisted development.
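To make the expectations concrete, the kinds of controls listed above (prompt-injection screening and human-in-the-loop approval for high-risk agent actions) can be sketched as follows. This is a minimal, illustrative sketch only; the patterns, action names, and function names are hypothetical assumptions, not a recommended or complete control set.

```python
import re

# Hypothetical examples of injection phrasing to screen for; a real
# deployment would use layered, model-aware defenses, not a blocklist.
INJECTION_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"reveal (the )?system prompt",
    r"disregard your guardrails",
]

# Hypothetical set of agent actions deemed high-risk in this sketch.
HIGH_RISK_ACTIONS = {"delete_records", "transfer_funds", "modify_iam_policy"}

def screen_prompt(user_input: str) -> bool:
    """Return True if the input passes the simple injection-pattern screen."""
    lowered = user_input.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def authorize_action(action: str, approved_by_human: bool) -> bool:
    """Require explicit human approval before any high-risk agent action."""
    if action in HIGH_RISK_ACTIONS:
        return approved_by_human
    return True
```

In practice, candidates would be expected to reason about where such controls sit in an architecture (gateway, orchestrator, or model layer) and to document the trade-offs in an ADR rather than rely on any single mechanism.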
Core Competencies
- Enterprise security architecture (SABSA, TOGAF, NIST CSF).
- GenAI systems architecture, LLM lifecycle, and model governance.
- AI security patterns (threat modeling for LLMs, data leakage prevention, agent control).
- Strong authorship and analytical writing, with clear articulation of decisions and consequences.
- Familiarity with tools for architectural diagramming, review automation, and GenAI-assisted design (e.g., LangChain, OpenAI GPT, Guardrails AI).
- Broad experience across cloud, data, application, and API security domains.
Qualifications
- Bachelor’s or Master’s in Computer Science, Cybersecurity, or related field.
- 5+ years of experience in security design, including AI-related systems.
- Desirable certifications: CISSP, CCSP, SABSA, TOGAF, or AI-specific credentials (e.g., NIST AI RMF, MIT AI Ethics, Azure AI Engineer).
- Demonstrable experience with secure implementation of GenAI or autonomous agents in enterprise settings.