Offensive AI Security Engineer – Red Team

Salary

$154k - $211.75k

Min Experience

0 years

Location

Newark, CA

Job Type

full-time

About the role

We are seeking an Offensive AI Security Engineer to join our AI Red Team within the Security Engineering team. This role focuses on adversarial machine learning (ML), AI-driven offensive security, and red teaming AI systems to uncover vulnerabilities in AI-powered automotive security models and vehicle platforms. As part of Lucid's Offensive AI Security team, you will attack, manipulate, and exploit AI/ML models to identify real-world threats and weaknesses in AI-driven security solutions. You will develop AI-enhanced security automation tools, perform LLM-based penetration testing, and integrate AI/ML attack techniques into offensive security operations.

Key Responsibilities

AI Red Teaming & Adversarial Attack Development
- Design and execute adversarial attacks on AI-powered security systems.
- Conduct LLM-based penetration testing to uncover AI security flaws in vehicle cybersecurity applications.
- Identify attack surfaces in AI-driven perception systems (LiDAR, Radar, Cameras) and develop exploits against automotive AI models.

AI-Driven Offensive Security Automation
- Build tools and systems to analyze emerging AI threats in vehicle security and enterprise AI models.
- Develop AI-assisted security automation tools for:
  - Reconnaissance – Automate vulnerability discovery using LLMs and RAG-based intelligence gathering.
  - Exploitation – Use AI to generate attack payloads and automate offensive security operations.
  - Fuzzing – Enhance automated fuzz testing with AI-driven input mutation strategies.
  - Reverse Engineering – Apply LLM-assisted binary analysis for rapid security assessments.
- Build offensive and defensive AI security tools, integrating ML-driven automation into security assessments and exploit development.

ML-Driven Security Research & Exploitation
- Use ML to characterize a program, identifying security-critical functions and behavioral anomalies.
- Understand LLVM Intermediate Representation (LLVM IR) to analyze compiled AI/ML software for security weaknesses.
- Develop AI-driven techniques for:
  - Threat Detection – Use ML to automate malware detection and anomaly recognition.
  - Cryptographic Algorithm Classification – Identify cryptographic weaknesses in compiled binaries.
  - Function Recognition – Use AI models to automate binary function analysis and decompilation.
  - Vulnerability Discovery – Automate zero-day discovery using ML-based exploit prediction models.
- Evaluate ML security models for robustness, performance, and adversarial resilience.

Offensive AI Research & Red Teaming Strategy
- Research novel AI attack techniques and evaluate their impact on vehicle cybersecurity and enterprise AI security models.
- Collaborate with internal Red Teams, SOC analysts, and AI security researchers to refine AI-driven offensive security approaches.
- Stay ahead of emerging AI threats, tracking advancements in AI security, adversarial ML, and autonomous vehicle AI exploitation.

About the company

At Lucid, we set out to introduce the most captivating, luxury electric vehicles that elevate the human experience and transcend the perceived limitations of space, performance, and intelligence. Vehicles that are intuitive, liberating, and designed for the future of mobility. We plan to lead in this new era of luxury electric by returning to the fundamentals of great design – where every decision we make is in service of the individual and environment. Because when you are no longer bound by convention, you are free to define your own experience. Come work alongside some of the most accomplished minds in the industry. Beyond providing competitive salaries, we're providing a community for innovators who want to make an immediate and significant impact. If you are driven to create a better, more sustainable future, then this is the right place for you.

Skills

AI/ML exploitation
adversarial ML
AI-driven pentesting
LLM-based vulnerability analysis
deep learning systems
computer vision AI
prompt engineering attacks
model evasion techniques
penetration testing
AI fuzzing
red teaming AI-driven security applications
attacking AI models