Unsafe Code Patterns (eval, dangerouslySetInnerHTML)
Auth Safety · AUTH-14 · Priority: P0
Why It Matters
eval() executes arbitrary JavaScript at runtime. dangerouslySetInnerHTML renders raw HTML into the DOM without sanitization. Both patterns create Cross-Site Scripting (XSS) vulnerabilities — allowing attackers to inject and execute malicious code in your users' browsers.
AI code generators use these patterns because they produce working code quickly — eval() for dynamic expressions, dangerouslySetInnerHTML for rendering markdown or rich text. The AI optimizes for "it renders correctly," not "it's safe against injection."
CodeRabbit's analysis of 470 GitHub PRs found AI-generated code has 1.91x more insecure object references than human-written code, with injection patterns being a significant contributor.
Priority: P0 — XSS enables session theft, data exfiltration, and account takeover.
Affected Stack: Next.js, React, any JavaScript framework
The Problem
eval()
// ❌ AI-generated dynamic calculation
const result = eval(userInput); // User controls what gets executed!
If userInput comes from a URL parameter, form field, or database record that a user can influence, an attacker can execute arbitrary code in the victim's browser.
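A minimal sketch of why this matters, using a hypothetical payload string: eval() runs any JavaScript it is handed, not just the arithmetic the developer expected.

```typescript
// The developer expects a harmless arithmetic string…
const expected = '2 + 2';
const sum = eval(expected); // 4

// …but eval() runs *any* JavaScript, so a crafted "expression" is a payload.
// (Hypothetical payload; in a browser it could read document.cookie instead.)
const payload = "(() => 'arbitrary code executed')()";
const result = eval(payload); // 'arbitrary code executed'
```

The same call that evaluates math will just as happily call fetch() with the user's session cookie.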
dangerouslySetInnerHTML
// ❌ AI-generated markdown renderer
function BlogPost({ content }: { content: string }) {
  return <div dangerouslySetInnerHTML={{ __html: content }} />;
}
If content contains <script>alert(document.cookie)</script> or an <img onerror="..."> tag, it executes in the user's browser.
The Fix
Replace eval() with safe alternatives
// ✅ For math expressions — use a safe parser
import { evaluate } from 'mathjs';
const result = evaluate(userInput); // mathjs parses math expressions only — no JS execution
// ✅ For JSON parsing
const data = JSON.parse(userInput); // Only parses JSON, not executable code
// ✅ For dynamic property access
const value = obj[key]; // Instead of eval(`obj.${key}`)
Sanitize HTML before rendering
// ✅ Sanitize with DOMPurify before dangerouslySetInnerHTML
import DOMPurify from 'dompurify';
function BlogPost({ content }: { content: string }) {
  const clean = DOMPurify.sanitize(content);
  return <div dangerouslySetInnerHTML={{ __html: clean }} />;
}
// ✅ Even better: use a React markdown library that doesn't use innerHTML
import ReactMarkdown from 'react-markdown';
function BlogPost({ content }: { content: string }) {
  return <ReactMarkdown>{content}</ReactMarkdown>;
}
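When the content is plain text rather than HTML, you don't need innerHTML at all — React escapes text children like {content} automatically. Outside React, a minimal entity-escape sketch (not a full sanitizer) looks like this:

```typescript
// Escape the characters HTML treats specially so text renders inert.
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, '&amp;') // must run first, or later entities get double-escaped
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}
```

escapeHtml('<script>alert(1)</script>') yields '&lt;script&gt;alert(1)&lt;/script&gt;', which a browser displays as text rather than executing.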
Key rules:
- Never use eval(), Function(), or setTimeout(string) with user-controlled input
- Always sanitize HTML with DOMPurify before dangerouslySetInnerHTML
- Prefer React markdown renderers over raw HTML injection
- Set Content-Security-Policy headers to block inline scripts as defense-in-depth
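As a sketch of the defense-in-depth rule, a Content-Security-Policy header can be set from next.config.js via the headers() option; the policy value below is an illustrative starting point, not a drop-in — adjust it for your own script and asset origins before deploying.

```javascript
// Hypothetical next.config.js sketch: send a CSP header on every route.
// script-src 'self' blocks inline <script> blocks and event-handler attributes.
module.exports = {
  async headers() {
    return [
      {
        source: '/(.*)',
        headers: [
          {
            key: 'Content-Security-Policy',
            value: "default-src 'self'; script-src 'self'; object-src 'none'",
          },
        ],
      },
    ];
  },
};
```

With this policy, even HTML that slips past sanitization cannot execute injected inline scripts, because the browser refuses to run them.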
References
- OWASP: Cross-Site Scripting (XSS)
- MDN: eval() — Never use eval!
- DOMPurify
- CWE-79: Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')
Related Checks
- httpOnly Session Cookies — AUTH-07
- CSRF & Mutation Safety — AUTH-22, ADM-19