How Abnormal Uses Cursor in an Enterprise Codebase

Learn how Abnormal has employed Cursor to optimize our enterprise codebase for LLMs, automate project rules, and build a security-first AI dev culture.
April 7, 2025

As AI tools continue to evolve, few have impacted our engineering workflows as deeply as Cursor. Cursor is a Visual Studio Code-based IDE powered by large language models (LLMs), capable of suggesting, writing, and refactoring code using natural language instructions. When paired with the right structure and safeguards, it helps us build faster, onboard quicker, and scale more securely.

But the real significance of AI coding isn’t just in productivity gains. It’s in the context of what we’re building, and who we’re building it against.

Abnormal’s mission is to stop the most advanced cyberattacks, many of which are increasingly built by adversaries using AI themselves. It’s AI vs. AI: a dynamic game where staying ahead means using generative tools not only for security detection but for building the underlying systems that power those detections.

Let’s take a brief look into how we’ve made Cursor work inside a large, complex, and security-critical codebase, and how we’re designing our systems and culture to make AI development safe, effective, and scalable.

Why We Use Cursor

The attack surface is evolving faster than ever. Threat actors are using LLMs to write more targeted payloads, automate phishing infrastructure, and generate novel bypass techniques in minutes rather than months.

To keep up, our engineering teams need to:

  • Ship detection logic and infrastructure updates quickly.

  • Maintain an extremely high bar for security and reliability.

  • Enable engineers, new or tenured, to make meaningful contributions fast.

AI-based coding helps us meet those needs. When it works well, it functions like a productivity amplifier: refactoring services, scaffolding internal tools, generating tests, and suggesting consistent, well-structured implementations based on past code.

But as anyone using AI to write “real” code knows, you can’t “vibe” your way to a secure and complex product or feature. Getting Cursor to work in these environments is non-trivial, and this is where we’ve invested in our internal tooling, architecture, and culture to make it work.

Common Pitfalls in AI Coding (and How We Avoid Them)

Across the industry, teams often run into the same problems when trying to use LLMs to write or modify production code:

  • Large “slop” PRs that are difficult to review

  • Limited ability to handle large-scale, multi-file changes

  • Insecure code and poor security practices

  • Degraded performance in large, complex codebases

These aren’t surprises. Most enterprise codebases weren’t built with AI in mind, and today’s models still are not intelligent enough to work with deeply nuanced business logic or complex architectures they haven’t seen before.

We’ve been able to mitigate many of these pitfalls by making two key investments: 1) an automated prompting system (via LLM-generated Cursor Rules), and 2) a shift in how we structure code and workflows to be more AI-friendly.

How We Made Cursor Work

Automated Cursor Rules

One of the largest causes of poor AI coding performance is a lack of context. To mitigate this, Cursor offers a project rules system that selectively provides additional documentation to the agent as it works in your codebase. This works well for smaller codebases, but it was infeasible to ask engineers to write high-quality rules for the thousands of files in our monorepo and to maintain those markdown rules alongside existing documentation. Instead, we built an LLM system that ingests existing documentation and related source code and generates well-formatted project rules automatically.

The system:

  1. Reads through existing code comments and engineer-written markdown files to extract cross-codebase context that could help the agent.

  2. Uses the Azure OpenAI API to convert this documentation into high-quality Cursor project rules. Our prompt contains heuristics for how the codebase is organized and detailed instructions for how to make the most of the Cursor rule syntax.

  3. Publishes the generated rules, which Cursor then fetches automatically as engineers work on relevant changes.
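
The three steps above can be sketched roughly as follows. This is an illustrative outline, not Abnormal's actual pipeline: the function names, the rule layout, and the prompt are assumptions, and the Azure OpenAI call is shown only in the commented usage. Cursor project rules are markdown files with YAML frontmatter that scope them to matching paths.

```python
# Hypothetical sketch of the rule-generation pipeline: gather docs,
# ask the model to distill them, and render a Cursor .mdc rule file.
from pathlib import Path


def gather_context(root: Path) -> str:
    """Collect engineer-written markdown docs under `root` as LLM input."""
    docs = []
    for md in sorted(root.rglob("*.md")):
        docs.append(f"## {md.relative_to(root)}\n{md.read_text()}")
    return "\n\n".join(docs)


def format_cursor_rule(description: str, globs: list[str], body: str) -> str:
    """Render a Cursor project rule: YAML frontmatter plus markdown guidance."""
    frontmatter = "\n".join([
        "---",
        f"description: {description}",
        f"globs: {','.join(globs)}",
        "alwaysApply: false",
        "---",
    ])
    return f"{frontmatter}\n{body.strip()}\n"


def generate_rule(client, deployment: str, context: str) -> str:
    """Ask an Azure OpenAI chat model to distill docs into concise rule text."""
    resp = client.chat.completions.create(
        model=deployment,
        messages=[
            {"role": "system",
             "content": "Convert these docs into a concise Cursor project rule."},
            {"role": "user", "content": context},
        ],
    )
    return resp.choices[0].message.content


# Usage (requires Azure credentials; paths and deployment name are hypothetical):
# from openai import AzureOpenAI
# client = AzureOpenAI(azure_endpoint="...", api_key="...", api_version="2024-06-01")
# body = generate_rule(client, "gpt-4o", gather_context(Path("services/billing")))
# Path(".cursor/rules/billing.mdc").write_text(
#     format_cursor_rule("Billing service conventions", ["services/billing/**"], body))
```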

These rules automatically:

  • Teach Cursor to use our preferred APIs and abstractions

  • Help generate secure, testable, reviewable code by default

  • Reduce the need for engineers to include detailed instruction prompts with the agent
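
For illustration, a generated rule might look like the following. The file path, globs, and guidance here are hypothetical, not an actual Abnormal rule; the frontmatter scopes the rule so Cursor only loads it when the agent touches matching files:

```markdown
---
description: Conventions for payments HTTP handlers
globs: services/payments/**/*.py
alwaysApply: false
---

- Use the shared `http_client` wrapper instead of calling `requests` directly;
  it adds auth headers and request tracing.
- Every handler must declare typed request/response models.
- New endpoints require a unit test alongside the handler module.
```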

Meeting AI in the Middle

Just as important as teaching Cursor is writing code that’s easier for AI to understand. To make our codebase more AI-friendly, we:

  • Write modular, self-contained components with clear input/output boundaries

  • Favor typed code and verbose, unambiguous naming

  • Avoid custom or overly abstracted internal terminology

  • Reduce file size and dependency sprawl to improve LLM context performance

  • Embed more docstrings, comments, and examples directly into the code
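
Concretely, the guidelines favor code like the following: a small, typed, self-contained unit with an unambiguous name, a clear input/output boundary, and a docstring example the model can pattern-match against. The domain logic here is invented purely for illustration.

```python
# Illustrative only: the style of code the guidelines above favor,
# not an actual Abnormal detection component.
from dataclasses import dataclass


@dataclass(frozen=True)
class PhishingVerdict:
    """Outcome of scoring a single message — a clear output boundary."""
    message_id: str
    is_suspicious: bool
    confidence: float  # 0.0 to 1.0


def score_sender_anomaly(observed_domain: str, expected_domain: str) -> float:
    """Return 1.0 when the sender domain deviates from the expected domain, else 0.0.

    Example:
        >>> score_sender_anomaly("abnorma1-security.com", "abnormalsecurity.com")
        1.0
    """
    return 0.0 if observed_domain == expected_domain else 1.0
```

Typed signatures, verbose names, and embedded examples give the model fewer assumptions to make, which is exactly the property the list above is optimizing for.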

While most of these are generally good engineering practices, they become critical when trying to reduce complex pull requests into just a few prompts. In short: the fewer assumptions the model has to make, the better it performs. And the better it performs, the faster we can develop secure products and features.

Security-First AI Development at Scale

We work in a space where every line of code matters. That means AI-assisted development needs to be not just fast, but safe.

Here’s how we maintain that balance:

  • Every AI-generated PR is reviewed by a human engineer. No exceptions.

  • Other tools, like GitHub code scanning, Copilot, and CodeRabbit AI, provide an additional automated layer of review.

  • Overly large or ambiguous AI-generated diffs are rejected to maintain reviewability.

  • Standard Abnormal-specific libraries include built-in auth and observability, so engineers (and Cursor) don’t reimplement critical logic manually.

  • AI productivity gains are used to build internal audit and security tools that wouldn’t otherwise be feasible.
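
One of the policies above, rejecting overly large diffs to preserve reviewability, is straightforward to automate in CI. The sketch below is a hedged example of one way to do it; the 500-line threshold and the `git diff --numstat` approach are assumptions, not Abnormal's actual tooling.

```python
# Hypothetical CI gate: fail the check when a branch's diff exceeds a
# reviewability budget, using `git diff --numstat` output.
import subprocess

MAX_CHANGED_LINES = 500  # assumed per-PR review budget


def changed_lines(numstat: str) -> int:
    """Sum added + deleted lines from `git diff --numstat` output.

    Binary files show '-' in the added/deleted columns and are skipped.
    """
    total = 0
    for line in numstat.strip().splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added != "-":  # skip binary files
            total += int(added) + int(deleted)
    return total


def diff_within_budget(base: str = "origin/main") -> bool:
    """Return True if the current branch's diff against `base` fits the budget."""
    out = subprocess.run(
        ["git", "diff", "--numstat", base],
        capture_output=True, text=True, check=True,
    ).stdout
    return changed_lines(out) <= MAX_CHANGED_LINES
```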

All of this allows us to move quickly while keeping a strict security posture.

Scaling Secure Products with AI

Some of our fastest launches this year, along with upcoming detection improvements and exciting new products, might not have been possible at the same pace without tools like Cursor.

These tools don’t replace engineers. But they do change what engineering looks like. Our teams now spend more time designing interfaces, structuring abstractions, and reviewing high-leverage code, and less time on boilerplate or slow, manual refactors.

This is just the beginning of how Abnormal is using AI, securely and at scale, to build the AI that fights AI-powered threats. We’ll continue to share what we’re learning in future posts in this series.

If these kinds of challenges interest you, whether that’s architecting secure AI systems, building internal platforms for speed, or designing the future of software development, we’re hiring.

See Abnormal’s AI capabilities in action by scheduling a demo today!
