• Employees from Nvidia, Microsoft, Uber, and Spotify held active accounts on Lovable
  • Vercel’s April 2026 breach came via a compromised third-party AI tool, not a flaw in Vercel’s own code
  • 24% of enterprise leaders already deploy agentic AI, yet most lack the governance infrastructure to manage it at production scale
  • Security teams can scan Docker images with Snyk before deploying on self-hosted platforms

Two platforms reported significant security incidents in the same month, April 2026. Both are used by enterprise teams. Both traced their failures back to architectural decisions at the platform level, not individual developer mistakes.

Lovable, a vibe coding builder used by engineers at major technology companies, faced an access control failure that left project data readable to unauthorized users. Vercel confirmed a separate breach rooted in a compromised third-party AI tool that gave attackers a path into customer environment variables.

For teams evaluating platforms for internal tools, both incidents raise the same question: how much of your security posture depends on platform architecture, and how much control does your team actually have? This post examines both incidents, the structural factors behind them, and what enterprise teams should look for differently when selecting platforms for production use.

The Lovable Breach: Direct Data Exposure Through Access Control Failures

In April 2026, Lovable faced a Broken Object Level Authorization (BOLA) vulnerability. The flaw allowed a user with a free account to access another user’s source code, database credentials, and AI chat history for projects created before November 2025. The vulnerability was reported to the company 48 days before researchers went public with their findings.

“According to The Next Web, the researcher disclosed the flaw to Lovable 48 days before going public, yet projects created before November 2025 remained accessible throughout that entire window.”
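
BOLA is a simple failure mode to reason about: an API handler looks an object up by its ID and returns it without ever checking that the caller owns it. The sketch below is not Lovable’s code; the project model, data, and function names are invented purely to illustrate the pattern and its fix.

```python
# Minimal, hypothetical sketch of the BOLA flaw class (not Lovable's actual code).
# The Project model, store, and function names are invented for illustration.

from dataclasses import dataclass

@dataclass
class Project:
    id: int
    owner_id: int
    secrets: str

PROJECTS = {1: Project(id=1, owner_id=42, secrets="DATABASE_URL=postgres://...")}

def get_project_vulnerable(requesting_user_id: int, project_id: int) -> Project:
    # BOLA: the handler looks the object up by ID and never checks ownership,
    # so any authenticated caller can fetch any other user's project.
    return PROJECTS[project_id]

def get_project_fixed(requesting_user_id: int, project_id: int) -> Project:
    project = PROJECTS.get(project_id)
    # Object-level authorization: the caller must own the object it asks for.
    if project is None or project.owner_id != requesting_user_id:
        raise PermissionError("not found or not authorized")
    return project

# A free-tier attacker (user 7) asking for user 42's project:
print(get_project_vulnerable(7, 1).secrets)   # leaks credentials
# get_project_fixed(7, 1) raises PermissionError instead
```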

What makes this particularly significant for enterprise security teams is the profile of affected accounts. Employees from Nvidia, Microsoft, Uber, and Spotify had active Lovable accounts. Any internal prototype, API key, or database schema built on the platform before the November 2025 cutoff could have been visible to any free-tier user during that period.

Lovable’s initial public response described the exposed data as “intentional behavior,” a position the company later revised with a partial apology.

This was not the platform’s first reported security issue. In 2025, CVE-2025-48757 documented insufficient row-level security policies across 170 applications and 303 endpoints, with reported CVSS scores ranging from 8.26 to 9.3.

That was a separate issue from the April 2026 BOLA flaw. Understanding both as part of a pattern is relevant context for any team evaluating the platform for production use. The April 2026 breach was the third significant incident within thirteen months.

Working on internal tools that involve customer records or sensitive credentials? See how ToolJet approaches access control at the platform framework level, as one example of how deterministic platforms handle this layer.

The Vercel Breach: A Supply Chain Attack via a Third-Party AI Tool


Vercel’s April 2026 incident started entirely outside Vercel’s own systems. A third-party AI tool called Context.ai, used by a Vercel employee, was compromised. The attacker used that access to take over the employee’s Google Workspace account, pivoted into their Vercel account, and from there into Vercel’s broader environment.

Environment variables classified as “non-sensitive” were decrypted and accessed. However, some of these variables still contained API keys and database credentials, highlighting gaps in how sensitive data was categorized.

“According to Trend Micro, the Vercel incident was an OAuth supply chain attack. The attacker compromised a third-party AI tool and used implicit OAuth trust to pivot into Vercel’s production environment and access customer credentials.”

Vercel’s own code was not directly at fault here. The attacker found a trusted adjacent service with OAuth access and used it as an entry point. This is supply chain risk in a specific, modern form.

As AI tools become integrated into developer workflows, they create OAuth trust relationships that aren’t always visible to the teams they affect. This isn’t about Vercel being uniquely vulnerable.

Any SaaS development platform extends your security perimeter to include every external service the vendor or its employees connect to. That’s a structural reality of the SaaS deployment model. Understanding this distinction is useful when evaluating self-hosted vs SaaS low-code security for production deployments.
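
One practical way to make those trust relationships visible is to inventory OAuth grants at the identity-provider level. The sketch below is a hedged example that assumes Google Workspace and its Admin SDK Directory API; the service-account file, admin address, and scopes are placeholders, and the same audit applies to whichever identity provider your organization runs.

```python
# Hedged sketch: enumerate which third-party apps hold OAuth grants in a Google
# Workspace domain via the Admin SDK Directory API. Assumes a delegated admin
# credential with the scopes below; adapt to your own identity provider.

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = [
    "https://www.googleapis.com/auth/admin.directory.user.readonly",
    "https://www.googleapis.com/auth/admin.directory.user.security",
]

creds = service_account.Credentials.from_service_account_file(
    "admin-sa.json", scopes=SCOPES, subject="admin@example.com"  # placeholders
)
directory = build("admin", "directory_v1", credentials=creds)

users = directory.users().list(customer="my_customer", maxResults=100).execute()
for user in users.get("users", []):
    email = user["primaryEmail"]
    tokens = directory.tokens().list(userKey=email).execute()
    for token in tokens.get("items", []):
        # displayText is the app the user granted access to; scopes show its reach.
        print(f'{email}: {token.get("displayText")} -> {token.get("scopes")}')
```

Even a flat report like this answers the question the Vercel incident raised: which external tools can act on behalf of which accounts, and with what scope.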

Did You Know? Reddit discussions following the Vercel incident show enterprise developers asking specifically which external AI tools their platforms and vendors connect to, and whether those integrations are visible or auditable from the customer side.

Why Agentic AI Builders Create Different Security Risks

Both incidents fit within a broader conversation about what it means to build production applications on agentic AI platforms. These tools reason toward outcomes and generate execution paths dynamically, rather than following explicitly defined, reviewable logic. That architectural difference affects how security is enforced across every application the platform produces.

“According to Gartner, 40% of enterprise apps will feature task-specific AI agents by 2026, up from less than 5% in 2025. For enterprise teams, the question is less whether to use AI and more how security is enforced when AI generates the application logic.”

In a deterministic low-code platform, authentication is enforced at the framework level and applies uniformly to every application. Access controls aren’t generated per app by AI. Audit logs are consistent because the platform defines what an action is, across every app it runs.
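
As a concrete illustration of that difference, here is a minimal, hypothetical sketch of framework-level enforcement: a single middleware layer the platform wraps around every application it serves, so authentication and audit capture never depend on what an individual generated app happens to include. The class, session store, and audit record format are invented for illustration rather than taken from any specific platform.

```python
# Hypothetical sketch of framework-level enforcement: one WSGI middleware that
# every app on the platform inherits, rather than per-app code an AI may or may
# not generate. Session store and audit record format are invented.

import json, time

class PlatformSecurityMiddleware:
    """Wraps every app the platform serves; individual apps cannot opt out."""

    def __init__(self, app, session_store, audit_log):
        self.app = app
        self.sessions = session_store    # platform-owned session validation
        self.audit = audit_log           # platform-defined audit records

    def __call__(self, environ, start_response):
        token = environ.get("HTTP_AUTHORIZATION", "")
        user = self.sessions.get(token)
        if user is None:
            start_response("401 Unauthorized", [("Content-Type", "text/plain")])
            return [b"authentication required"]
        # The platform, not the generated app, decides what an auditable action is.
        self.audit.append(json.dumps({
            "ts": time.time(), "user": user, "path": environ.get("PATH_INFO"),
        }))
        environ["platform.user"] = user
        return self.app(environ, start_response)

# Every deployed app is wrapped the same way:
# wrapped_app = PlatformSecurityMiddleware(generated_app, sessions, audit_log)
```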

In an agentic builder, those controls depend entirely on what the AI includes when it generates each application. A security audit of Lovable-generated apps by AI Thinker Lab identified 16 vulnerabilities: 5 critical, 7 high, and 4 medium.

The flaws included absent CSRF tokens, API keys in client-side JavaScript, and broken access controls between users. None of these are exotic vulnerabilities. All fall within standard OWASP categories that a pre-production security review would typically surface.

Their consistent appearance points to a structural pattern: AI builders optimize for producing functional output, and security completeness is often not in the prompt.

This doesn’t mean agentic AI tools have no place in enterprise workflows. For prototyping and internal experimentation, the speed advantage is real. The question becomes more difficult when those tools are brought into production environments handling regulated data or credentials.

Evaluating platforms for a regulated environment? Review the enterprise readiness criteria that security and procurement teams apply when assessing low-code platforms.

What These Incidents Mean for Enterprise Platform Decisions

The risk profile here depends significantly on how these tools are being used and what data they handle. For developer prototyping with non-sensitive data, the risk is manageable. For production workloads touching financial data, healthcare records, or regulated systems, the calculus is different.

“According to Deloitte, 66% of organizations report productivity gains from AI, yet only 20% have achieved revenue growth. The gap frequently comes down to governance and execution discipline, not AI capability or speed.”

Enterprise teams selecting a production platform should ask a consistent set of questions before committing:

  • Where is security enforced? At the framework level, applying uniformly to every application, or at the application generation level, dependent on the AI producing correct output each time?
  • What is the supply chain trust model? Which external services have OAuth access to the deployment environment, and is that visible to your security team?
  • Is the deployment independently auditable? Does the team have access to meaningful logs if something goes wrong?

These questions don’t point to a single platform as the only right answer. But they do separate platforms designed with governance as a first principle from platforms designed primarily for speed.

The low-code adoption data heading into 2026 shows deployment accelerating quickly. That makes these evaluation criteria more important to get right before the first production deployment, not after.

Did You Know? r/EnterpriseTech threads on AI platform adoption consistently surface the same tension. Teams want the speed of AI tooling but need the auditability their compliance requirements demand. Platforms that can offer both tend to win enterprise procurement decisions.

How Self-Hosting Reduces Supply Chain Risk

The Vercel breach makes a practical case for one specific architectural choice: controlling your own deployment environment. When a team self-hosts its tooling platform using Docker or Kubernetes, it owns every dependency in the pipeline. Self-hosting removes the vendor-managed OAuth trust chains that enabled the Vercel breach, making that class of vendor-side exposure far less likely, though teams still need to govern any third-party tools they connect to their own environment.

“According to KPMG, the challenge for enterprises in 2026 isn’t deploying AI. It’s orchestrating it across systems at scale without inadvertently expanding the attack surface in the process.”

Self-hosting also enables something SaaS deployment doesn’t: pre-deployment security verification. When a team manages its own Docker-based deployment, it can run a Snyk container scan before deploying to any environment.

That scan surfaces known CVEs in base image layers, outdated packages with published exploits, secrets accidentally baked into image layers, and license compliance issues across the dependency tree. The team gets a verification record. The security team reviews it. Nothing moves to production without that gate being cleared.
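
As one hedged example of that gate in CI, the sketch below wraps the Snyk CLI’s snyk container test command and blocks the deploy when findings at or above a chosen severity are reported. The image name, report path, and threshold are placeholders, and the exact JSON fields can vary across Snyk CLI versions.

```python
# Hedged sketch of a pre-deploy gate around the Snyk CLI. Assumes the snyk CLI
# is installed and authenticated; image name and report path are placeholders.

import json, subprocess, sys

IMAGE = "registry.example.com/internal-tools/tooljet:latest"

result = subprocess.run(
    ["snyk", "container", "test", IMAGE, "--severity-threshold=high", "--json"],
    capture_output=True, text=True,
)

# Keep the raw JSON report as the verification record for the security team.
with open("snyk-report.json", "w") as fh:
    fh.write(result.stdout)

# Snyk exits non-zero when issues at or above the threshold are found,
# so a non-zero code blocks the deploy step in CI.
if result.returncode != 0:
    try:
        report = json.loads(result.stdout)
        print(f"Blocked: {len(report.get('vulnerabilities', []))} findings at high or above")
    except json.JSONDecodeError:
        print("Blocked: scan did not complete cleanly")
    sys.exit(1)

print("Scan clean at the configured threshold; deploy may proceed")
```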

For compliance-driven environments, the combination of pre-deploy scanning and runtime audit logs gives teams the documentation chain they need when questions arise: what was deployed, when, and with what security status at deployment time, and who did what inside the platform afterward.
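
The deploy-side half of that chain can be as simple as one record per deployment tying the image digest to its scan result and approval. The sketch below is hypothetical; the field names and JSON Lines log are placeholders, and the runtime half of the chain comes from the platform’s own audit logs.

```python
# Hypothetical deploy record linking what was deployed to its scan result and
# approval at deployment time. Field names and paths are invented placeholders.

import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class DeployRecord:
    image_digest: str        # what was deployed, pinned by digest rather than tag
    scan_report_path: str    # the Snyk report generated before this deploy
    scan_passed: bool        # security status at deployment time
    approved_by: str         # who cleared the gate
    deployed_at: str         # when

record = DeployRecord(
    image_digest="sha256:3f1c9d...",
    scan_report_path="snyk-report.json",
    scan_passed=True,
    approved_by="security-team@example.com",
    deployed_at=datetime.now(timezone.utc).isoformat(),
)

# Appending one JSON line per deploy gives auditors a replayable history.
with open("deploy-log.jsonl", "a") as fh:
    fh.write(json.dumps(asdict(record)) + "\n")
```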

That chain is genuinely difficult to construct on a SaaS platform where the vendor controls the deployment lifecycle entirely. Platforms like ToolJet that support Docker and Kubernetes self-hosting give teams that control back without requiring them to build tooling infrastructure from scratch.

Looking at self-hosted deployment options for your internal tooling stack? ToolJet’s deployment documentation covers Docker, Kubernetes, AWS, GCP, and Azure, including air-gapped environments for stricter isolation requirements.

How Deterministic Low-Code Platforms Handle Security Differently

The differences between agentic AI builders and deterministic low-code platforms are worth laying out clearly for teams making procurement decisions. These aren’t marketing distinctions. They reflect real structural differences in how security is enforced and who controls the infrastructure.

“According to EY, 24% of enterprise leaders are already deploying agentic AI in their organizations, yet most haven’t built the governance infrastructure needed to manage autonomous AI decisions at production scale.”

Security Dimension | Agentic AI Builders (e.g. Lovable) | Self-Hosted Low-Code (e.g. ToolJet)
--- | --- | ---
Authentication Enforcement | Generated per app by AI | Implemented at framework level, uniform across apps
Access Control | Dependent on AI output each time | Role-based, configured at workspace and app level
Audit Logs | Typically unavailable | Available on Enterprise plan, consistent capture
Deployment Model | SaaS, vendor-managed | Self-hosted options via Docker, Kubernetes, cloud
Supply Chain Trust | External AI tools in OAuth pipeline | Controllable when self-hosted
CSRF and Input Validation | Frequently absent in generated apps | Framework-enforced, not generated per app
Pre-Deploy Scanning | Limited visibility into vendor-managed infrastructure | Feasible via Docker image scanning with Snyk
Codebase Visibility | Closed | Open source under AGPL v3
Incident Response Timeline | Vendor-managed | Internal team, internal timeline

ToolJet is one example of a deterministic enterprise low-code platform that follows this architecture. Authentication, RBAC, and session handling are enforced at the framework level rather than generated by AI per application.

That doesn’t make it immune to all security challenges (no platform is), but it does mean the specific failure modes seen in the Lovable incident are structurally less likely.

For teams in regulated industries, ToolJet also offers HIPAA-aligned deployment, SOC 2 Type II certification, and an open source codebase under AGPL v3 that teams can review directly. The Forrester Wave analysis of low-code platforms covers how governance capabilities are increasingly weighted in enterprise evaluations, a trend the Lovable and Vercel incidents are likely to accelerate.

Assessing how ToolJet compares to other platforms on security and deployment model? This platform comparison covers the options enterprise teams most commonly evaluate alongside ToolJet.

What Vibe Coding Misses and What ToolJet’s Infrastructure Gives You Instead

The incidents at Lovable and Vercel both trace back to the same underlying condition: teams had no meaningful control over where their data went or who could access their infrastructure. Vibe coding tools are built around speed as the primary value proposition, and infrastructure control is simply not part of the model. That works for many teams and many contexts, but for organizations operating under data residency requirements, it rules the platform out before the conversation even starts.
The two diagrams below show what that control looks like inside ToolJet’s self-hosted deployment, and how the architecture changes between the default and enterprise configurations.

ToolJet AI Cloud: Default Self-Hosted Configuration

[Diagram: default self-hosted configuration, with prompts routed through ToolJet’s AI Cloud]

In the default self-hosted setup, your team prompts AI through the App Builder, but those prompts exit your environment and route through ToolJet’s AI Cloud, which processes them using ToolJet’s Anthropic API key. It’s a functional starting point for teams with lower data sensitivity requirements, but full isolation is not in place.

ToolJet AI Enterprise: Fully Isolated Configuration

[Diagram: enterprise configuration, with the ToolJet AI Server running inside your own infrastructure]

The enterprise configuration changes the architecture entirely. The ToolJet AI Server moves inside your infrastructure, alongside the App Builder. Your prompts never leave your network. The AI connects directly to Anthropic Claude using your own API key, with ToolJet’s servers removed from the request path at every step.

Here is what the enterprise configuration gives you:

  • No prompts leave your network
  • ToolJet’s servers not in the request path
  • AI processing runs on your own infrastructure
  • You use your own Anthropic API key
  • Complete data isolation within your environment

For teams in healthcare, finance, or any regulated environment where data residency is a hard requirement, this isn’t a nice-to-have. It’s the only configuration that meets the bar. Vibe coding platforms don’t offer an equivalent. ToolJet’s self-hosted enterprise mode was built for exactly the teams those platforms can’t serve.

How to Choose a Secure AI Platform for Enterprise Use

The Lovable and Vercel incidents are worth examining not to dismiss agentic AI tools, but to understand where they fit in the enterprise stack and where they introduce risk that requires deliberate management.

Lovable’s breach showed what happens when security controls depend on AI generating them correctly for each application. Vercel’s breach showed what happens when third-party AI tool integrations create unmonitored trust relationships in a deployment pipeline.

Both risks are addressable with the right architectural choices. Self-hosted deployment options reduce the supply chain exposure the Vercel breach demonstrated. Framework-level security enforcement removes the dependency on AI-generated controls that contributed to Lovable’s repeated incidents.

Built-in SSO, granular RBAC, and enterprise-grade audit logging give teams the documentation trail that compliance environments require.

For teams where those requirements are real, platform selection comes down to architecture, not marketing claims. ToolJet is one platform built around these principles, and it’s worth evaluating against your own requirements. The more important step is knowing which questions to ask before committing to any platform. The incidents covered in this post are useful guides for exactly that conversation.