As AI takes over more of the work of modern programmers, a new wave of vulnerabilities has emerged alongside it. Security researcher Dor Zvi and his team at RedAccess found more than 5,000 vibe-coded web applications, built with tools like Lovable and Replit, that had virtually no security or authentication.
These apps allowed anyone to access sensitive data, including medical information, financial details, and corporate documents. The lack of robust security measures means that organisations are inadvertently leaking private data through these platforms.
Given how widely AI-driven coding tools are now used, the researchers were surprised by how easily they found these vulnerable applications. Zvi’s findings point to a critical oversight in how these tools handle default configurations: apps can be made publicly accessible without warning their creators or giving them an obvious way to change the setting.
Replit and Lovable pushed back on the researchers’ claims, arguing that privacy settings are user-configurable options that individuals are responsible for managing. That stance, however, does not absolve the platforms of responsibility for shipping secure defaults.
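The class of flaw the researchers describe comes down to endpoints that serve records without ever checking who is asking. The sketch below is purely illustrative (the record store, tokens, and function names are hypothetical, not taken from any real Lovable or Replit app); it contrasts that pattern with a default-deny check:

```python
# Hypothetical data an app might hold; values are placeholders.
RECORDS = {"patient-7": {"diagnosis": "redacted", "ssn": "***-**-1234"}}
VALID_TOKENS = {"token-for-dr-smith"}  # stand-in for a real auth system

def get_record_insecure(record_id):
    # The vibe-coded pattern: no authentication check at all,
    # so anyone who guesses or enumerates an ID gets the data.
    return RECORDS.get(record_id)

def get_record_secure(record_id, token=None):
    # Default-deny: refuse the request unless the caller presents
    # a known credential. A real web app would return HTTP 401 here.
    if token not in VALID_TOKENS:
        return None
    return RECORDS.get(record_id)
```

The difference is one conditional, which is exactly why secure-by-default scaffolding matters: a tool that generates the first version leaves the check to users who may never know it is missing.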
The incident raises serious questions about the state of web security in an age where AI is increasingly automating complex tasks. As more businesses rely on these tools to develop applications quickly and easily, it’s crucial that they also invest in robust security practices to prevent data breaches and privacy violations.