Lovable SEO Review (May 2026): Does Lovable's New Prerendering and SSR Actually Work?
On May 13, 2026, Lovable shipped a full SEO and AI search suite. It bundles six things into the builder: server-side rendering for new apps, automatic prerendering for every existing CSR app, per-page social previews, AI search optimization, Semrush data inside the builder chat, and an on-demand SEO review with one-click fixes. The announcement is at lovable.dev/seo-aeo.
The April 20 silent SSR rollout (covered in our previous post) only helped new projects. Today's launch announces a fix for the millions of existing CSR projects through automatic prerendering at the platform level. Whether that fix is actually live yet is the question we spent the day testing.
Here is what works, what does not, and what it means if you are running a Lovable app today.
Update (May 13, 2026): On our first pass of testing across multiple legacy React + Vite Lovable apps, none of them appear to be prerendered for crawlers. Requesting each app with a Googlebot user agent (`curl -A "Googlebot/2.1 (+http://www.google.com/bot.html)"`) and cross-checking with our Google Sees crawl simulator and the ToTheWeb search engine simulator returned the same empty React shell that a regular browser would render before JavaScript runs. This contradicts Lovable's announcement that prerendering is live for all existing apps with no opt-in. We are still investigating: it may be a gradual rollout, a per-plan rollout, or an issue specific to the apps we tested. We will update this post with a full evaluation of Lovable's prerendering and whether it is reliable enough for production SEO. Treat the rest of this post as a review of what Lovable says shipped, with our current testing notes inline. If you are relying on Lovable's prerendering today, test your own deployed app with a crawl simulator or a Googlebot user agent before you turn off any third-party prerendering layer.
Update (May 15, 2026): A member of Lovable's product team clarified on Discord how the prerendering actually works:
> Pre-rendering is only served to verified bots (from Google, Bing etc) as we detect them. We do not rely solely on the User-Agent string for detection but use a number of other validation methods as well. That means that other agents e.g. a third party SEO scanner won't see it. The pre-rendering happens as we detect the agent meaning that it will include dynamic content as well.
>
> — Anders, Lovable product team
This explains why our May 13 curl-with-Googlebot-UA tests returned the empty React shell. Lovable is doing bot detection with IP validation, almost certainly with a reverse DNS lookup against Google's and Bing's published bot IP ranges. Spoofing a user agent from outside that IP space will never see the prerendered HTML. That includes our Google Sees crawl simulator, ToTheWeb, and every other third-party SEO scanner.
This is a structural problem for verifying the claim. Outside testers cannot directly check whether Lovable serves real HTML to Googlebot or Bingbot, because the test would need to originate from Google's or Bing's own infrastructure. The signals available to us are (1) whether prerendering works for bots Lovable validates but that come from testable platforms (AI search crawlers, social previewers), and (2) indirect signals like Google Search Console indexing rates over the next several weeks.
We tested the first set across every platform we could probe directly. Running tally:
| Platform | Result | Notes |
|---|---|---|
| Google | Untestable from outside | IP validation + reverse DNS. Cannot be independently verified by third-party scanners. |
| Bing | Untestable from outside | Same as Google. |
| Claude | Not prerendered | Prompting Claude with "Fetch <url> directly and summarize the content of the page" returned no usable content from the page. |
| ChatGPT | Not prerendered | Same prompt returned no on-page content. ChatGPT fell back to non-relevant external citations. |
| LinkedIn | Partial | OG title rendered correctly. OG image incorrect. |
| Facebook | Untestable (Facebook bug) | Facebook's Sharing Debugger returns 403 on the Lovable URL, which appears to be a known issue on Facebook's side rather than a Lovable problem. Cannot evaluate the OG preview through Facebook's official tool until that is resolved. |
| X | Partial | Correct meta title and description rendered. OG image incorrect. Same failure mode as LinkedIn. |
| Semrush Site Audit | Partial | Crawl returns the correct meta titles, suggesting Semrush's audit crawler reaches the prerendered output (or at least the server-rendered head). Notable given Lovable's statement that "third party SEO scanner won't see it" — Semrush may be on Lovable's verified list given their existing partnership. |
What this means right now: Lovable's prerendering may work for Google and Bing on static routes. We cannot verify Google or Bing directly, but on every platform where we can observe Lovable's output indirectly (Semrush Site Audit, LinkedIn, X), only static routes return correct meta titles and descriptions. Dynamic content routes like /blog/[slug] do not appear to be prerendered at all. That contradicts Anders's explicit statement that "the pre-rendering happens as we detect the agent meaning that it will include dynamic content as well." It is also a much harder problem to solve: snapshot generation on-demand for arbitrary dynamic routes is meaningfully different from generating snapshots for a known set of static routes. On top of that, the prerender is not being served to AI crawlers like Claude and ChatGPT, both of which Lovable explicitly markets as in scope for its "AI search visibility" feature. Two contradictions of the public framing in 48 hours.
Compounding the verification problem, Lovable does not give customers any way to view what their own prerendered snapshots actually look like. Even users inside the Lovable builder cannot inspect the HTML that Googlebot would receive. They are asked to trust that the snapshot is fresh, accurate, and complete with no way to confirm. Hado SEO, by contrast, lets every customer pull up the cached snapshot for any URL on their site, exactly as the verified bot receives it. That is the difference between "we promise it works" and "here is what we served, with a timestamp." We will update this table as the remaining platforms come back.
TL;DR
- Prerendering for legacy apps did not work in our initial tests. Lovable announced it as live for every existing app on the previous stack, but multiple legacy React + Vite apps we tested still returned the empty React shell when requested with a Googlebot user agent. Investigation ongoing. Test your own app with a crawl simulator before assuming you are covered.
- The default project template for new apps is officially TanStack Start.
- The SEO review tool is solid for a checklist pass. Metadata, OG, headings, alt text, canonical, robots, sitemap. Free on all plans.
- Semrush in the builder chat is genuinely useful for ideation. Less useful as a substitute for a real keyword workflow. Free through August 15, 2026.
- "AI search visibility" is partly real, partly marketing. Structured markdown, semantic HTML, and an auto-generated llms.txt are real. Whether any of it actually moves AI citation rates is the open question, and recent SERanking research suggests at least the llms.txt half does not.
- For new single-app Lovable projects (SSR-confirmed), the built-in suite now covers the basics. Indexability, OG tags, and structured AI output ship by default. Legacy CSR apps are a separate story until Lovable's announced prerender actually goes live.
- Crawl observability, edge caching, redirect management, and multi-platform support are still gaps Lovable does not fill. That is the narrower set of problems Hado SEO solves now.
What Lovable Just Announced
Six features, grouped by what they actually do.
Rendering
SSR for new apps (TanStack Start). Pages are rendered as HTML on the server before the browser receives them. This was launched on April 20 and is reaffirmed in today's announcement. See our previous post on the SSR rollout for the technical detail.
Prerendering for legacy apps (announced). This is the headline of the launch. Lovable says every existing app built on the previous React + Tailwind stack is now served through an automatic prerendering layer, with static HTML snapshots delivered to crawlers in place of an empty <div id="root"></div>. No opt-in, no code change, no migration. Whether this is actually live for any given legacy app is something readers should verify on their own deployed apps. See the testing section below for what we found.
Tooling
On-demand SEO review. A built-in review tool that scans your app and reports on performance, metadata, heading structure, image alt text, canonical tags, OG tags, robots, and sitemap. Free on all plans, including the free tier. Applying the recommended fixes uses normal build credits.
One-click fixes. Most of the issues surfaced by the review can be fixed by the agent in one click. Same builder, same credits.
Semrush data in the builder chat. Real keyword rankings, traffic insights, and competitor data inside the chat, no Semrush account required. Lovable says this uses normal build credits and is free through August 15, 2026.
AI Search
Structured markdown, semantic HTML, structured data. Lovable says new output is designed to be readable by ChatGPT, Claude, Perplexity, and similar AI search tools. Per-page Open Graph tags are part of this push: social previews are now unique per page on LinkedIn, Slack, WhatsApp, and X.
Does the Prerendering Actually Work?
This is the question that determines whether today's launch matters. The promise is that every existing CSR Lovable app, the kind that has been notoriously difficult to get indexed, now serves real HTML to crawlers automatically.
We tested it on secure-landlord-lease.lovable.app, one of our legacy React + Vite Lovable apps, along with several other apps built on the previous stack.
How to Actually Test Prerendering (and Why View Source Is Not Enough)
Before the test results, a methodology note. Most prerendering setups, including the kind Lovable describes, use bot detection: the server inspects the incoming User-Agent header, serves a static HTML snapshot to known crawler user agents, and serves the normal CSR shell to everyone else. That means opening your app in Chrome and clicking View Source tells you almost nothing. You will see the same empty <div id="root"></div> whether your prerender is broken or perfectly configured, because your browser is not Googlebot.
The valid tests are the ones that actually look like a crawler:
```shell
# Request as Googlebot
curl -A "Googlebot/2.1 (+http://www.google.com/bot.html)" https://secure-landlord-lease.lovable.app

# Request as a normal browser, for comparison
curl -A "Mozilla/5.0" https://secure-landlord-lease.lovable.app
```
If prerendering is working and uses user-agent detection only, the Googlebot response will contain your rendered HTML (headings, body text, meta tags, OG tags) and the Mozilla response will return the React shell. If both responses are identical empty shells, your app is either not prerendered or the prerender uses stricter bot validation than user-agent alone.
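If you are scripting this comparison, the diff can be reduced to a handful of head signals. A minimal sketch (the helper names are ours, and the regexes are only good enough for a quick spot check, not a substitute for a real HTML parser):

```python
import re

def extract_head_signals(html: str) -> dict:
    """Pull the signals a prerender should populate: <title>, the meta
    description, and og:title. An empty CSR shell typically has only a
    static placeholder title and nothing else."""
    def first(pattern: str):
        m = re.search(pattern, html, re.IGNORECASE | re.DOTALL)
        return m.group(1).strip() if m else None

    return {
        "title": first(r"<title[^>]*>(.*?)</title>"),
        "description": first(r'<meta\s+name=["\']description["\']\s+content=["\'](.*?)["\']'),
        "og_title": first(r'<meta\s+property=["\']og:title["\']\s+content=["\'](.*?)["\']'),
    }

def responses_differ(bot_html: str, browser_html: str) -> bool:
    """True when the bot response carries head signals the browser
    response lacks, i.e. user-agent-gated prerendering is active."""
    return extract_head_signals(bot_html) != extract_head_signals(browser_html)
```

Feed it the bodies of the two curl responses above: identical signal sets mean no user-agent-based prerender is in play for your request.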
You can run the same test in a browser-based crawl simulator instead of the command line. The options we use:
- Hado SEO Google Sees. Renders the page as Googlebot sees it, with the same user-agent and request shape.
- ToTheWeb Search Engine Simulator. Independent third-party simulator that mirrors what a search crawler retrieves.
Important caveat for Lovable specifically. As of the May 15 update from Lovable's product team (above), Lovable's prerendering validates more than the user-agent string. It checks the requester's IP against verified bot ranges, almost certainly via reverse DNS lookup. That means the user-agent-spoofing tests below will return the empty React shell for Lovable apps even if the prerender is working correctly for real Googlebot and Bingbot traffic. The tests are still valuable for other platforms (Replit, Bolt.new, Base44, self-hosted Vite) where prerendering setups typically rely on user-agent alone, and for testing whether Lovable serves prerendered HTML to AI crawlers and social previewers that come from testable infrastructure.
What Our Initial Tests Show
We ran the Googlebot user-agent test and the two simulators against multiple existing Lovable apps built on the previous React + Vite stack. Every one of them returned the empty React shell to the simulated crawler, with no rendered content, no headings, and no meta tags in the response body.
The response shape was consistent across the apps we checked. An empty <div id="root"></div>, the static <title> placeholder from the build, no meta description in the head, no body text, no Open Graph tags pointing at the per-page content, and no JSON-LD. In other words, the same payload that a normal browser receives before JavaScript runs.
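For readers who have not stared at one of these, the shell payload described above looks roughly like this (illustrative, not the exact build output of any app we tested):

```html
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <title>lovable-app</title> <!-- static placeholder from the build -->
  </head>
  <body>
    <div id="root"></div> <!-- content only appears after JavaScript runs -->
    <script type="module" src="/assets/index-abc123.js"></script>
  </body>
</html>
```

Everything a crawler would want (headings, body text, meta description, OG tags, JSON-LD) is absent until the bundle executes.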
Verdict on the initial test: Lovable's claim that prerendering is live for every legacy app does not match what crawl simulators are returning for the apps we tested. Investigation continues.
Universal Rendering or Bot-Sniffing?
The classic prerendering implementation question: does Lovable serve the prerendered HTML to every visitor, or only to detected bot user-agents (also known as dynamic rendering)? Dynamic rendering was officially deprecated by Google in 2023 but still works for most use cases. Universal rendering is cleaner.
The May 15 clarification from Lovable's product team answers this definitively: Lovable is doing dynamic rendering with multi-signal bot detection. They serve prerendered HTML only to bots they actively verify, using user-agent matching plus IP validation (likely reverse DNS against published bot IP ranges). Everyone else gets the React shell. That is why our user-agent-spoofing tests below return the same payload regardless of the agent string.
```shell
# Normal browser user-agent
curl -A "Mozilla/5.0" https://secure-landlord-lease.lovable.app

# Googlebot user-agent (will still return the React shell because the request is not from Google's IP space)
curl -A "Googlebot/2.1 (+http://www.google.com/bot.html)" https://secure-landlord-lease.lovable.app
```
Both responses return the same empty React shell. With dynamic rendering plus IP validation in play, that is the expected outcome for any request originating from outside a verified bot's IP range. It tells us the prerender is gated, not that it is broken. Whether it actually serves real HTML to Googlebot itself remains untestable from outside.
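For reference, the forward-confirmed reverse DNS check that Google documents for verifying real Googlebot traffic looks like the sketch below; Lovable's server-side validation is presumably something similar. This is our own illustration, not Lovable's implementation (the function names are ours, and `verify_bot_ip` needs live DNS to do anything):

```python
import socket

GOOGLE_SUFFIXES = (".googlebot.com", ".google.com")
BING_SUFFIXES = (".search.msn.com",)

def hostname_is_verified_bot(hostname: str) -> bool:
    """Pure suffix check on the PTR hostname, separated out so it can
    be tested without network access."""
    host = hostname.rstrip(".").lower()
    return host.endswith(GOOGLE_SUFFIXES) or host.endswith(BING_SUFFIXES)

def verify_bot_ip(ip: str) -> bool:
    """Forward-confirmed reverse DNS: PTR lookup on the IP, suffix
    check on the hostname, then a forward lookup that must resolve
    back to the same IP. Returns False on any lookup failure."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)            # reverse (PTR) lookup
        if not hostname_is_verified_bot(hostname):
            return False
        forward_ips = socket.gethostbyname_ex(hostname)[2]   # forward confirmation
        return ip in forward_ips
    except OSError:
        return False
```

The suffix check alone is why spoofing is pointless: an attacker can fake the User-Agent string, but not the PTR record that Google controls for its own IP space.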
Cache Freshness
Snapshot-based prerendering has one durable failure mode: snapshots go stale. If you update your app and Lovable does not rebuild the snapshot, crawlers see old content.
Cache-freshness testing is pending. Until we find a Lovable app where the prerender is actually active, there is no snapshot to invalidate, no refresh interval to measure, and no manual refresh trigger to confirm. We will publish methodology and numbers in the follow-up update.
Coverage: Dynamic Routes, Query Strings, Auth
Dynamic content routes like /blog/[slug] do not appear to be prerendered. On the platforms where we can observe Lovable's prerendered output indirectly (Semrush Site Audit, LinkedIn, X), static routes return the correct meta titles and descriptions but dynamic routes do not. This contradicts Anders's statement that "the pre-rendering happens as we detect the agent meaning that it will include dynamic content as well." It is also the harder half of the snapshot-prerendering problem: generating snapshots on-demand for arbitrary dynamic routes is meaningfully more complex than generating them once for a known set of static routes, and the gap shows in our observations.
Query-string variants and auth-walled routes are still pending. Without a reliable way to inspect the prerendered output, we cannot test them with confidence; we will revisit both once a snapshot viewer is available (either from us or from Lovable).
What the Prerender Does Not Include
Even where it works, the snapshot-based approach has structural limits:
- No crawl observability. You cannot see what Googlebot, Bingbot, GPTBot, ClaudeBot, or PerplexityBot actually requested, when they came, what status code they got, or what HTML was returned. Lovable's review is a checklist of what your site should look like. It is not a log of what crawlers actually saw.
- No bot-aware caching during deploy windows or traffic spikes. If your origin slows down or returns errors, snapshots help, but only if they are fresh. There is no documented caching tier tuned specifically for crawl-budget protection.
- No multi-platform coverage. If you also run apps on Replit, Bolt.new, or Base44, this launch does nothing for them.
- No edge-layer redirect control. 301 and 302 redirect management still needs to live somewhere outside the Lovable app code, or it ships as part of the build.
Verdict
The honest verdict, with the caveat that we are still investigating: Lovable's prerendering announcement and what is actually shipping to existing apps do not yet match. If you have a legacy Lovable app, do not turn off any third-party prerendering layer based on the announcement alone. Test your own deployed app with a crawl simulator like Hado SEO Google Sees or with curl using a Googlebot user agent. If the response contains your rendered HTML, the prerender is active for you. If you see an empty root div, you are still client-side rendered, and Lovable's launch has not changed your indexability situation despite what the announcement says.
We will update this section with a full evaluation, including cache freshness, bot-detection behavior, route coverage, and production reliability, once we have run the same battery of tests against an app that is actually being prerendered.
The SSR Story, Updated
We covered Lovable's TanStack Start SSR rollout on April 20. Three days later we added an update noting that some users were reporting their new projects still did not have SSR enabled, suggesting either an A/B test or a gradual rollout.
Today's announcement officially confirms TanStack Start as the default for new apps. From the launch page: "New apps are built on TanStack Start, which supports full server-side rendering (SSR). This means your pages are rendered as complete HTML before they reach the browser, so search engines and AI crawlers can read and index your content immediately." That language reads as universal rather than gated, which is a step up from the inconsistent rollout reports we noted on April 23. If you are spinning up a new project today, expect SSR to be on by default.
If you created a project on or after April 20 and want to confirm SSR is actually live, check your /src directory for router.tsx and routeTree.gen.ts, or view-source on your deployed page. The detail is in the April 20 post.
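The view-source half of that check can be scripted as a rough heuristic. Our own sketch, not an official check; the empty-root-div pattern and the 20-word cutoff are arbitrary choices:

```python
import re

def looks_server_rendered(html: str) -> bool:
    """Heuristic on a deployed page's initial HTML: a CSR shell has an
    empty root div and almost no body text; an SSR or prerendered page
    has real content inside <body> before any JavaScript runs."""
    # An empty <div id="root"></div> is the classic CSR giveaway.
    if re.search(r'<div\s+id=["\']root["\']\s*>\s*</div>', html):
        return False
    body = re.search(r"<body[^>]*>(.*?)</body>", html, re.DOTALL | re.IGNORECASE)
    if not body:
        return False
    # Strip scripts and tags, then count the words that remain.
    text = re.sub(r"<script\b.*?</script>", "", body.group(1),
                  flags=re.DOTALL | re.IGNORECASE)
    text = re.sub(r"<[^>]+>", " ", text)
    return len(text.split()) > 20  # arbitrary cutoff: real pages have real text
```

Run it on the raw response body from curl; a True result means the initial payload carries readable content, which is what SSR is supposed to guarantee.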
The framework choice (TanStack Start vs Next.js vs Remix) does not matter for SEO. What matters is that HTML is rendered server-side. For new Lovable projects, it now is.
Semrush in the Builder: How Useful Is It Really?
Lovable's pitch: type a question in chat, get real keyword data back from Semrush, and have the agent generate landing pages targeting those terms. No Semrush account, normal build credits, free through August 15, 2026.
What it is good at
- Ideation. "What keywords should I target?" returns a usable starting list with volumes and intent signals.
- Quick page generation. Asking the agent to build a page for a specific keyword produces a draft with optimized title, meta description, and H1 in one step.
- Competitor surface scan. "Who is ranking for X?" returns a list you can act on.
Where it is thinner than a full Semrush workflow
The in-builder chat exposes a Semrush-flavored slice of keyword data: rankings, search volumes, and a competitor surface for a given keyword or domain. What is not exposed: deep backlink data, historical SERP movement, full keyword clustering or topic modeling, and bulk keyword workflows ("cluster these 500 keywords by intent, then group by funnel stage"). For ideation and single-page targeting, the in-builder version is plenty. For multi-site, multi-language, or paid-traffic campaigns, you will still want the full Semrush UI or another dedicated tool.
The right framing for the reader: this is a great in-builder ideation layer. It is not a replacement for a serious SEO operator's keyword workflow if you are running multi-site, multi-language, or paid-traffic campaigns. For the average single-app builder, it is a real upgrade over no data at all.
The price tag
The "free through August 15, 2026" framing is worth attention. After that, expect Semrush queries to consume credits at a published rate, or to require a Lovable plan tier. Build your workflow assuming it will eventually be metered.
AI Search Visibility: Real Feature or Marketing Slide?
Lovable's claim: structured markdown output, semantic HTML, and structured data make your app readable by ChatGPT, Claude, Perplexity, and other AI search tools.
What is real
- Semantic HTML. Lovable's TanStack Start templates use proper <article>, <section>, <nav>, and <main> elements with a clean heading hierarchy in the rendered output. For new SSR apps this lands in the initial HTML payload that crawlers receive. For legacy apps the same elements live in the post-JavaScript DOM, which AI crawlers like GPTBot and ClaudeBot may or may not execute.
- Structured data. Lovable's templates ship JSON-LD blocks (Article, BreadcrumbList, FAQPage, Product where the page type applies). Same caveat as semantic HTML: the JSON-LD lives in the rendered output, so it reaches crawlers reliably only when SSR or a working prerender puts it in the initial response.
- Per-page Open Graph. Confirmed in the announcement. Each page now has its own og:title, og:description, og:image rather than a single site-wide preview.
- llms.txt. Lovable now generates an llms.txt manifest for each app as part of the AI search push. The honest caveat: recent SERanking research, "Does LLMs.txt impact your AI visibility and citations? No, according to research", found no measurable effect of llms.txt on AI search visibility or citation rates. It does not hurt to have one, and Lovable shipping it for free is a nice default. Treat it as table stakes, not a differentiator. The signal that actually matters is whether GPTBot, ClaudeBot, and PerplexityBot can read your rendered HTML at all, which loops back to the prerendering and SSR questions earlier in this post.
These are real, measurable improvements (with the llms.txt caveat above). If you are evaluated by an AI crawler that respects semantic structure (most do), your content will be parsed more cleanly than before.
What is still missing
- AI crawler observability. No reporting on which AI bots crawled, what they fetched, or how often. If you want to know whether GPTBot, ClaudeBot, or PerplexityBot actually visited and read a specific page, the Lovable suite does not tell you.
Honest framing
"AI search visibility" as Lovable ships it today is "we generate clean structured content for AI crawlers to parse." That is meaningful. It is not the same thing as a measurable AEO program with observability and submission control.
What This Means for Hado SEO Users
We are going to be direct here. What Lovable's launch changes depends entirely on which kind of Lovable app you have, and on a claim we have not yet been able to verify.
If your app is new (SSR via TanStack Start, confirmed)
For new projects, today's announcement officially confirms what the April 20 rollout started: SSR is the default. Google, Bingbot, and AI crawlers receive fully rendered HTML on the first request. For this case, Lovable's built-in suite covers a real chunk of what builders previously paid third-party tools for:
- Indexability without any setup.
- Per-page Open Graph and social previews.
- A pre-publish checklist for metadata, alt text, canonicals, robots, sitemap.
- Basic keyword research and quick page generation through Semrush in chat.
- Structured output and llms.txt for AI search crawlers.
If your only app is a new Lovable project, your goal is to show up on Google, and you have one custom domain, the built-in suite probably covers what you need now.
If your app is legacy (CSR on the previous stack)
This is the case the announcement is loudest about and the case our testing has not been able to confirm. Lovable says every existing app on the previous React + Tailwind stack is now prerendered automatically. The apps we tested (including secure-landlord-lease.lovable.app) still return the empty React shell to a Googlebot user agent and to the crawl simulators we use. Until we can verify the prerender is actually serving real HTML to crawlers, nothing has changed for legacy Lovable apps from a real-world indexability standpoint. The same CSR limitations that have kept these apps out of Google's index are still in effect.
What that means in practical terms: if you have a legacy Lovable app, do not assume you are covered by today's announcement. Test your own deployed app with a crawl simulator or curl using a Googlebot user agent. If you see an empty root div, you are still client-side rendered, and you still need a working prerendering layer to be indexed. Third-party prerendering, including Hado SEO, remains the only confirmed way to fix indexability for legacy CSR apps right now.
We will update this section the moment our testing flips. If Lovable's prerender goes live for legacy apps as announced, the calculus for those users changes to look more like the new-SSR case above.
Where Hado SEO adds value, regardless of which case you are in
Even if Lovable's prerender for legacy apps lands and works as described, the remaining set is narrower but sharper. These are the problems Lovable does not solve from inside the builder:
- Crawl observability across every major bot. Lovable's review tells you what your site should look like to a checklist. Hado SEO shows you what Googlebot, Bingbot, GPTBot, PerplexityBot, ClaudeBot, and others actually requested, when, what status code they got, and what HTML body was served. That is information Lovable does not provide and structurally cannot provide from inside the builder.
- SEO Trace (new). Lovable's review tells you that a page is missing a meta description or has a thin H1. Hado SEO's new SEO Trace tells you why a specific page is not ranking, is not indexed, or is losing traffic, with a streamed reasoning trace you can audit step by step. Each diagnosis pulls from GSC, SERP data, and your site's crawl history, returns a one-sentence root cause plus a ranked list of actions with impact and effort, and lives at a persistent shareable URL with checkable action items. It auto-reruns when rankings shift so you know whether a fix worked. This is the difference between a checklist of best practices (Lovable) and a verdict you can act on (Hado SEO).
- Edge caching tuned for crawl-budget protection. Snapshot prerendering helps when snapshots are fresh and the prerender is actually active. A real edge cache layer is tuned for bot traffic patterns and protects crawl budget during deploy windows, traffic spikes, and origin hiccups.
- Cross-platform coverage. Lovable's launch only helps apps hosted on Lovable. If you also run apps on Replit, Bolt.new, or Base44, those still need a prerendering layer. Hado SEO handles all of them with the same DNS setup.
- Edge-layer redirect and routing control. 301 and 302 redirect management at the proxy layer. Geo and language routing. Bot-specific rules. None of this exists in Lovable's chat.
- Deeper AEO than "structured markdown." Structured-data validation and AI-bot-specific crawl reports showing which AI crawlers fetched which URLs and when.
Honest verdict
If you are starting a brand new Lovable project today, your SEO needs are "show up on Google and look right when shared on LinkedIn," and you only run one app on one platform, Lovable's built-in suite probably covers you. If you have a legacy CSR Lovable app, the announcement does not yet match what we are seeing in practice, and a working third-party prerendering layer remains the difference between being indexed and not. In either case, Hado SEO is for builders who want observability, multi-platform coverage, and edge-layer control.
Quick Reference: What Changed and What Didn't
New as of May 13, 2026:
- Automatic prerendering for every existing CSR Lovable app, per the announcement. Our initial testing has not yet confirmed this is actually live on the apps we checked. Investigation ongoing.
- Per-page Open Graph and social previews.
- On-demand SEO review (free on all plans).
- One-click fixes through the builder agent.
- Semrush data in chat (free through August 15, 2026).
- Structured markdown, semantic HTML, and an auto-generated llms.txt for AI crawlers.
Unchanged:
- Existing projects are still on the previous React + Tailwind stack. Lovable announced they would be prerendered (not server-rendered) automatically. Our testing has not yet confirmed the prerender is active. Treat the indexability situation for legacy apps as unchanged until you verify on your own deployed app.
- No documented migration path from CSR + prerender to full SSR.
- Lovable's suite still only covers apps hosted on Lovable.
- No crawl observability layer.
- No edge-layer redirect management.
FAQ
Does Lovable have built-in prerendering now?
Lovable announced on May 13, 2026 that every existing app on the previous React + Tailwind stack is now prerendered automatically with no opt-in or migration. On May 15, a Lovable product team member clarified that the prerendering is served only to verified bots, with IP validation in addition to user-agent matching. That means third-party SEO scanners structurally cannot see it. Whether the prerendering actually serves real HTML to Googlebot and Bingbot is not something outside testers can verify directly.
Does Lovable's prerendering actually work?
It depends on which bot you ask. For Google and Bing, untestable from outside their IP space. For AI search crawlers, no: Claude and ChatGPT, asked to fetch and summarize a Lovable URL directly, return no on-page content. For social previewers, LinkedIn renders the correct OG title but the wrong OG image. Other platforms still in testing. See the multi-platform results table at the top of this post.
Do I still need Prerender.io or Hado SEO if I'm on Lovable?
It depends on which kind of Lovable app you have. For new projects on TanStack Start, SSR is confirmed and the built-in suite covers basic indexability. For legacy apps on the previous stack, Lovable announced automatic prerendering for verified search bots but explicitly says third-party scanners cannot see it, so independent verification is not possible from outside Google's and Bing's IP space. Our tests on AI crawlers (Claude, ChatGPT) and social previewers (LinkedIn) show the prerender is not reaching them. Until that gap is closed, a third-party prerendering layer remains the way to ensure AI search visibility and clean social previews across every platform. For crawl observability, edge caching tuned for crawl budget, redirect management at the proxy, and cross-platform support (Replit, Bolt.new, Base44), Hado SEO adds value regardless.
Will Lovable prerender my existing CSR app automatically?
That is the announcement, with the caveat that "automatic" means "for verified bots only." Lovable validates the requester via user-agent plus IP / reverse DNS, so third-party tools cannot independently verify it for Google or Bing. For AI crawlers and social previewers, our testing shows the prerender is not currently being served.
Can I migrate my existing Lovable project to SSR?
Not yet. Lovable says full SSR via TanStack Start is currently only available for new projects.
What does Lovable's SEO review check?
Performance, metadata, heading structure, image alt text, canonical tags, Open Graph tags, robots, and sitemap status. Free on all plans. Applying fixes uses build credits.
Is Semrush data inside Lovable's builder free?
Through August 15, 2026, yes. After that, Lovable has not published specific pricing or credit costs. Expect Semrush queries inside the chat to be metered in some form once the promotional window closes.
Does Lovable now optimize for ChatGPT and Perplexity?
Through structured markdown, semantic HTML, and structured data, yes. Whether that translates to measurable AI-search recommendations is a separate question that requires observability the built-in suite does not provide.
How is Lovable's prerendering different from a third-party prerendering service?
Lovable's prerendering is platform-bound and snapshot-based. A third-party layer operates at DNS or proxy, works across multiple builder platforms, and provides crawl logs and cache control.
The Short Version
Lovable's May 2026 SEO launch is a real step forward for new projects. TanStack Start SSR is officially the default, and new apps get indexability, OG tags, and structured AI output without third-party tooling. The on-demand SEO review and Semrush-in-chat are useful additions, not just marketing copy.
For legacy apps, the picture is less settled. Lovable announced automatic prerendering for every existing CSR project, but our initial testing has not confirmed the prerender is actually live. Until that changes, legacy Lovable apps still need a working third-party prerendering layer to be indexed, and the announcement does not yet move the needle in practice.
What Lovable's launch does not cover, in either case: observability across every major bot, edge-layer caching tuned for crawl budget, redirect and routing control outside the app code, multi-platform support, and an AEO program with actual measurement. If those matter to you, Hado SEO still does them, with one DNS change covering Lovable, Replit, Bolt.new, and Base44 in the same setup.