robots.txt is a file that tells search engine crawlers (like Googlebot) which parts of your site they can and cannot access. It helps you keep crawlers out of pages that shouldn't be indexed (such as checkout or account pages) and prevent non-production environments from showing up in search results.
Laioutr uses the @nuxtjs/robots module (part of Nuxt SEO) to manage robots.txt and robots directives. The module is automatically installed with @laioutr-core/frontend-core, so every Laioutr frontend has robots.txt support out of the box.
The robots.txt feature works at two levels:
- A `/robots.txt` endpoint that lists which paths crawlers can and cannot access. This is the traditional robots.txt file that crawlers check first.
- Per-page robots directives (e.g. `noindex, nofollow`) via the page variant's SEO settings in Studio. This is rendered as both:
  - A `<meta name="robots" content="...">` tag in the HTML
  - An `X-Robots-Tag` HTTP header

So you can control crawling globally (via robots.txt rules) and per-page (via the page variant's SEO robots field).
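For example, the generated `/robots.txt` might look like this (illustrative only; the exact rules depend on your configuration, and a Sitemap line only appears when a sitemap module is installed):

```txt
User-agent: *
Disallow: /checkout
Disallow: /cart
Disallow: /account

Sitemap: https://www.example.com/sitemap.xml
```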
In Cockpit (Studio), when you edit a page variant, you can set a robots value in the SEO section. This value is stored in the page variant’s seo.robots field and used by PageRenderer to set the robots meta tag and header for that page.
Common values:
- `index, follow` – Allow indexing and following links (default for most pages).
- `noindex, follow` – Don't index this page, but follow links on it.
- `index, nofollow` – Index this page, but don't follow links.
- `noindex, nofollow` – Don't index and don't follow links (e.g. for checkout, account pages).

If you don't set a robots value in Studio, the page uses the default (typically `index, follow` unless overridden in your Nuxt config).
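Outside Studio-managed pages, you can set the same directive in a regular Nuxt page with the standard `useSeoMeta` composable. This is only a generic Nuxt sketch, not what PageRenderer does internally:

```ts
// In a page component's <script setup lang="ts"> block — generic Nuxt sketch,
// not Laioutr's PageRenderer. useSeoMeta is auto-imported by Nuxt.
// Renders <meta name="robots" content="noindex, nofollow"> for this route.
useSeoMeta({
  robots: 'noindex, nofollow',
})
```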
To configure the global robots.txt file (which paths are allowed/disallowed), you can set options for the @nuxtjs/robots module in your nuxt.config.ts:
```ts
// nuxt.config.ts
export default defineNuxtConfig({
  robots: {
    // Disallow specific paths globally
    disallow: ['/checkout', '/cart', '/account'],
    // Allow specific paths (if you want to be explicit)
    allow: ['/'],
    // User agents (defaults to all: '*')
    // You can also set rules per user agent
  },
});
```
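For per-user-agent rules, the module accepts grouped entries. The sketch below assumes the `groups` option described in the Nuxt Robots docs, so double-check the option name against the module version you have installed:

```ts
// nuxt.config.ts — sketch assuming the `groups` option of @nuxtjs/robots
export default defineNuxtConfig({
  robots: {
    groups: [
      {
        userAgent: ['*'],
        disallow: ['/checkout', '/cart', '/account'],
      },
      {
        // Example: block one specific crawler entirely (hypothetical rule)
        userAgent: ['GPTBot'],
        disallow: ['/'],
      },
    ],
  },
});
```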
The module also automatically disables indexing for non-production environments (based on Nuxt’s site config), so your dev and staging sites won’t be indexed by search engines. This helps avoid duplicate content issues.
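If you need to control this explicitly (for example, a staging deployment that runs a production build), the shared site config is the usual lever. The `site.indexable` flag below comes from nuxt-site-config, which the Nuxt SEO modules read; treat this as a sketch and verify the exact behavior for your module versions:

```ts
// nuxt.config.ts — sketch assuming nuxt-site-config's `indexable` flag
export default defineNuxtConfig({
  site: {
    // Only allow indexing when this deployment is the real production site.
    // DEPLOY_ENV is a hypothetical environment variable used for illustration.
    indexable: process.env.DEPLOY_ENV === 'production',
  },
});
```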
For more configuration options, see the Nuxt Robots documentation.
You can use Nuxt route rules to set robots directives for specific routes:
```ts
// nuxt.config.ts
export default defineNuxtConfig({
  routeRules: {
    '/checkout/**': {
      robots: 'noindex, nofollow',
    },
    '/account/**': {
      robots: 'noindex, nofollow',
    },
  },
});
```
Route rules take precedence over the global robots.txt config, so you can fine-tune per route pattern.
For dynamic configuration (e.g. based on request headers or runtime conditions), you can use Nitro hooks to modify robots rules at runtime. See the Nuxt Robots Nitro API documentation for details.
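As a rough illustration (the hook name `robots:robots-txt` and the shape of its context are assumptions here, so verify them against the Nuxt Robots Nitro API docs), a server plugin could adjust the generated file at runtime:

```ts
// server/plugins/robots.ts — sketch only; the hook name and ctx shape are assumptions
export default defineNitroPlugin((nitroApp) => {
  nitroApp.hooks.hook('robots:robots-txt', async (ctx) => {
    // Example: append an extra Disallow rule to the generated robots.txt
    ctx.robotsTxt += '\nDisallow: /internal-preview'
  })
})
```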
The @nuxtjs/robots module integrates with other Nuxt SEO modules:
- @nuxtjs/sitemap – Pages marked `noindex` are automatically excluded from the sitemap.

So if you add these modules to your frontend, they will respect your robots directives automatically.
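If the sitemap module isn't already part of your setup, adding it is a one-line change. Whether @laioutr-core/frontend-core already bundles it is not covered here, so treat this as a generic sketch:

```ts
// nuxt.config.ts — generic sketch for adding the Nuxt SEO sitemap module
export default defineNuxtConfig({
  modules: ['@nuxtjs/sitemap'],
});
```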
In short: per-page directives come from the page variant's SEO settings in Studio, and global crawl rules live in nuxt.config.ts under the robots key (or via route rules). For detailed configuration options and advanced usage, see the Nuxt Robots documentation.