SEO Guide
Everything you need to know about search engine optimization for your Nexting-hosted pages. What we handle automatically, what you need to configure, and best practices.
What Nexting Handles Automatically
Every page served through the proxy includes these SEO elements out of the box. You don't need to configure any of this.
What You Need to Do
While Nexting handles on-page SEO automatically, there are a few things only you can do as the domain owner.
Set Up Google Search Console
Google Search Console is essential for monitoring how Google indexes your pages. Without it, you're flying blind.
- Go to Google Search Console
- Add your domain property (e.g., yourdomain.com)
- Verify ownership via DNS TXT record (recommended) or HTML meta tag
- Submit your Nexting sitemap (see next step)
DNS Verification
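If you choose TXT-record verification, Search Console issues you a token to publish at your zone apex. A hypothetical record is shown below — the token value is illustrative, so use the one Search Console gives you:

```
yourdomain.com.  3600  IN  TXT  "google-site-verification=abc123exampletoken"
```

DNS propagation can take anywhere from a few minutes to a few hours, so the verification check may not pass immediately.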
Submit Your Sitemap
After verifying your domain, submit the Nexting-generated sitemap so Google knows about your pages.
In Google Search Console, go to Sitemaps and add:
```
https://yourdomain.com/resources/sitemap.xml
```

Replace `/resources` with your actual path prefix. If your prefix is `/`, the sitemap is at the root.
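For orientation, a generated sitemap is a standard XML file along these lines — the URLs and dates here are placeholders, not Nexting's literal output:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://yourdomain.com/resources/guide/seo-basics</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
</urlset>
```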
Multiple Sitemaps
Check Your Robots.txt
Make sure your main site's robots.txt doesn't block the path prefix where Nexting pages live.
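One way to check this is with Python's standard-library robots parser. A quick sketch — the rules and paths below are illustrative; paste in your own robots.txt:

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt rules — substitute your site's actual file.
robots_txt = """\
User-agent: *
Disallow: /admin/
Allow: /resources/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Googlebot matches the "*" group here; check the Nexting path prefix.
print(rp.can_fetch("Googlebot", "https://yourdomain.com/resources/page"))   # True
print(rp.can_fetch("Googlebot", "https://yourdomain.com/admin/settings"))   # False
```

If the first call prints `False` for your prefix, a Disallow rule is blocking your Nexting pages.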
Good — Nexting paths are crawlable:

```
User-agent: *
Allow: /resources/
Allow: /blog/
```

Bad — this blocks Nexting pages from being indexed:

```
User-agent: *
Disallow: /resources/
Disallow: /blog/
```

Set Up Bing Webmaster Tools (Optional)
Many AI search engines (including ChatGPT and Perplexity) rely on Bing's index. Setting up Bing Webmaster Tools can improve AI discoverability.
- Go to Bing Webmaster Tools
- Import your site directly from Google Search Console
- Submit the same sitemap URL
SEO Health Checklist
Use this checklist to verify everything is set up correctly after deploying your pages.
| Check | How to Verify |
|---|---|
| Pages return 200 status | Visit your page URL directly — it should load without redirects or errors. |
| Canonical URL points to your domain | View page source and search for `<link rel="canonical">`. It should show your domain, not nexting.ai. |
| Sitemap is accessible | Visit `yourdomain.com/[prefix]/sitemap.xml` in your browser. You should see an XML file listing your pages. |
| Robots.txt allows crawling | Visit `yourdomain.com/robots.txt` and ensure your path prefix is not disallowed. |
| Sitemap submitted to Google | In Google Search Console → Sitemaps, check that the status shows "Success". |
| Pages appear in Google index | Search `site:yourdomain.com/[prefix]` on Google. Results may take days to weeks to appear. |
| Structured data is valid | Use Google's Rich Results Test (search.google.com/test/rich-results) with your page URL. |
| Meta description is set | View page source and look for `<meta name="description">`. Fill in descriptions in the Nexting dashboard for best results. |
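Two of the view-source checks above can be scripted with Python's standard-library HTML parser. A minimal sketch — the sample HTML is a stand-in, not actual Nexting output; in practice, fetch your live page source and feed it in:

```python
from html.parser import HTMLParser

class SEOTagCheck(HTMLParser):
    """Collects the canonical URL and meta description from page HTML."""
    def __init__(self):
        super().__init__()
        self.canonical = None
        self.description = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")
        if tag == "meta" and a.get("name") == "description":
            self.description = a.get("content")

# Stand-in page source for illustration.
html = """<html><head>
<link rel="canonical" href="https://yourdomain.com/resources/guide">
<meta name="description" content="A short guide.">
</head><body></body></html>"""

checker = SEOTagCheck()
checker.feed(html)
print(checker.canonical)     # https://yourdomain.com/resources/guide
print(checker.description)   # A short guide.
```

The canonical URL should contain your domain, not nexting.ai, and the description should not be empty.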
AI Search Visibility
Beyond traditional search engines, your pages are optimized for AI-powered search platforms like ChatGPT, Perplexity, Google AI Overviews, and Claude.
How AI Search Finds Your Content
AI search engines discover your content through a combination of:
- Web crawling — AI bots regularly crawl indexed websites
- Search engine indexes — Many AI platforms use Google or Bing results as a data source
- Structured data — JSON-LD helps AI understand your content's meaning and context
- LLMS.txt — A machine-readable summary of your site, specifically designed for LLMs
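As a reference point, JSON-LD structured data lives in a script tag in the page head. A minimal Article example — the field values are illustrative, not Nexting's exact schema:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "SEO Basics",
  "datePublished": "2024-01-15",
  "description": "A short guide to on-page SEO."
}
</script>
```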
What Nexting Does for AI Crawlers
When an AI crawler visits your pages, Nexting serves a special optimized version:
- Clean semantic HTML without CSS frameworks or JavaScript
- Structured with `<article>`, `<header>`, and `<section>` tags
- Clear headings, dates, and metadata for easy extraction
- Robots.txt explicitly allows all major AI crawlers
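A rough illustration of what such a stripped-down page might look like — this is a sketch of the structure described above, not Nexting's literal output:

```html
<article>
  <header>
    <h1>SEO Basics</h1>
    <p>Published 2024-01-15</p>
  </header>
  <section>
    <h2>What is on-page SEO?</h2>
    <p>Plain, extraction-friendly body text with no scripts or styles.</p>
  </section>
</article>
```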
LLMS.txt (Optional)
You can enable LLMS.txt in your project settings. This generates two machine-readable files:
- `/llms.txt` — Page titles and descriptions
- `/llms-full.txt` — Full page content in plain text
These files help LLMs quickly understand what your site offers without crawling every page.
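For orientation, an llms.txt file is typically a short markdown summary along these lines — the site name, URLs, and descriptions here are illustrative:

```
# Your Site
> Guides and resources hosted at yourdomain.com/resources.

## Pages
- [SEO Basics](https://yourdomain.com/resources/guide/seo-basics): Introduction to on-page SEO
```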
Timeline Expectations
Tips & Best Practices
Do
- Write unique, valuable content that answers real questions — AI search engines prioritize authoritative, helpful content.
- Fill in meta descriptions for every page — they appear in search results and help both users and AI understand your content.
- Use descriptive page paths like /guide/seo-basics instead of /page-1 — URLs signal relevance to search engines.
- Include relevant keywords naturally in your content — don't stuff them, but do use the terms your audience searches for.
- Keep content up to date — search engines prefer fresh, recently modified content. Use the dashboard to update pages regularly.
- Add your sitemap to Google Search Console and Bing Webmaster Tools for faster discovery.
- Build backlinks by sharing your content on relevant forums, social media, and industry directories.
Don't
- Don't block Nexting page paths in your robots.txt — this prevents search engines from indexing your content.
- Don't set up 301 redirects on the proxy paths — rewrites (200 status) are required, not redirects.
- Don't duplicate the same content across multiple pages — search engines penalize duplicate content.
- Don't leave meta descriptions empty — while not critical, missing descriptions mean Google generates its own snippet, which may not be ideal.
- Don't expect instant results — SEO is a long-term strategy. It typically takes weeks to months to see significant organic traffic.
- Don't use cloaking or hidden text — search engines will penalize your entire domain, not just the affected pages.
Common Questions
How long until my pages appear on Google?
Will Nexting pages affect my existing site's SEO?
Do I need to set up Google Search Console for every project?
Why are my pages showing as "Discovered — currently not indexed"?
Can AI search engines find my pages if Google hasn't indexed them?
How do I check if an AI search engine knows about my content?
Auto-Generated SEO Files
Nexting generates these files automatically for each project. They're served through the same proxy, so they appear on your domain.
| File | URL | Purpose |
|---|---|---|
| sitemap.xml | yourdomain.com/[prefix]/sitemap.xml | Lists all published pages for search engines |
| robots.txt | yourdomain.com/[prefix]/robots.txt | Crawler access rules and sitemap reference |
| llms.txt | yourdomain.com/[prefix]/llms.txt | Machine-readable site summary for AI models |
| llms-full.txt | yourdomain.com/[prefix]/llms-full.txt | Full content export for AI models |
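A quick way to spot-check all four files is to build their URLs from your domain and path prefix. A hypothetical helper (not part of the Nexting tooling):

```python
def seo_file_urls(domain: str, prefix: str) -> list[str]:
    """Build the URLs of the auto-generated SEO files for a project.

    `prefix` is the path prefix Nexting pages live under, e.g. "/resources";
    use "/" if pages are served from the root.
    """
    base = f"https://{domain}{'' if prefix == '/' else prefix}"
    files = ["sitemap.xml", "robots.txt", "llms.txt", "llms-full.txt"]
    return [f"{base}/{name}" for name in files]

for url in seo_file_urls("yourdomain.com", "/resources"):
    print(url)  # e.g. https://yourdomain.com/resources/sitemap.xml
```

Open each printed URL in a browser (or curl it) and confirm it returns a 200 status.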
Sitemap Index