Gabriel Mustiere

Tech · Business · AI. Nantes — remote.


How I built this site with Claude and Astro

A look back at how mustiere.fr came together: design with Claude, implementation with Claude Code and Astro, and why the SEO work happened inside the content schema rather than in keyword stuffing.

§ contents
  1. Step 1 — design with Claude Design, not with Figma
  2. Step 2 — implement with Claude Code and Astro
  3. SEO focus — six non-negotiable building blocks
     1. A strict content schema
     2. Systematic hreflang and canonical
     3. JSON-LD for every entity type
     4. AI-friendly sitemap and robots.txt
     5. An llms.txt for the LLM era
     6. Static, light, fast
  4. What I’d do differently
  5. The code is open

This site had existed for three years as a single frozen HTML page. I wanted a real blog, bilingual, clean on SEO, without turning myself into a full-time frontend dev. Two weekends got me there, powered by two tools: Claude Design for the design, Claude Code for the Astro implementation. Here is what I kept from the process, with a strong focus on search — that’s where most CTO portfolios quietly fail.

Step 1 — design with Claude Design, not with Figma

I am not a designer. Opening Figma to iterate on a homepage has a high friction cost. Claude Design, on the other hand, produces HTML + Tailwind that renders in a browser immediately. The loop:

  1. I describe the visual identity (editorial, sober, mixed typography serif + sans + mono) and the expected sections.
  2. Claude Design returns three standalone HTML artifacts: home, blog list, article.
  3. I iterate in plain language (“the sidebar is too dense”, “drop the Tweaks panel”, “shift this section’s accent to ochre”).
  4. At the end, I ask Claude to write a handoff document: design tokens, oklch palette, recommended Astro structure, JS behaviors.

That handoff lives in the repo’s README.md — I use it as the brief feeding Claude Code. The HTML mockups become the pixel-perfect reference; Claude Code just has to reimplement them as idiomatic Astro components.

Step 2 — implement with Claude Code and Astro

Why Astro? Three concrete reasons for a content site:

  • Zero JS by default. What I don’t ship can’t break, won’t tank Core Web Vitals, and won’t force me to hydrate islands just to render text.
  • Typed content collections. Each article is an .mdx file validated by a Zod schema. The build fails if a field is missing — no forgotten date silently breaking the sitemap in production.
  • First-class i18n. A single block in astro.config.mjs gives me two clean trees /fr and /en with no bespoke router.
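That i18n block is small enough to show in full. This is a sketch of what it plausibly looks like — the locale list and FR-at-the-root layout come from this article, but the exact values are an assumption, not the site's actual config:

```javascript
// astro.config.mjs — i18n sketch (assumed values, not the repo's exact config)
import { defineConfig } from 'astro/config';

export default defineConfig({
  i18n: {
    defaultLocale: 'fr',
    locales: ['fr', 'en'],
    routing: {
      // FR pages live at the root, EN under /en — no bespoke router needed
      prefixDefaultLocale: false,
    },
  },
});
```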

Claude Code thrives on this kind of codebase: wide reads of the structure, targeted edits, respect for the project’s conventions. I work plan-then-execute: I ask for a breakdown into small tasks first, review it, then let the agent run.

SEO focus — six non-negotiable building blocks

For a freelance CTO site, SEO is not a marketing plan. It’s the hygiene that makes Google, Perplexity, ChatGPT and recruiters find the right pages in the right order. Here is what carries 90% of the outcome.

1. A strict content schema

The first SEO building block isn’t in the HTML — it’s in the data model. Every article declares required fields through Zod; the build breaks otherwise:

// src/content.config.ts
const blog = defineCollection({
  loader: chapteredGlob({
    base: './src/content/blog',
    extensions: ['.mdx', '.md'],
  }),
  schema: ({ image }) =>
    z.object({
      title: z.string().max(120),
      excerpt: z.string().min(80).max(220),
      publishedAt: z.coerce.date(),
      updatedAt: z.coerce.date().optional(),
      category: z.enum(['IA', 'Tech', 'Lead', 'Business']),
      tags: z.array(z.string()).default([]),
      keywords: z.array(z.string()).default([]),
      cover: image().optional(),
      resume: resumeSchema, // injected from resume.mdx
      faq: z.array(faqItem).default([]), // injected from faq.mdx
      sources: z.array(sourceItem).default([]), // injected from sources.mdx
      number: z.number().int().positive(),
      lang: z.enum(['fr', 'en']).default('fr'),
      translationOf: z.string().optional(),
    }),
});

excerpt is capped between 80 and 220 characters because that’s the usable range for a meta description. resume is not a frontmatter string: it’s a { markdown, html, plain } object injected by chapteredGlob from a reserved resume.mdx file at the article folder root — same mechanic for faq.mdx (structured questions, surfaced as JSON-LD FAQPage) and sources.mdx (verifiable citations). You edit content, not YAML, and the result stays usable for LLMs through resume.plain. translationOf links two language versions of the same article — the missing piece for hreflang tags done right.
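To make the injected fields concrete, here is a sketch of the shapes they resolve to. The field names mirror the schema excerpt above, but these exact definitions of what resumeSchema, faqItem and sourceItem validate are my assumption, not the repo's code:

```typescript
// Hypothetical shapes for the injected fields (assumption — see lead-in).
interface ResumeBlock {
  markdown: string; // raw source of resume.mdx
  html: string;     // rendered version for the article page
  plain: string;    // text-only version, reused by llms.txt
}

interface FaqItem {
  question: string; // surfaced as JSON-LD FAQPage
  answer: string;
}

interface SourceItem {
  title: string;    // verifiable citation from sources.mdx
  url: string;
}

// Example value showing the three parallel views of the same summary:
const resume: ResumeBlock = {
  markdown: '**Two weekends, two tools.**',
  html: '<strong>Two weekends, two tools.</strong>',
  plain: 'Two weekends, two tools.',
};
```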

Multi-chapter folder format. Past 1,500 words, I split the article into an index.mdx (frontmatter + intro) plus NN-<kebab>.mdx chapters that the loader concatenates alphabetically. This very article uses that layout. The public slug never changes — it’s the folder name — but writing and reviewing a long piece file by file beats wrestling with a single 2,000-line .mdx.
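The concatenation mechanic is simple enough to sketch. This is an illustration only, not chapteredGlob's real implementation (which also parses frontmatter and handles the reserved files):

```typescript
// Illustration of the chapter-assembly mechanic — not chapteredGlob's real code.
const RESERVED = new Set(['index.mdx', 'resume.mdx', 'faq.mdx', 'sources.mdx']);

function assembleChapters(files: Record<string, string>): string {
  const chapters = Object.keys(files)
    .filter((name) => !RESERVED.has(name))
    .sort(); // alphabetical, so the NN- prefixes order the chapters

  // index.mdx (frontmatter + intro) first, then each chapter in order.
  return [files['index.mdx'], ...chapters.map((name) => files[name])].join('\n\n');
}

// assembleChapters({ '02-seo.mdx': 'SEO', 'index.mdx': 'Intro', '01-design.mdx': 'Design' })
// returns 'Intro\n\nDesign\n\nSEO'
```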

2. Systematic hreflang and canonical

A poorly tagged bilingual site shoots itself in the foot: Google indexes both versions as duplicates. The BaseLayout emits a canonical and two hreflang tags on every single page, based on the content’s translationOf:

<link rel="canonical" href={canonicalURL} />
<link rel="alternate" hreflang={otherLang} href={altUrl} />
<link rel="alternate" hreflang={lang} href={canonicalURL} />
<link rel="alternate" hreflang="x-default" href={lang === 'fr' ? canonicalURL : altUrl} />

x-default points to the French version — that’s my primary. Without this block, a search engine has no way to decide which page to serve to an English reader.
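How altUrl is derived from translationOf is the interesting part. A minimal sketch under stated assumptions — the FR-at-root layout and localizedPath come from this article, but the function name altUrl and the fallback behavior are mine:

```typescript
// Sketch — assumes FR at the root and EN under /en, per the site's i18n setup.
const SITE_URL = 'https://mustiere.fr';

function localizedPath(lang: 'fr' | 'en', path: string): string {
  return lang === 'fr' ? path : `/en${path}`;
}

// Pairs a page with its translation; falls back to the same slug
// when no explicit translationOf is declared (an assumption of mine).
function altUrl(lang: 'fr' | 'en', slug: string, translationOf?: string): string {
  const otherLang = lang === 'fr' ? 'en' : 'fr';
  return `${SITE_URL}${localizedPath(otherLang, `/blog/${translationOf ?? slug}`)}`;
}

// altUrl('fr', 'claude-astro') → 'https://mustiere.fr/en/blog/claude-astro'
```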

3. JSON-LD for every entity type

Structured data is still under-used by developers and heavily consumed by every AI crawler out there. I ship three schemas on each article: Person, BlogPosting, BreadcrumbList.

// src/utils/schema.ts (excerpt)
export function blogPostingSchema(p: BlogPostingInput) {
  const lang = p.lang ?? 'fr';
  const url = `${SITE.url}${localizedPath(lang, `/blog/${p.slug}`)}`;
  return {
    '@context': 'https://schema.org',
    '@type': 'BlogPosting',
    '@id': `${url}#article`,
    mainEntityOfPage: { '@type': 'WebPage', '@id': url },
    url,
    headline: p.title,
    description: p.description,
    datePublished: p.publishedAt,
    dateModified: p.updatedAt ?? p.publishedAt,
    inLanguage: LANG_META[lang].bcp47,
    articleSection: p.category,
    keywords: p.keywords.join(', '),
    timeRequired: p.readingTime ? `PT${p.readingTime}M` : undefined,
    author: { '@id': `${SITE.url}/#person` },
    publisher: { '@id': `${SITE.url}/#person` },
  };
}

The same Person is referenced everywhere via @id, never duplicated. That’s what Google wants: a graph, not a pile of disconnected JSON blobs.
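The shared Person node can itself be a single function in the same utils file. This is a hedged reconstruction: only the `${SITE.url}/#person` @id pattern comes from the article; every other field is illustrative:

```typescript
// Sketch of the shared Person node (assumed fields — only the @id pattern
// is taken from the article, the rest is illustrative).
const SITE = { url: 'https://mustiere.fr', name: 'Gabriel Mustiere' };

export function personSchema() {
  return {
    '@context': 'https://schema.org',
    '@type': 'Person',
    '@id': `${SITE.url}/#person`, // every BlogPosting points here via author/publisher
    name: SITE.name,
    url: SITE.url,
    jobTitle: 'CTO',
  };
}
```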

4. AI-friendly sitemap and robots.txt

Two Astro integrations do 95% of the work. The sitemap is generated at build time with declared locales; robots.txt explicitly allows the AI crawlers I want citing my content.

// astro.config.mjs (excerpt)
integrations: [
  sitemap({
    i18n: { defaultLocale: 'fr', locales: { fr: 'fr-FR', en: 'en-GB' } },
    changefreq: 'weekly',
    priority: 0.7,
  }),
  robotsTxt({
    sitemap: [`${SITE.url}/sitemap-index.xml`],
    policy: [
      { userAgent: 'GPTBot', allow: '/' },
      { userAgent: 'ClaudeBot', allow: '/' },
      { userAgent: 'PerplexityBot', allow: '/' },
      { userAgent: 'Google-Extended', allow: '/' },
      { userAgent: 'Bytespider', disallow: '/' },
      { userAgent: '*', allow: '/', disallow: ['/404'] },
    ],
  }),
],

Blocking Bytespider is not ideology: it’s a noisy crawler that burns bandwidth without citing sources. Letting GPTBot, ClaudeBot and PerplexityBot in, on the other hand, means agreeing to be cited by the assistants my prospects already use.

5. An llms.txt for the LLM era

Nobody knows yet whether the llms.txt standard will stick, but it costs almost nothing and solves a real problem: an LLM crawling my site needs an editorial map, not an XML sitemap.

// src/pages/en/llms.txt.ts (excerpt — FR sits at the root since it's the default locale)
export const GET: APIRoute = async () => {
  const posts = (await getCollection('blog', (entry) => isPublished(entry, 'en')))
    .sort((a, b) => b.data.publishedAt.getTime() - a.data.publishedAt.getTime());

  const lines: string[] = [`# ${SITE.name}`, '', '## Articles', ''];
  for (const post of posts) {
    lines.push(
      `- [${post.data.title}](${SITE.url}/en/blog/${post.id}/) (${toISODate(post.data.publishedAt)}, ${post.data.category}): ${post.data.excerpt}`
    );
  }
  return new Response(lines.join('\n'), {
    headers: { 'Content-Type': 'text/plain; charset=utf-8' },
  });
};

The strict excerpts from the Zod schema (point 1) become usable summaries here — a model can reason over my catalog without loading every article. As a companion, src/pages/llms-full.txt.ts emits the full markdown corpus of the site — the format Jeremy Howard (the original author of llms.txt) pushes for, so LLMs can reason over the whole content set without crawling page by page. Marginal build cost, real upside if either standard sticks on the indexing side.
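The llms-full.txt route can reuse the same collection; only the concatenation step differs. A sketch of that step as a pure helper — the function name and post shape are assumptions, and the real src/pages/llms-full.txt.ts additionally renders MDX to markdown:

```typescript
// Illustrative pure helper behind llms-full.txt (assumption — see lead-in).
interface CorpusPost {
  title: string;
  url: string;
  body: string; // full markdown of the article
}

function buildLlmsFullTxt(siteName: string, posts: CorpusPost[]): string {
  const sections = posts.map((p) => `## ${p.title}\n<${p.url}>\n\n${p.body}`);
  // One plain-text document: site heading first, then every article in full.
  return [`# ${siteName}`, ...sections].join('\n\n');
}
```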

6. Static, light, fast

The first five points are worthless if the page takes 4 seconds to render. Astro ships plain HTML, Tailwind v4 emits only the CSS actually used, and three options in astro.config.mjs finish the job:

build: { inlineStylesheets: 'auto', format: 'file' },
prefetch: { prefetchAll: false, defaultStrategy: 'hover' },
trailingSlash: 'never',

inlineStylesheets: 'auto' inlines the critical CSS into the initial HTML — no round-trip before first paint. prefetch on hover preloads the target page the moment the mouse enters a link. trailingSlash: 'never' avoids the 301 hops between /blog/ and /blog that wreck PageSpeed scores.

What I’d do differently

  • Start with the schema, not the design. A keywords field forgotten up front becomes a migration later.
  • Write the /llms.txt page on day one. It’s five lines of code and it forces clean excerpts everywhere.
  • Measure mobile Lighthouse on every commit. I have a .pa11yci and a lighthouserc.json in the repo; I should have wired them in sooner.

The code is open

The full repo is public: github.com/gabrielmustiere/mustiere.fr. If you want to clone the structure for your own consultant site, help yourself — the schema, the SEO utilities and the i18n skeleton are directly reusable.

§ faq

Frequently asked questions

Why Astro rather than Next.js, Hugo or a classic static site generator?
For a content site that almost never needs interactive JS, Astro hits the sweet spot: zero JS by default (so nothing to hydrate, nothing to break Core Web Vitals), typed content collections with Zod validation that breaks the build on a missing field, and first-class i18n in the config. Next.js is overkill when you don't need a React runtime, Hugo is faster to build but its templating gets cramped beyond a few collections, and a hand-rolled SSG means rewriting the sitemap, hreflang and image pipeline yourself. On a portfolio + blog, Astro saves the most time without giving up SEO discipline.
Can you really hand off design and implementation to Claude end-to-end?
Not "hand off" — collaborate. Claude Design produced the HTML mockups in a tight loop where I described the editorial intent ("sober, mixed serif/sans/ mono", "drop the Tweaks panel") and reviewed each iteration. Claude Code then turned the mockups into idiomatic Astro components, but I worked plan-then-execute: I asked for a task breakdown first, reviewed it, and only then let the agent run. The leverage is real (two weekends to ship the whole site), but it requires you to know what you want and to read every diff. Treat it as a senior pair, not an autopilot.
Is the SEO setup overkill for a freelance CTO portfolio?
The six building blocks (Zod schema, hreflang, JSON-LD, sitemap, llms.txt, fast static delivery) are not "overkill" — they're the floor. Each one solves a concrete failure mode: a missing `excerpt` silently breaking meta descriptions, a duplicate FR/EN page tanked by Google, an article invisible to ChatGPT/Perplexity citations. Total cost: a few hours, and most of it is reusable across any future project. What's overkill is keyword-stuffing in copy, link-buying, or chasing 100/100 Lighthouse scores past 95 — none of those move the needle for a CTO site.
§ end