AI Coding Beginner Project 8 of 11

Build an AI Tool Recommendation Quiz with AI

Use Claude or ChatGPT to build an interactive quiz that recommends the best AI coding tool for each user. You'll learn how to prompt AI tools to create multi-step forms with scoring logic and personalized results.

What You'll Build

In this project, you'll use an AI assistant (Claude, ChatGPT, or any tool you prefer) to create a browser-based recommendation quiz. The finished tool will let you:

  • Walk users through 8 questions about their coding workflow and preferences
  • Show a progress bar that fills as users advance through questions
  • Calculate scores for Cursor, GitHub Copilot, and Claude Code based on answers
  • Display a personalized results page with the top recommendation and comparison cards
  • Share results or restart the quiz with a single click

What You'll Need

An AI Chat Tool

Claude, ChatGPT, Gemini, or any AI assistant that can generate code. Free tiers work fine.

A Text Editor

VS Code, Notepad, TextEdit — anything that lets you save an .html file.

No coding experience needed

This project is designed for complete beginners. The AI does the heavy lifting on the code — you learn how to guide it.

1. Describe the Quiz UI with Progress Bar

Open your AI tool and start a new conversation. The quiz needs a clear, step-by-step interface that shows one question at a time. Here's a prompt that works well:

Prompt to copy

Build me a single HTML file for an AI coding tool recommendation quiz. It should have:

- A welcome screen with a title "Which AI Coding Tool Should You Use?", a brief description, and a "Start Quiz" button
- 8 multiple-choice questions shown one at a time (not all at once)
- A progress bar at the top that fills as the user advances (e.g., "Question 3 of 8" with a visual bar)
- Each question has 3-4 answer options displayed as clickable cards (not radio buttons)
- When the user clicks an answer, it highlights briefly, then the quiz auto-advances to the next question after a short delay
- A "Previous" button to go back and change answers
- Start with 3 placeholder questions for now — we'll add the real ones next
- Use a dark theme with clean, modern styling

Put all HTML, CSS, and JavaScript in one file. No external dependencies besides Tailwind CSS via CDN.

The key design decision here is showing one question at a time instead of all 8 on one page. This creates a focused, app-like experience and makes the progress bar meaningful.
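Under the hood, that one-question-at-a-time flow usually comes down to a single index plus a render step. Here's a minimal sketch of the state the AI will likely generate (all names here are illustrative, not tied to any particular output):

```javascript
// One index drives everything: which question is visible and how full
// the progress bar is. Rendering just re-reads this state.
let current = 0;      // index of the question currently shown
const TOTAL = 8;      // total number of questions

// Progress as a whole-number percentage, counting the current question.
function progressPercent() {
  return Math.round(((current + 1) / TOTAL) * 100);
}

// Guarded navigation: never step past either end of the quiz.
function next() { if (current < TOTAL - 1) current += 1; }
function prev() { if (current > 0) current -= 1; }
```

With this shape, the "Previous" button is just `prev()` plus a re-render, and the progress bar's width is `progressPercent() + "%"`.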

2. Design Questions and Scoring Logic

Now replace the placeholder questions with real ones that map to specific tool recommendations. Each answer should add points to one or more tools:

Prompt to copy

Replace the placeholder questions with these 8 real questions. Each answer should add points to one or more tools: Cursor, GitHub Copilot, or Claude Code. Store scores as an object like { cursor: 0, copilot: 0, claude: 0 }.

Questions:
1. "What's your primary IDE?" — Options: VS Code (+2 Cursor, +1 Copilot), JetBrains IDE (+2 Copilot), Terminal/Vim (+2 Claude Code), Multiple/No preference (+1 each)
2. "How do you prefer to interact with AI?" — Options: Inline autocomplete as I type (+2 Copilot), Chat panel alongside my code (+2 Cursor), Command line conversations (+2 Claude Code), I'm not sure yet (+1 each)
3. "What size projects do you usually work on?" — Options: Small scripts and utilities (+1 Copilot, +1 Claude Code), Medium apps with 10-50 files (+2 Cursor), Large codebases with 100+ files (+2 Claude Code), Varies a lot (+1 each)
4. "How important is multi-file editing?" — Options: Essential — I need to edit across files at once (+2 Cursor, +1 Claude Code), Nice to have but not critical (+1 each), I mostly work in one file at a time (+2 Copilot)
5. "What's your experience level?" — Options: Beginner — learning to code (+2 Claude Code), Intermediate — comfortable with basics (+1 Cursor, +1 Copilot), Advanced — I want speed not hand-holding (+2 Copilot), I manage more than I code (+2 Claude Code)
6. "Do you need to run terminal commands through AI?" — Options: Yes, that would save me tons of time (+2 Claude Code), Sometimes for deployment or testing (+1 Cursor, +1 Claude Code), Not really, I handle the terminal myself (+2 Copilot, +1 Cursor)
7. "How do you feel about AI making changes automatically?" — Options: Love it — let the AI drive (+2 Cursor, +1 Claude Code), It's fine with a review step (+1 each), I want full control over every change (+2 Copilot)
8. "What matters most to you?" — Options: Speed and autocomplete (+2 Copilot), Understanding my whole project context (+2 Claude Code), Visual UI and ease of use (+2 Cursor), Cost — I want the best free option (+1 Copilot, +1 Claude Code)

Keep the same UI structure, dark theme, and progress bar.

The scoring system is what makes the quiz actually useful. Each answer nudges the score toward one or more tools, and the tool with the highest total score at the end wins.
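If you want to sanity-check what the AI produces, the scoring reduces to something like this sketch (the `questions` array shows only the first question from the prompt above; `tally` and `winners` are illustrative names, not guaranteed to match the generated code):

```javascript
// Each answer option carries a score delta per tool. The quiz stores only
// the chosen option index per question; totals are summed from the deltas.
const questions = [
  {
    text: "What's your primary IDE?",
    options: [
      { label: "VS Code",                scores: { cursor: 2, copilot: 1, claude: 0 } },
      { label: "JetBrains IDE",          scores: { cursor: 0, copilot: 2, claude: 0 } },
      { label: "Terminal/Vim",           scores: { cursor: 0, copilot: 0, claude: 2 } },
      { label: "Multiple/No preference", scores: { cursor: 1, copilot: 1, claude: 1 } },
    ],
  },
  // ...the other 7 questions follow the same shape
];

// Sum the deltas for the user's chosen option indexes.
function tally(answerIndexes) {
  const totals = { cursor: 0, copilot: 0, claude: 0 };
  answerIndexes.forEach((optIdx, qIdx) => {
    const { scores } = questions[qIdx].options[optIdx];
    for (const tool in scores) totals[tool] += scores[tool];
  });
  return totals;
}

// The tool(s) with the highest total win; more than one entry means a tie.
function winners(totals) {
  const max = Math.max(...Object.values(totals));
  return Object.keys(totals).filter((tool) => totals[tool] === max);
}
```

Keeping the questions in a data array like this (rather than hard-coded in the UI) also makes the later steps — editing questions, detecting ties — much easier to prompt for.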

3. Build the Results Page

The results page is where the quiz delivers its value. It needs to show the winning tool prominently, plus a comparison of all three tools so users can make an informed choice:

Prompt to copy

Build a results page that shows after the last question. It should include:

- A large heading: "Your Recommended Tool" with the winning tool name displayed prominently
- A score bar for each tool showing its percentage of total points (e.g., "Cursor: 65%", "Copilot: 20%", "Claude Code: 15%") — use colored progress bars
- A "Why This Tool?" paragraph that explains why the winning tool matches the user's answers. Write 2-3 sentences for each possible winner:
- Cursor: great for visual learners who want AI integrated into a polished IDE with multi-file editing
- Copilot: ideal for developers who want fast inline autocomplete without leaving their typing flow
- Claude Code: best for terminal-first users who work on large codebases and want deep project understanding
- Below that, show 3 comparison cards (one per tool) with: tool name, a one-line tagline, 3 key strengths as bullet points, pricing info, and a "Learn More" link (use # for the href)
- If two tools are tied, show an "It's a tie!" message and display both tools as recommendations

Keep the dark theme and same animation style.

Here are common issues and how to ask the AI to fix them:

Score percentages don't add to 100%

Tell the AI: "The score percentages are wrong — they should be calculated as each tool's score divided by the total of all scores, then multiplied by 100. Make sure they always add up to 100%."
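One way the AI might implement that fix: round all but the last percentage, then give the last tool whatever remains, which guarantees a sum of exactly 100. A sketch (`toPercentages` is a hypothetical helper name):

```javascript
// Convert raw scores to whole-number percentages that always sum to 100.
// Naively rounding each share can produce 99 or 101; here the last tool
// absorbs the rounding remainder instead.
function toPercentages(totals) {
  const tools = Object.keys(totals);
  const sum = tools.reduce((acc, t) => acc + totals[t], 0);
  const pct = {};
  let used = 0;
  tools.forEach((tool, i) => {
    if (i === tools.length - 1) {
      pct[tool] = 100 - used;                        // remainder keeps the sum exact
    } else {
      pct[tool] = Math.round((totals[tool] / sum) * 100);
      used += pct[tool];
    }
  });
  return pct;
}
```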

Going back changes the score incorrectly

Tell the AI: "When I go back to a previous question and change my answer, the old answer's points aren't removed. Subtract the previous answer's scores before adding the new answer's scores."
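An alternative fix you can ask for — often simpler than subtract-then-add — is to store only the chosen option per question and recompute totals from scratch whenever an answer changes, so there is nothing to subtract. A sketch under that assumption (`selectAnswer`, `recompute`, and `scoresFor` are illustrative names):

```javascript
// answers[q] holds the chosen option index for question q.
let answers = [];

// Overwriting the slot automatically discards the old pick.
function selectAnswer(questionIdx, optionIdx) {
  answers[questionIdx] = optionIdx;
}

// Rebuild totals from the answers array; scoresFor(q, o) returns the
// score deltas for option o of question q.
function recompute(scoresFor) {
  const totals = { cursor: 0, copilot: 0, claude: 0 };
  answers.forEach((optIdx, qIdx) => {
    const delta = scoresFor(qIdx, optIdx) || {};
    for (const tool in delta) totals[tool] += delta[tool];
  });
  return totals;
}
```

Recomputing eight questions is instant, and it makes the back button impossible to get wrong.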

Results page shows before all questions answered

Tell the AI: "The results page shows up before I've answered all 8 questions. Only show results after question 8 is answered — check that the answers array has exactly 8 entries."
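The guard that fix amounts to is small enough to verify by eye — results should only render once every question has a recorded answer (a sketch; `quizComplete` is an illustrative name):

```javascript
// True only when all 8 questions have an actual answer recorded —
// length alone isn't enough if skipping leaves gaps in the array.
function quizComplete(answers, totalQuestions = 8) {
  return answers.length === totalQuestions &&
         answers.every((a) => a !== undefined && a !== null);
}
```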

4. Add Tool Comparison Cards

Make the comparison cards more detailed and visually distinct so users can easily compare the three tools side by side:

Enhancement prompt

Enhance the tool comparison cards on the results page:

- Give each tool a distinct accent color: Cursor = blue, Copilot = gray/white, Claude Code = orange
- Add a "Best For" label on each card (e.g., Cursor: "Best for visual IDE users", Copilot: "Best for inline autocomplete", Claude Code: "Best for terminal and large projects")
- Include these details on each card:
- Cursor: Built on VS Code, supports multi-file editing, has a Composer feature for large changes, free tier available, $20/mo for Pro
- GitHub Copilot: Works in any IDE, fastest autocomplete, great for line-by-line suggestions, free for students, $10/mo individual
- Claude Code: Terminal-based, understands entire codebases, can run commands and edit files, pay-per-use via API, best for complex refactors
- Highlight the winning tool's card with a glowing border and a "Recommended for you" badge
- Sort the cards so the recommended tool appears first

Keep the dark theme and existing quiz flow.

The comparison cards serve double duty: they validate the quiz recommendation and educate users about tools they might not have heard of. Even if someone disagrees with the result, they leave knowing what each tool does.
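The "recommended tool first" sorting from the prompt is a one-liner worth checking in the generated code. A sketch (`sortCards` and the card shape are illustrative): `Array.prototype.sort` is stable in modern engines, so the non-winning cards keep their original order.

```javascript
// Put the recommended tool's card first without mutating the original
// array; the remaining cards stay in their existing order.
function sortCards(cards, winnerId) {
  return [...cards].sort(
    (a, b) => (b.id === winnerId ? 1 : 0) - (a.id === winnerId ? 1 : 0)
  );
}
```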

5. Add Share and Restart Features

Finally, add share functionality and the ability to retake the quiz. These small features make the tool feel complete and encourage users to share it:

Polish prompt

Add share and restart features to the quiz results page:

- A "Retake Quiz" button that resets all scores to zero, clears all stored answers, and goes back to the welcome screen
- A "Copy Result" button that copies text like "I got Cursor as my recommended AI coding tool! Take the quiz:" followed by a placeholder URL
- A "Share on Twitter" button that opens a pre-filled tweet in a new tab
- Add smooth transitions between quiz states: fade the current question out and the next one in
- Add a subtle animation on the results page — maybe the score bars animate from 0 to their final percentage
- On the welcome screen, add a note: "Takes about 2 minutes. 8 quick questions."

Keep everything in the same single HTML file with the dark theme.
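For reference, the two share buttons map onto standard browser features: the Clipboard API for "Copy Result" and a Twitter intent URL for the tweet button. A sketch (the quiz URL below is a placeholder, and `navigator.clipboard` only works in secure HTTPS contexts):

```javascript
// Placeholder — swap in your real quiz URL once it's hosted somewhere.
const QUIZ_URL = "https://example.com/quiz";

// Build the shareable sentence for the winning tool.
function shareText(winnerName) {
  return `I got ${winnerName} as my recommended AI coding tool! Take the quiz: ${QUIZ_URL}`;
}

// "Copy Result": write the text to the clipboard (requires HTTPS).
async function copyResult(winnerName) {
  await navigator.clipboard.writeText(shareText(winnerName));
}

// "Share on Twitter": a pre-filled tweet via the web intent URL,
// opened with target="_blank" or window.open in the click handler.
function tweetUrl(winnerName) {
  return "https://twitter.com/intent/tweet?text=" +
         encodeURIComponent(shareText(winnerName));
}
```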

What You Learned

This project teaches more than just quiz building. By completing it, you practiced:

Quiz Logic and Scoring Systems

You learned how weighted scoring works — each answer contributes points, and the highest score determines the recommendation.

Multi-Step Form UIs

You built a wizard-style interface with progress tracking, back navigation, and state management across multiple screens.

Data-Driven Recommendations

You created a system where data (user answers) drives a personalized output — the foundation of recommendation engines.

AI Coding Tool Differences

You now understand the strengths and tradeoffs of Cursor, Copilot, and Claude Code — knowledge that helps you pick the right tool for each job.

Tips for Working with AI on This Project

1. Build the UI first, scoring second. Get the question flow and progress bar working before worrying about the scoring logic. It's easier to debug when you can see what's happening.

2. Test the back button carefully. The trickiest part of any quiz is going back and changing answers without corrupting the score. Test this multiple times.

3. Keep question data separate from UI code. Ask the AI to store questions in a JavaScript array of objects — this makes it easy to add, edit, or reorder questions without touching the UI code.

4. Verify with manual math. Pick specific answers, calculate the expected score by hand, then take the quiz with those exact answers. If the results don't match, tell the AI which calculation is wrong.

Ready for your next project?

Explore more hands-on projects, or check out the tutorials for deeper dives into specific AI tools.