Prompting Beginner Project 2 of 11

Build a Prompt Scoring Tool with AI

Use Claude or ChatGPT to build a tool that analyzes and scores AI prompts across five quality criteria. You'll learn what makes a great prompt while building the tool that teaches it.

What You'll Build

In this project, you'll use an AI assistant (Claude, ChatGPT, or any tool you prefer) to create a browser-based prompt scoring tool. The finished tool will let you:

  • Paste any AI prompt and get an instant quality score from 0 to 100
  • See individual scores across five criteria: Clarity, Specificity, Context, Constraints, and Output Format
  • Get actionable tips on how to improve weak areas of your prompt
  • Compare a "before" and "after" version of a prompt side by side
  • View score breakdowns with visual progress bars that update in real time

What You'll Need

An AI Chat Tool

Claude, ChatGPT, Gemini, or any AI assistant that can generate code. Free tiers work fine.

A Text Editor

VS Code, Notepad, TextEdit — anything that lets you save an .html file.

No coding experience needed

This project is designed for complete beginners. The AI does the heavy coding — you learn how to guide it.

Step 1: Define the Scoring Criteria

Open your AI tool and start a new conversation. The key to this project is defining what "good" means for a prompt. You'll tell the AI exactly which criteria to score on and how to evaluate each one. Here's a prompt that works well:

Prompt to copy

Build me a single HTML file with a prompt scoring tool. The user pastes an AI prompt into a textarea, clicks "Analyze", and gets scored on 5 criteria:

1. Clarity (0-20): Does the prompt clearly state what it wants? Check for a direct request, absence of vague words like "something" or "stuff", and simple sentence structure.
2. Specificity (0-20): Does it include specific details? Check for numbers, named technologies, concrete examples, or measurable outcomes.
3. Context (0-20): Does it provide background information? Check for role descriptions, audience mentions, or situation context.
4. Constraints (0-20): Does it set boundaries? Check for word counts, format requirements, tone instructions, or things to avoid.
5. Output Format (0-20): Does it describe the desired output? Check for mentions of format (list, paragraph, code), length, or structure.

The total score should be 0-100. Display each criterion with a label, score, and a horizontal progress bar. Color the bars green (16-20), yellow (10-15), or red (0-9). Show the total score in a large circle at the top.

Use a dark theme, Tailwind CSS via CDN, all in one file.

Notice how this prompt defines the scoring algorithm explicitly. If you just said "score prompts on quality," the AI would have to guess what quality means. By spelling out each criterion and its point range, you get predictable, useful results.
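The rule-based logic the AI will generate can be sketched in plain JavaScript. Here is a hypothetical version of the Clarity check only — the verb list, vague-word list, and point weights are illustrative assumptions, not the exact code your AI will produce:

```javascript
// Hypothetical sketch of a rule-based Clarity scorer (0-20).
// Keyword lists and point weights are illustrative assumptions.
function scoreClarity(prompt) {
  const text = prompt.trim().toLowerCase();
  if (!text) return 0;
  let score = 0;
  // Direct request: starts with a clear action verb.
  if (/^(write|create|build|explain|list|make|generate|summarize)\b/.test(text)) {
    score += 8;
  }
  // No vague filler words.
  const vague = ["something", "stuff", "things", "whatever"];
  if (!vague.some(word => text.includes(word))) {
    score += 6;
  }
  // Simple sentence structure: average sentence under ~25 words.
  const sentences = text.split(/[.!?]+/).filter(s => s.trim());
  const avgWords = sentences.length
    ? text.split(/\s+/).length / sentences.length
    : 0;
  if (avgWords > 0 && avgWords <= 25) score += 6;
  return Math.min(score, 20);
}
```

The other four criteria follow the same pattern: a handful of cheap string checks, each worth a few points, capped at 20.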

Step 2: Build the Input and Output UI

The AI will generate a working tool, but the first version might look rough. Save the code and test it in your browser. Then refine the UI with a follow-up prompt:

Prompt to copy

Improve the layout of the prompt scoring tool:

- Make the textarea large (at least 6 rows) with a placeholder that says "Paste your AI prompt here..."
- Put the "Analyze" button below the textarea, full width, with a gradient background
- Show the results section below the button. Hide it until the user clicks Analyze
- The total score circle should be centered, large (120px), with a thick colored border that matches the score level (green for 70+, yellow for 40-69, red for below 40)
- Below the circle, show each of the 5 criteria as a row: label on the left, score number on the right, and a colored progress bar filling proportionally between them
- Add a character count below the textarea that shows how many characters the prompt has
- Add a "Clear" button next to Analyze that resets the textarea and hides results

Keep everything in one file with the same dark theme.

What to look for when testing:

  • Does the results section stay hidden until you click Analyze?
  • Do the progress bars fill to the correct width for each score?
  • Does the total score circle change color based on the score?
  • Does the Clear button reset everything properly?
  • Try pasting a very short prompt like "write something" — does it score low?
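One behavior worth spot-checking by hand is the circle's color thresholds. A minimal sketch of the mapping described in the prompt above (plain color names stand in for whatever Tailwind classes the AI actually uses):

```javascript
// Hypothetical mapping from total score (0-100) to the circle's
// border color, matching the thresholds requested in the prompt:
// green for 70+, yellow for 40-69, red for below 40.
function circleColor(total) {
  if (total >= 70) return "green";
  if (total >= 40) return "yellow";
  return "red";
}
```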
Step 3: Add Real-Time Scoring and Tips

The tool is more useful if it scores as you type instead of waiting for a button click. You'll also want it to explain why each score came out the way it did and to suggest improvements. Here's how to ask for that:

Enhancement prompt

Upgrade the prompt scorer with these features:

- Score in real time as the user types (debounce the input by 300ms so it doesn't run on every keystroke)
- Below each criterion's progress bar, show a one-sentence tip when the score is below 15. For example:
  - Clarity tip: "Try starting with a clear action verb like 'Write', 'Create', 'Explain', or 'List'."
  - Specificity tip: "Add specific details — mention exact numbers, tools, technologies, or names."
  - Context tip: "Tell the AI who you are or who the audience is. Example: 'I'm a beginner learning Python.'"
  - Constraints tip: "Set boundaries — specify word count, tone (formal/casual), or things to avoid."
  - Output Format tip: "Describe what the output should look like — a list, code block, table, or paragraph."
- Show the tips in a subtle yellow/amber color so they stand out from the scores
- Keep the Analyze button but also run scoring on input change
- When all 5 criteria score 15 or above, show a green "Great prompt!" banner above the results

Same file, same dark theme, same structure.

Here are common issues and how to ask the AI to fix them:

Scoring feels random or inaccurate

Tell the AI: "The Specificity score gives 20/20 for 'write me something about dogs' which has no specifics. The scoring function needs to check for concrete details like numbers, proper nouns, or technical terms. Only give high scores when those are actually present."
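A stricter Specificity check might look like the following sketch. The regexes and point values are assumptions about how a rule-based scorer could detect concrete details; the point is that each award is gated on evidence actually being present:

```javascript
// Hypothetical stricter Specificity scorer (0-20): only award
// points when concrete details actually appear in the prompt.
function scoreSpecificity(prompt) {
  let score = 0;
  // Numbers or counts ("3 endpoints", "under 50 lines").
  if (/\d/.test(prompt)) score += 7;
  // Proper nouns or named technologies after the first character,
  // so a sentence-initial capital alone doesn't count.
  if (/\b[A-Z][a-zA-Z+#.]+\b/.test(prompt.slice(1))) score += 7;
  // A small illustrative list of technical keywords.
  const techTerms = /\b(api|html|css|python|javascript|sql|json|react)\b/i;
  if (techTerms.test(prompt)) score += 6;
  return Math.min(score, 20);
}
```

With gates like these, "write me something about dogs" earns nothing, while a prompt naming a tool, a number, and a technology earns full marks.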

Real-time scoring lags or freezes

Tell the AI: "The real-time scoring runs too frequently and makes the page laggy. Add a debounce of 300ms using setTimeout and clearTimeout so it only scores after the user stops typing."
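If you're curious what that fix looks like, a debounce helper is only a few lines. This is a generic sketch; `analyzePrompt` is a hypothetical name for whatever scoring function your generated tool uses:

```javascript
// Minimal debounce helper: delays `fn` until `ms` of inactivity.
// Each new call cancels the previously scheduled one.
function debounce(fn, ms) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);                      // cancel the pending call
    timer = setTimeout(() => fn(...args), ms); // reschedule
  };
}

// Usage sketch: re-score at most once per 300ms pause in typing.
// textarea.addEventListener("input", debounce(analyzePrompt, 300));
```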

Progress bars don't animate

Tell the AI: "The progress bars jump to their final width instantly. Add a CSS transition on the width property with a 0.3s ease duration so they animate smoothly when the score changes."
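For reference, that fix usually amounts to a single CSS rule. This sketch assumes the bar's fill element has a class like `.bar-fill` — the class name in your generated file will likely differ:

```css
/* Animate width changes over 0.3s; .bar-fill is a hypothetical class */
.bar-fill {
  transition: width 0.3s ease;
}
```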

Step 4: Add Before/After Comparison

One of the best ways to learn prompting is to see the difference between a weak prompt and a strong one side by side. Ask the AI to add a comparison mode:

Enhancement prompt

Add a "Compare Mode" to the prompt scoring tool:

- Add a toggle button at the top that switches between "Single" and "Compare" mode
- In Compare mode, show two textareas side by side (or stacked on mobile): one labeled "Before" and one labeled "After"
- Both get scored independently using the same 5 criteria
- Show the results for each below their respective textarea, with the total scores displayed prominently
- Between the two result sections, show the score difference with an arrow (e.g., "42 → 87 (+45)") colored green if the After score is higher
- Add 3 pre-loaded example pairs that users can click to see a before/after comparison:
  1. "Write about dogs" vs. a detailed version with specificity and constraints
  2. "Make me a website" vs. a version with tech stack, layout, and design details
  3. "Explain AI" vs. a version with audience context, format, and depth constraints

Keep everything in the same file. Same dark theme.

The compare mode is a powerful teaching tool. It lets users see exactly how adding specificity, context, and constraints changes a score — making the abstract concept of "prompt quality" concrete and measurable.
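The score-difference label is a small pure function, which makes it an easy place to verify the AI's work. A sketch of how it might be built (the function name is illustrative):

```javascript
// Hypothetical formatter for the compare-mode score difference,
// e.g. "42 → 87 (+45)". A negative delta carries its own minus sign.
function formatDiff(before, after) {
  const delta = after - before;
  const sign = delta >= 0 ? "+" : "";
  return `${before} → ${after} (${sign}${delta})`;
}
```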

Step 5: Polish and Add Suggestions

Once the core features work, make the tool look professional and add an automatic improvement suggestion feature:

Design prompt

Add polish and a suggestion engine to the prompt scorer:

- Add a "Suggestions" section below the scores that generates 2-3 specific rewrite suggestions based on the lowest-scoring criteria. These should be full sentence suggestions the user could add to their prompt, not generic advice. For example, if Context scores low, suggest: "Try adding: 'I'm writing this for a technical blog aimed at junior developers.'"
- Animate the score circle — it should count up from 0 to the final score over 0.5 seconds
- Add smooth fade-in animations on the results section when it appears
- Show a small emoji next to each criterion: a green checkmark for 15+, a yellow warning for 10-14, a red X for below 10
- Add a "Copy improved prompt" button that copies the original prompt plus all the suggestions appended at the end
- The progress bars should animate their width from 0 to the score value

Same file, same dark theme.
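Under the hood, a simple suggestion engine just ranks the criteria by score and returns canned advice for the weakest ones. A hypothetical sketch, with placeholder suggestion text your generated version will replace:

```javascript
// Illustrative suggestion text keyed by criterion name.
const SUGGESTIONS = {
  "Clarity": "Try starting with: 'Write a step-by-step guide that...'",
  "Specificity": "Try adding: 'Use Python 3 and keep it under 50 lines.'",
  "Context": "Try adding: 'I'm writing this for a technical blog aimed at junior developers.'",
  "Constraints": "Try adding: 'Keep the tone casual and avoid jargon.'",
  "Output Format": "Try adding: 'Format the answer as a numbered list.'"
};

// Return suggestions for the `max` lowest-scoring criteria.
function suggest(scores, max = 3) {
  return Object.entries(scores)
    .sort(([, a], [, b]) => a - b)   // weakest criteria first
    .slice(0, max)
    .map(([name]) => SUGGESTIONS[name]);
}
```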

What You Learned

This project teaches more than just building a tool. By completing it, you practiced:

Prompt Quality Criteria

You now understand the five pillars of a strong prompt — clarity, specificity, context, constraints, and output format — and can evaluate any prompt against them.

Building Analysis Tools with AI

You learned how to describe scoring algorithms in plain English and have an AI implement the logic, including keyword matching and rule-based evaluation.

Real-Time DOM Updates

You guided the AI to build debounced input handling, dynamic progress bars, and animated score displays — core patterns for any interactive web tool.

Prompt Engineering Fundamentals

By building a tool that teaches prompting, you internalized the principles yourself. You can now write better prompts for any AI tool you use.

Tips for Working with AI on This Project

1. Test with real prompts. Try pasting actual prompts you've used with ChatGPT or Claude. If the tool gives a bad prompt a high score, tell the AI to tighten the scoring logic.

2. Paste error messages. If something breaks, open the browser console (F12) and paste the error into the chat. The AI can usually fix it immediately.

3. Calibrate the scoring. After the first version works, spend time testing edge cases. A one-word prompt should score near 0. A detailed, well-structured prompt should score near 100. If the range feels off, tell the AI to adjust.

4. Don't worry about understanding every line. Focus on the behavior — what it does — not every syntax detail. Understanding comes with practice.

Ready for your next project?

Explore more hands-on projects, or check out the tutorials for deeper dives into specific AI tools.