Percept

Percept helps people with disabilities and neurodivergent users get clearer answers from AI. It reviews your prompt, flags spots that may confuse the model, and suggests small edits to improve clarity and tone. You can choose a profile such as ADHD, screen reader, low vision, dyslexia or motor disability to see guidance that fits real use. The aim is simple: make the tech work for you, not against you. Built with plain HTML, CSS and JavaScript, Percept runs entirely in the browser with no dependencies.

About This Project

Percept lets you test the tone of a written input (like a UX message or help text) against various reader profiles. It’s designed to help developers and content designers better understand how language lands across different use cases — from ADHD profiles to executive stakeholders.

The goal: create more inclusive, intentional communication — especially for AI-generated or templated content where tone drift can occur.

How It Was Built

Built with vanilla HTML, CSS and JavaScript, Percept uses localStorage to remember session state. It features profile selectors, toggleable tone preview, and session recovery. Data is pulled from static JSON files, which represent different tone perspectives.
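The session-recovery piece could look something like the sketch below. The key name ("percept-session") and the session fields (profileId, inputText) are assumptions for illustration; the real app's names may differ. An in-memory fallback is included so the functions also run outside a browser.

```javascript
// Hypothetical storage key; the app's actual key may differ.
const SESSION_KEY = "percept-session";

// Use localStorage in the browser; fall back to an in-memory store elsewhere.
const store = typeof localStorage !== "undefined" ? localStorage : (() => {
  const data = {};
  return {
    getItem: (k) => (k in data ? data[k] : null),
    setItem: (k, v) => { data[k] = String(v); },
    removeItem: (k) => { delete data[k]; },
  };
})();

// Persist the current session (e.g. selected profile and draft text).
function saveSession(session) {
  store.setItem(SESSION_KEY, JSON.stringify(session));
}

// Restore a previous session, discarding corrupt state rather than crashing.
function restoreSession() {
  const raw = store.getItem(SESSION_KEY);
  if (raw === null) return null;
  try {
    return JSON.parse(raw);
  } catch {
    store.removeItem(SESSION_KEY);
    return null;
  }
}
```

Calling saveSession on every input change and restoreSession on page load is enough to survive an accidental tab close, which matters for users who type slowly or with effort.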

The project also served as practice in DOM manipulation, GitHub Pages deployment and UI clarity.

Tech Stack: HTML5, CSS3, JavaScript, JSON, GitHub Pages

FAQ

Q: Is this an AI tool?
A: No. It’s a static frontend app meant to help humans preview tone using pre-written feedback profiles.

Q: Can I add my own tone or persona?
A: Not yet, but the JSON structure makes it easy to extend.
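A new profile might look something like the sketch below. The field names (id, label, feedback) are a guess at a plausible shape, not the repo's actual schema; check the static JSON files for the real structure before extending them.

```javascript
// Hypothetical custom profile entry; the real JSON schema may differ.
const customProfile = {
  id: "plain-language",
  label: "Plain Language",
  description: "Flags jargon and long sentences.",
  feedback: [
    { trigger: "passive voice", suggestion: "Prefer active voice." },
    { trigger: "jargon", suggestion: "Swap in an everyday word." },
  ],
};

// Merge a new profile into the list loaded from the static JSON files,
// rejecting duplicate ids so existing profiles are never silently replaced.
function addProfile(profiles, profile) {
  if (profiles.some((p) => p.id === profile.id)) {
    throw new Error(`duplicate profile id: ${profile.id}`);
  }
  return [...profiles, profile];
}
```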