Everything you need to know about MuggsOfSurveys
MuggsOfSurveys is a survey design coach for the Data Science for Public Good curriculum. Students (called "builders") learn to design real survey instruments—the same kind used in professional research. Teachers use AI-powered assessment and peer feedback workflows to coach them.
The platform covers the full lifecycle: writing an elevator pitch, building survey items, getting AI and teacher feedback, pooling items into group surveys, collecting responses, and analyzing results.
Enter your username and 6-digit PIN on the login page. Your username format is typically the first 3 letters of your last name, a period, then the first 3 letters of your first name (e.g., mug.sea for Sean Muggivan).
Your teacher will provide your username and PIN at the start of class.
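As an illustration only, here is how that naming convention might be expressed (a hypothetical helper, not part of the platform):

```typescript
// Hypothetical illustration of the username convention described above,
// not the platform's actual code.
function buildUsername(firstName: string, lastName: string): string {
  const stem = (name: string): string => name.toLowerCase().slice(0, 3);
  return `${stem(lastName)}.${stem(firstName)}`;
}

buildUsername("Sean", "Muggivan"); // => "mug.sea"
```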
Yes. The login page has two demo options.
MuggsOfSurveys is browser-native and works on any modern browser—Chrome, Firefox, Safari, or Edge. It runs on Chromebooks, laptops, tablets, and phones. No installation or extensions required.
Yes. Click the ES button in the top-right corner of the dashboard to switch to Spanish. Click EN to switch back. Your preference is remembered between sessions.
It's a course where students learn the data science process by doing real research on topics that matter to their community. The process runs from survey design and data collection through spreadsheet analysis and visualization.
Surveys is the first tool in the sequence. The data students collect here flows into the spreadsheet and visualization tools later in the semester.
An elevator pitch is a short (~200-word) statement describing what you're investigating and why. Think of it as your research purpose statement.
You must write a pitch before adding your first survey item to a project. This is by design—knowing your research focus helps you write better items. The AI also uses your pitch to evaluate whether each item is relevant to your stated research goal.
A clear pitch (rated 3/3 by the AI) is required for any item to earn the highest score. Without it, the maximum item score is capped at 2.
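A minimal sketch of that cap rule (hypothetical helper, not the platform's actual code):

```typescript
// Without a 3/3 pitch, an item's score is capped at 2 (the rule above).
// Hypothetical helper for illustration only.
function cappedItemScore(rawScore: number, pitchRating: number): number {
  const maxScore = pitchRating >= 3 ? 3 : 2;
  return Math.min(rawScore, maxScore);
}

cappedItemScore(3, 2); // => 2: a strong item still capped by a weak pitch
```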
Every survey item must be classified into one of four data categories: demographic (background characteristics like grade level), behavioral (what respondents actually do or have done), cognitive (what they know, i.e., questions with a correct answer), and psychographic (what they think, feel, or would do).
A common mistake: labeling opinions as cognitive (there's no "correct" opinion) or labeling hypotheticals ("Would you...") as behavioral (that's psychographic—you're asking what they think they'd do).
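One way to picture the classification, as a sketch with made-up example items:

```typescript
// The four data categories as a type; the example items echo the
// common mistakes above. Illustrative only.
type DataCategory = "demographic" | "behavioral" | "cognitive" | "psychographic";

const examples: Record<string, DataCategory> = {
  "What grade are you in?": "demographic",
  "How many hours did you study last week?": "behavioral", // what they actually did
  "Would you report bullying if you saw it?": "psychographic", // hypothetical, not behavioral
  "What percentage of students ride the bus?": "cognitive", // has a correct answer
};
```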
Eight question types are available, including Likert scales, matrix questions, and checklists.
Each type renders differently in the survey preview and gets type-specific AI feedback about option design.
Draft means you're still working on the item. The AI acknowledges this in its feedback—it might say "You were right to mark this as a draft" or "This is actually close to ready!"
Final (submitted) means you consider the item done. Final items are eligible for the survey pool system, where they can become part of a real survey that classmates respond to.
Both draft and final items are assessed by the AI. You can switch between draft and final at any time.
Each item can have a variable name—a short, machine-readable label like `school_safety_concern` or `grade_level`. These become column headers when survey data is exported to CSV.
Variable names carry through the entire pipeline: from your item, through the survey pool, into the final data export. Professional survey tools (SPSS, R, Excel) use variable names the same way.
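Here is a sketch of how variable names could become CSV headers, assuming a minimal item shape (not the platform's real schema):

```typescript
// Variable names become the header row of the exported CSV.
// The item shape is an assumption made for this sketch.
interface SurveyItem {
  variableName: string;
  prompt: string;
}

function csvHeaderRow(items: SurveyItem[]): string {
  return items.map((item) => item.variableName).join(",");
}

csvHeaderRow([
  { variableName: "grade_level", prompt: "What grade are you in?" },
  { variableName: "school_safety_concern", prompt: "How safe do you feel at school?" },
]); // => "grade_level,school_safety_concern"
```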
Each item has an optional notes field where you can explain your reasoning—why you chose this category, what you're trying to measure, or design considerations.
Notes are not visible to survey respondents, but the AI reads them when assessing your items. This gives the AI context about your intent, which is especially helpful when your reasoning differs from how the item reads on the surface.
Click the "Preview Survey" button in the header. This opens a live preview showing exactly how respondents will see your survey—Likert scales, matrices, checklists, and all other question types render in their actual format. Use it to catch issues before sharing.
Each item is scored on a 0–3 rubric:
| Score | Label | Meaning |
|---|---|---|
| 0 | Not a survey item yet | Needs fundamental rework—unclear stem, wrong format, or not really a survey question. |
| 1 | Keep working at it | Has potential but needs significant improvement to options, wording, or classification. |
| 2 | Getting there | Solid work. Minor polish needed—this is where most good items land. |
| 3 | Survey ready | Clean, well-formed, correct category, relevant to pitch, and ready for respondents. |
To earn a 3, you need: correct data category, clean writing, good response options, and a clear elevator pitch with a relevant connection to the item.
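That checklist could be sketched like this (field names are illustrative, not the platform's data model):

```typescript
// All four conditions from the rubric must hold to earn a 3.
// Hypothetical shape, not the platform's data model.
interface ItemChecklist {
  correctCategory: boolean;
  cleanWriting: boolean;
  goodResponseOptions: boolean;
  clearRelevantPitch: boolean;
}

const earnsThree = (c: ItemChecklist): boolean =>
  c.correctCategory && c.cleanWriting && c.goodResponseOptions && c.clearRelevantPitch;
```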
For each item, the AI provides a 0–3 rubric score and written feedback.
Your teacher reviews and may edit the AI feedback before you see it. If your teacher overrode the AI score or feedback, you'll see a teacher icon next to it.
Beyond individual items, the AI also evaluates your entire survey holistically, assessing how well the items work together as a single instrument.
When your teacher releases project feedback, a gold "Project Feedback" button appears in your dashboard header.
Feedback follows a two-phase release process: the AI generates an assessment, then your teacher reviews it (editing where needed) and releases it to you.
You won't see any feedback until your teacher releases it. This ensures you always get reviewed, quality feedback—never raw, unreviewed AI output.
When you take a group survey (through the pool system), you write an interpretation for each item—what you think the data means. These interpretations are then assessed by the AI on a 0–3 rubric:
| Score | Label |
|---|---|
| 0 | No understanding |
| 1 | Surface-level |
| 2 | Developing |
| 3 | Strong understanding |
Each interpretation is scored on three sub-dimensions: data understanding, insight quality, and reasoning quality. Your teacher reviews and may edit the AI's evaluation before releasing it to you.
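As an illustration, one possible shape for a single interpretation assessment (names taken from the text above; the scales and fields are assumptions):

```typescript
// Illustrative shape only, not the platform's schema. The overall score
// is the 0-3 rubric above; the sub-dimension scales are an assumption.
interface InterpretationAssessment {
  overall: 0 | 1 | 2 | 3;
  dataUnderstanding: number;
  insightQuality: number;
  reasoningQuality: number;
  teacherEdited: boolean; // the teacher may revise before release
}
```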
The survey pool system turns individual student items into real, launchable surveys: students share their best final items into a group pool, the teacher curates the pool into a survey, and another group responds to it.
This means the items you build actually get used in real data collection—your classmates respond to your survey.
Your teacher creates survey sessions tied to a project name (e.g., "DSFP"). Within a session, students are auto-assigned to groups labeled A through J (or more). Each group has 3–6 members.
Groups are linked in a circular chain: Group A responds to Group B's survey, Group B responds to Group C's, and the last group responds to Group A's. This way, everyone both creates and takes a survey.
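The circular assignment amounts to a simple rotation. A sketch (hypothetical helper):

```typescript
// Each group responds to the next group's survey; the last wraps
// around to the first.
function assignSurveyTargets(groups: string[]): Map<string, string> {
  const targets = new Map<string, string>();
  groups.forEach((group, i) => {
    targets.set(group, groups[(i + 1) % groups.length]);
  });
  return targets;
}

assignSurveyTargets(["A", "B", "C", "D"]); // A→B, B→C, C→D, D→A
```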
When you share, all of your final items with a score of 2 or higher are added to your group's pool. Items scoring 0 or 1, and draft items, are not included.
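A sketch of that eligibility filter, assuming a minimal item shape:

```typescript
// Only final items scoring 2 or higher enter the group pool; drafts
// and low-scoring items are excluded. Item shape is assumed.
interface PoolCandidate {
  status: "draft" | "final";
  score: 0 | 1 | 2 | 3;
}

const poolEligible = (items: PoolCandidate[]): PoolCandidate[] =>
  items.filter((item) => item.status === "final" && item.score >= 2);
```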
Your teacher can then review the pool, edit item text, reorder items, assign them to sections, and run AI redundancy checks to remove duplicates across group members.
Once your assigned group's survey is published, it appears in your "Surveys Waiting for My Response" sidebar section. The survey is divided into sections: the fixed demographics section comes first, followed by the item sections your teacher assembled from the group pool.
After completing each section, there's an 18-hour cooldown before you can start the next one. This is by design—it spaces out the survey-taking experience so you reflect between sections rather than rushing through everything at once.
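A minimal sketch of the cooldown check, assuming the previous section's completion time is stored:

```typescript
// 18-hour cooldown between survey sections, as described above.
const COOLDOWN_MS = 18 * 60 * 60 * 1000;

function canStartNextSection(lastSectionCompletedAt: Date, now: Date = new Date()): boolean {
  return now.getTime() - lastSectionCompletedAt.getTime() >= COOLDOWN_MS;
}
```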
For each item, you provide your response and write an interpretation explaining what you think the data means.
Yes. Once your group's survey is published and responses come in, you can view the responses your items have collected. These appear in the "Surveys With My Items" section of your sidebar.
Separate from the pool system, your teacher can also assign any published student project as a survey for the whole class. These appear in your "Surveys Waiting for My Response" sidebar alongside group surveys.
This is used for things like a class-wide demographics survey or when a single student's project is strong enough to deploy to everyone.
Log in with a teacher account. In the student dashboard, click your username in the top-right corner and select "Teacher Portal" from the dropdown menu. Or navigate directly to teacher.html.
The teacher portal is only accessible to accounts with the teacher or admin role.
This is the core teacher workflow: run the AI assessment, review each item's score and feedback, edit or override where needed, then release the results to students.
Students never see raw AI output. They only see feedback after you release it. This gives you full control over what students receive.
Color coding helps track progress: gold border = needs review, green border = confirmed/released, orange border = partially reviewed.
Every time you override an AI assessment, your correction is saved as a training example. On future AI runs, up to 8 of these examples are injected into the AI prompt as few-shot demonstrations, teaching it what a good assessment looks like.
Over time, the AI calibrates to your standards. You can view all saved training examples from the "Training Examples" link in the teacher portal sidebar.
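A sketch of that few-shot injection under assumed names and prompt formatting (the real format isn't documented here):

```typescript
// Up to 8 saved teacher corrections are folded into the AI prompt as
// few-shot examples. Shapes and prompt formatting are assumptions.
interface TrainingExample {
  itemText: string;
  teacherScore: number;
  teacherFeedback: string;
}

function buildAssessmentPrompt(
  systemPrompt: string,
  examples: TrainingExample[],
  itemText: string,
): string {
  const shots = examples
    .slice(0, 8)
    .map((ex) => `Item: ${ex.itemText}\nScore: ${ex.teacherScore}\nFeedback: ${ex.teacherFeedback}`)
    .join("\n\n");
  return [systemPrompt, shots, `Item: ${itemText}`].filter(Boolean).join("\n\n");
}
```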
Yes. The AI Prompt Editor (accessible from the teacher portal sidebar) lets you view, edit, and reset the AI system prompt. You can adjust rubric emphasis, feedback tone, or add domain-specific guidance. Changes take effect on the next AI run. You can reset to the default prompt at any time.
As a teacher, you create survey sessions, review and curate each group's item pool, publish the group surveys, and monitor student progress.
You can also view a grade impact report showing each student's contribution to their group pool, and a completion status grid tracking who has finished which sections.
Remote In lets you see any student's dashboard exactly as they see it—read-only. Click "Remote In" in the top bar, search for a student, and their dashboard opens in a new tab with a blue "Viewing as [student] — Read Only" banner.
This is useful for troubleshooting ("I can't see my feedback") or understanding a student's perspective without needing their login.
Several export options exist, including CSV export of survey response data.
All exports preserve variable names from the original items.
Demographic items are teacher-authored questions that appear as a fixed first section on every group survey. They include things like grade level, gender, and period—standard background variables needed for data analysis.
You can manage demographic items from the teacher portal: create, edit, reorder, or seed a starter set. Students see these as a read-only "Course Demographics Survey" section.