Frequently Asked Questions

Everything you need to know about MuggsOfSurveys

Getting Started

MuggsOfSurveys is a survey design coach for the Data Science for Public Good curriculum. Students (called "builders") learn to design real survey instruments—the same kind used in professional research. Teachers use AI-powered assessment and peer feedback workflows to coach them.

The platform covers the full lifecycle: writing an elevator pitch, building survey items, getting AI and teacher feedback, pooling items into group surveys, collecting responses, and analyzing results.

Enter your username and 6-digit PIN on the login page. Your username format is typically the first 3 letters of your last name, a period, then the first 3 letters of your first name (e.g., mug.sea for Sean Muggivan).
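As a sketch, the username convention above can be expressed as a small function (makeUsername is a hypothetical helper for illustration, not part of the platform):

```typescript
// Hypothetical helper illustrating the username convention:
// first 3 letters of the last name, a period, first 3 letters of the first name.
function makeUsername(firstName: string, lastName: string): string {
  return `${lastName.slice(0, 3).toLowerCase()}.${firstName.slice(0, 3).toLowerCase()}`;
}

// Sean Muggivan -> "mug.sea"
```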

Your teacher will provide your username and PIN at the start of class.

Yes. The login page has two demo options:

  • Try Demo — a student demo that uses your browser's local storage. You can build items and explore, but nothing saves to the server.
  • Teacher Demo — a read-only teacher portal loaded with mock data showing 8 student projects, AI assessments, survey sessions, and more. Write actions show a purple "Demo mode" toast instead of saving.

MuggsOfSurveys is browser-native and works on any modern browser—Chrome, Firefox, Safari, or Edge. It runs on Chromebooks, laptops, tablets, and phones. No installation or extensions required.

Yes. Click the ES button in the top-right corner of the dashboard to switch to Spanish. Click EN to switch back. Your preference is remembered between sessions.

It's a course where students learn the data science process by doing real research on topics that matter to their community. The process follows these steps:

  1. Ask Questions — identify a research focus
  2. Gather Data — design and deploy surveys (that's where MuggsOfSurveys fits in)
  3. Analyze, Model, Synthesize — work with the data in a spreadsheet tool
  4. Communicate — create visualizations and write a final paper

MuggsOfSurveys is the first tool in the sequence. The data students collect here flows into the spreadsheet and visualization tools later in the semester.

Building Surveys

An elevator pitch is a short (~200-word) statement describing what you're investigating and why. Think of it as your research purpose statement.

You must write a pitch before adding your first survey item to a project. This is by design—knowing your research focus helps you write better items. The AI also uses your pitch to evaluate whether each item is relevant to your stated research goal.

A clear pitch (rated 3/3 by the AI) is required for any item to earn the highest score. Without it, the maximum item score is capped at 2.
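The capping rule can be sketched in a few lines (effectiveItemScore and its parameters are illustrative names, not the platform's actual API):

```typescript
// Sketch of the capping rule: without a 3/3 pitch,
// an item's effective score is capped at 2.
function effectiveItemScore(rawItemScore: number, pitchScore: number): number {
  const PITCH_MAX = 3;
  return pitchScore < PITCH_MAX ? Math.min(rawItemScore, 2) : rawItemScore;
}
```

For example, an otherwise-perfect item under a 2/3 pitch lands at 2; the same item with a 3/3 pitch earns its full 3.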

Every survey item must be classified into one of four data categories:

  • Demographic — Objective, verifiable facts about who someone is. (Grade, age, gender, neighborhood.) There's a factual answer.
  • Cognitive — Tests what someone knows. There is a correct answer. ("Which of these is the emergency number for your school?")
  • Psychographic — Measures what someone thinks, feels, or believes. There is no right or wrong answer. ("How safe do you feel walking to school?")
  • Behavioral — Captures what someone does or has done. Look for an action verb. ("How many times have you reported a safety concern?")

Two common mistakes: labeling opinions as cognitive (there's no "correct" opinion) and labeling hypotheticals ("Would you...") as behavioral (that's psychographic—you're asking what they think they'd do).
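These two pitfalls can be sketched as a lint-style check (classificationWarning and its heuristics are hypothetical; the real AI assessment is far more nuanced):

```typescript
type DataCategory = "demographic" | "cognitive" | "psychographic" | "behavioral";

// Illustrative heuristic flagging the two misclassifications described above.
function classificationWarning(stem: string, chosen: DataCategory): string | null {
  const s = stem.toLowerCase();
  if (chosen === "behavioral" && s.startsWith("would you")) {
    return "Hypotheticals ask what someone thinks they'd do: psychographic.";
  }
  if (chosen === "cognitive" && /\b(feel|think|believe|opinion)\b/.test(s)) {
    return "Opinions have no correct answer: likely psychographic, not cognitive.";
  }
  return null;
}
```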

Eight question types are available:

  • Multiple Choice — Pick one option from a list
  • Checklist — Select all that apply
  • Likert — Agreement scale (Strongly Disagree to Strongly Agree)
  • Numerical — A number value (age, count, rating)
  • Short Answer — Brief open-ended text
  • Long Answer — Extended open-ended text
  • Ranking — Put options in order of preference (drag-and-drop)
  • Matrix — Multiple related questions with the same response scale

Each type renders differently in the survey preview and gets type-specific AI feedback about option design.

Draft means you're still working on the item. The AI acknowledges this in its feedback—it might say "You were right to mark this as a draft" or "This is actually close to ready!"

Final (submitted) means you consider the item done. Final items are eligible for the survey pool system, where they can become part of a real survey that classmates respond to.

Both draft and final items are assessed by the AI. You can switch between draft and final at any time.

Each item can have a variable name—a short, machine-readable label like school_safety_concern or grade_level. These become column headers when survey data is exported to CSV.

Variable names carry through the entire pipeline: from your item, through the survey pool, into the final data export. Professional survey tools (SPSS, R, Excel) use variable names the same way.
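How variable names become column headers can be sketched as follows (the item shape and the respondent_id column are assumptions for illustration, not the platform's actual export schema):

```typescript
// Assumed, simplified item shape.
interface SurveyItem {
  variableName: string;
  questionText: string;
}

// Variable names become the CSV header row on export.
function csvHeader(items: SurveyItem[]): string {
  return ["respondent_id", ...items.map((item) => item.variableName)].join(",");
}
```

Two items with variable names school_safety_concern and grade_level would yield the header `respondent_id,school_safety_concern,grade_level`.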

Each item has an optional notes field where you can explain your reasoning—why you chose this category, what you're trying to measure, or design considerations.

Notes are not visible to survey respondents, but the AI reads them when assessing your items. This gives the AI context about your intent, which is especially helpful when your reasoning differs from how the item reads on the surface.

Click the "Preview Survey" button in the header. This opens a live preview showing exactly how respondents will see your survey—Likert scales, matrices, checklists, and all other question types render in their actual format. Use it to catch issues before sharing.

AI Assessment & Feedback

Each item is scored on a 0–3 rubric:

Score | Label                 | Meaning
0     | Not a survey item yet | Needs fundamental rework—unclear stem, wrong format, or not really a survey question.
1     | Keep working at it    | Has potential but needs significant improvement to options, wording, or classification.
2     | Getting there         | Solid work. Minor polish needed—this is where most good items land.
3     | Survey ready          | Clean, well-formed, correct category, relevant to pitch, and ready for respondents.

To earn a 3, you need: correct data category, clean writing, good response options, and a clear elevator pitch with a relevant connection to the item.

For each item, the AI provides:

  • Classification status — whether your data category label is correct or misclassified
  • Score (0–3) with a label
  • What's working — specific praise for what you did well
  • To strengthen — concrete suggestions for improvement
  • Relevance — how the item connects to your elevator pitch
  • Draft note — if the item is marked as draft, the AI comments on that

Your teacher reviews and may edit the AI feedback before you see it. If your teacher overrode the AI score or feedback, you'll see a teacher icon next to it.

Beyond individual items, the AI also evaluates your entire survey as an instrument. This holistic assessment covers:

  • Survey coherence — do items flow logically?
  • Pitch alignment — does the survey investigate what your pitch says?
  • Category coverage — balance across demographic, cognitive, psychographic, and behavioral
  • Gaps — what's missing that a researcher would need
  • Redundancies — items that overlap or ask the same thing
  • Top strength and top priority improvement

When your teacher releases project feedback, a gold "Project Feedback" button appears in your dashboard header.

Feedback follows a staged release process:

  1. The AI runs its assessment
  2. Your teacher reviews each item and either confirms the AI or writes an override
  3. Once all items are confirmed, the teacher releases feedback to you

You won't see any feedback until your teacher releases it. This ensures you always get reviewed, quality feedback—never raw, unreviewed AI output.
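The release gate can be sketched as a simple check (field names are assumptions, not the platform's actual data model):

```typescript
// Assumed per-item assessment state.
interface ItemAssessment {
  teacherConfirmed: boolean;
}

// Feedback can only be released once every item has been teacher-confirmed.
function canReleaseFeedback(items: ItemAssessment[]): boolean {
  return items.length > 0 && items.every((i) => i.teacherConfirmed);
}
```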

When you take a group survey (through the pool system), you write an interpretation for each item—what you think the data means. These interpretations are then assessed by the AI on a 0–3 rubric:

Score | Label
0     | No understanding
1     | Surface-level
2     | Developing
3     | Strong understanding

Each interpretation is scored on three sub-dimensions: data understanding, insight quality, and reasoning quality. Your teacher reviews and may edit the AI's evaluation before releasing it to you.

Survey Pool System

The survey pool system turns individual student items into real, launchable surveys. Here's how it works:

  1. You build and refine your survey items (getting AI and teacher feedback along the way)
  2. You share your project to the class pool—items that scored 2 or higher are eligible
  3. Your teacher organizes students into groups (A, B, C, etc.)
  4. Each group's qualifying items become that group's survey pool
  5. The teacher reviews pools, checks for redundancy, and publishes them
  6. Groups take each other's surveys in a circular chain (Group A takes Group B's survey, B takes C's, etc.)

This means the items you build actually get used in real data collection—your classmates respond to your survey.

Your teacher creates survey sessions tied to a project name (e.g., "DSFP"). Within a session, students are auto-assigned to groups labeled A through J (or more). Each group has 3–6 members.

Groups are linked in a circular chain: Group A responds to Group B's survey, Group B responds to Group C's, and the last group responds to Group A's. This way, everyone both creates and takes a survey.
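The circular assignment can be sketched as a modular shift (assuming groups are ordered by label; surveyTarget is an illustrative name):

```typescript
// Group i responds to the survey of group (i + 1) mod n,
// so the last group wraps around to the first.
function surveyTarget(groups: string[], group: string): string {
  const i = groups.indexOf(group);
  return groups[(i + 1) % groups.length];
}
```

With groups A, B, C: A takes B's survey, B takes C's, and C wraps around to A's.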

When you share, all of your final items with a score of 2 or higher are added to your group's pool. Items scoring 0 or 1, and draft items, are not included.
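The eligibility filter can be sketched as (field names are illustrative, not the platform's schema):

```typescript
// Assumed, simplified item shape.
interface StudentItem {
  status: "draft" | "final";
  score: number; // 0-3 rubric score
}

// Only final items scoring 2 or higher enter the group pool.
function poolEligible(items: StudentItem[]): StudentItem[] {
  return items.filter((i) => i.status === "final" && i.score >= 2);
}
```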

Your teacher can then review the pool, edit item text, reorder items, assign them to sections, and run AI redundancy checks to remove duplicates across group members.

Once your assigned group's survey is published, it appears in your "Surveys Waiting for My Response" sidebar section. The survey is divided into sections:

  • Demographics section — fixed questions set by your teacher (always first)
  • Content sections — the pool items organized by topic

After completing each section, there's an 18-hour cooldown before you can start the next one. This is by design—it spaces out the survey-taking experience so you reflect between sections rather than rushing through everything at once.
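The cooldown check amounts to comparing timestamps, sketched here with assumed names:

```typescript
// 18-hour cooldown between survey sections, in milliseconds.
const COOLDOWN_MS = 18 * 60 * 60 * 1000;

// The next section unlocks once 18 hours have passed since the last completion.
function nextSectionAvailable(lastSectionCompletedAt: number, now: number): boolean {
  return now - lastSectionCompletedAt >= COOLDOWN_MS;
}
```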

For each item, you provide your response and write an interpretation explaining what you think the data means.

Yes. Once your group's survey is published and responses come in, you can view:

  • Completion stats — how many respondents started and finished
  • Topline report — an interactive HTML report with bar charts and summary statistics
  • Raw CSV export — a downloadable spreadsheet with all response data
  • Topline CSV — summary statistics per item

These appear in the "Surveys With My Items" section of your sidebar.

Separate from the pool system, your teacher can also assign any published student project as a survey for the whole class. These appear in your "Surveys Waiting for My Response" sidebar alongside group surveys.

This is used for things like a class-wide demographics survey or when a single student's project is strong enough to deploy to everyone.

Teacher Portal

Log in with a teacher account. In the student dashboard, click your username in the top-right corner and select "Teacher Portal" from the dropdown menu. Or navigate directly to teacher.html.

The teacher portal is only accessible to accounts with the teacher or admin role.

This is the core teacher workflow:

  1. Run AI — click "Run All" or run assessment per-student
  2. Review — expand each student and click "Review" on individual items
  3. Confirm or Override — for each item, either confirm the AI's assessment or save your own score/feedback
  4. Release — once all items are confirmed, release feedback to the student

Students never see raw AI output. They only see feedback after you release it. This gives you full control over what students receive.

Color coding helps track progress: gold border = needs review, green border = confirmed/released, orange = partially reviewed.

Every time you override an AI assessment, your correction is saved as a training example. On future AI runs, up to 8 of these examples are injected into the AI prompt as few-shot demonstrations, teaching it what a good assessment looks like.

Over time, the AI calibrates to your standards. You can view all saved training examples from the "Training Examples" link in the teacher portal sidebar.
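The few-shot mechanism can be sketched as follows (the shapes and prompt formatting here are illustrative assumptions, not the platform's actual prompt):

```typescript
// Assumed shape of a saved teacher override.
interface TrainingExample {
  itemText: string;
  teacherScore: number;
  teacherFeedback: string;
}

// Append up to 8 teacher overrides to the system prompt as worked examples.
function buildPrompt(basePrompt: string, examples: TrainingExample[]): string {
  const fewShot = examples
    .slice(0, 8)
    .map(
      (ex, i) =>
        `Example ${i + 1}:\nItem: ${ex.itemText}\nScore: ${ex.teacherScore}\nFeedback: ${ex.teacherFeedback}`
    )
    .join("\n\n");
  return fewShot ? `${basePrompt}\n\nCalibration examples:\n${fewShot}` : basePrompt;
}
```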

Yes. The AI Prompt Editor (accessible from the teacher portal sidebar) lets you view, edit, and reset the AI system prompt. You can adjust rubric emphasis, feedback tone, or add domain-specific guidance. Changes take effect on the next AI run. You can reset to the default prompt at any time.

As a teacher, you:

  1. Create a session tied to a project name (e.g., "DSFP")
  2. Generate groups automatically from students who have shared projects
  3. Review pools — each group's qualifying items. Use AI redundancy check to find duplicates, edit item text, reorder, and assign sections
  4. Lock groups when you're satisfied with the composition
  5. Publish to deploy surveys to students

You can also view a grade impact report showing each student's contribution to their group pool, and a completion status grid tracking who has finished which sections.

Remote In lets you see any student's dashboard exactly as they see it—read-only. Click "Remote In" in the top bar, search for a student, and their dashboard opens in a new tab with a blue "Viewing as [student] — Read Only" banner.

This is useful for troubleshooting ("I can't see my feedback") or understanding a student's perspective without needing their login.

Several export options exist:

  • Group responses CSV — raw response data per group with variable names as column headers
  • Group interpretations — student interpretations alongside their responses
  • Teacher survey raw CSV — respondent-by-question matrix
  • Topline HTML — interactive report with bar charts, means, medians (printable/PDF-friendly)
  • Topline CSV — summary statistics: variable, question text, option, count, percent, n
  • Interpretation assessment CSV — scores and feedback for student interpretations

All exports preserve variable names from the original items.
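The per-option rows of the Topline CSV can be sketched as below (a simplified version that handles only categorical responses; function and parameter names are illustrative):

```typescript
// Produce one CSV row per observed option:
// variable, question text, option, count, percent, n.
function toplineRows(variable: string, question: string, responses: string[]): string[] {
  const n = responses.length;
  const counts = new Map<string, number>();
  for (const r of responses) counts.set(r, (counts.get(r) ?? 0) + 1);
  return [...counts.entries()].map(([option, count]) =>
    [variable, question, option, count, ((100 * count) / n).toFixed(1), n].join(",")
  );
}
```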

Demographic items are teacher-authored questions that appear as a fixed first section on every group survey. They include things like grade level, gender, and period—standard background variables needed for data analysis.

You can manage demographic items from the teacher portal: create, edit, reorder, or seed a starter set. Students see these as a read-only "Course Demographics Survey" section.