
Assignments

The Capstone Series assessment focuses on evaluating the six facets that, in our opinion, define a software engineer. For better or worse, we’re also bound by the ABET (Accreditation Board for Engineering and Technology), WIC (Writing Intensive Curriculum), and Beyond OSU (program-level) learning outcomes.

In professional life, writing will help students become better engineers. Because it is hard for us to assess individual contributions to team-written documentation, we designed a few individual writing assignments as well as team assignments with identifiable individual contributions.

Here are all the assignments we believe are relevant for the team to successfully complete the project. The handbook contains many activities that can help determine the contents of these assignments. If the team is unsure where to begin, browse activities.

All assignments below apply to every term. We encourage an iterative approach: start simple, get feedback, and improve over time.

An evidence-based update: planned vs. done, proof of working software (URL + commit + CI (Continuous Integration) status), top risks and quality signals, next goals, a brief team reflection, and linked individual contributions.

What is a sprint? A sprint is a fixed-length iteration where the team plans, builds, and reviews a small, shippable slice of value. Sprints are two weeks long. The team will submit a sprint report at the end of each sprint, for a total of 14 reports over the academic year.

Define how the team will work (roles, decisions, cadence/tools), a restorative conflict path, Definition of Done (tests, reviews, static checks, docs, CI), and inclusion norms; publish CONTRIBUTING.md and show evidence of individual contributions.

Align on scope and “what good looks like.” The team creates an ID’d, atomic, testable set of requirements with non-functional requirements (NFRs), acceptance criteria (AC) for priority items, an explicit prioritization method, and traceability to risks/constraints.

Interview the project stakeholder(s) and distill the conversation into a one-page internal memo: problem statement, initial constraints and acceptance criteria, IP (Intellectual Property)/licensing/data-use notes, and expectations.

Investigate a narrow, high-leverage slice (a technology, standard, user group, or competitor) and synthesize credible sources into key findings and actionable implications for the team.

Write an Architecture Decision Record (ADR) that documents a real design decision (context, options including “do nothing,” decision, consequences) with embedded evidence, and perform a substantive code review covering correctness, design, tests, and security, with proof of follow-through.

Share a coherent system view (context + diagrams), concrete interfaces/contracts (examples, errors, authentication/authorization, versioning), and a ranked risk register; provide links to evidence artifacts and ADRs.

At the mid and end of every term, the project partner or mentor will receive a survey asking them to grade the team. See the details and criteria here.

Project partner evaluations account for 25% of each student’s total grade each term. The evaluation approach varies by term:

  • Fall & Winter Terms: Progress-focused assessment using adapted rubrics that evaluate development milestones, learning growth, and team collaboration.
  • Spring Term: Final outcome-focused assessment using complete rubrics that evaluate project deliverables and final results.

The midterm survey is a lightweight “pulse” check to ensure that the team is on track. The end-of-term survey is more comprehensive and will significantly impact each student’s grade.

The project partner will grade the team as a whole, with the option to add notes about individuals as needed. The team must ensure the project partner is up to date on progress and aligned on expectations. Review all artifacts for the term (including the demo) with them.

The facet distribution for project partner evaluations can be found on the grade distribution page.

At the mid and end of each term, students evaluate their teammates on contributions to the team’s success. This feedback is important for fostering a productive and collaborative work environment. Students receive a survey from the instruction team asking them to grade each team member.

Peer evaluations account for 25% of each student’s total grade each term, making them a significant component alongside project partner evaluations (25%) and assignment-specific rubrics (50%).

The midterm survey is worth less than the end-of-term survey, giving students time to adjust based on feedback and team dynamics. The two surveys are otherwise similar in structure and content.

Create a 10-12 minute recorded update for an external audience that states the problem and your approach, shows a working path or credible walkthrough, credits third-party assets, and notes known limits; every student speaks.

Tell the story of the term through 2-3 themes covering top successes and pain points, support them with 1-3 crisp artifacts, and commit to three actions with owners, timeframes, and recognition cues; include an individual page per student (wins/gaps, commitments, and one risk with its mitigation).