Project Partner Evaluation

This page covers project partner evaluations, which account for 25% of each student’s total grade. Project partners assess their team(s) once or twice each term:

  1. A short Midterm Evaluation (0-5% of the total grade), focusing on responsiveness, professionalism, and delivery quality.
  2. The Final Evaluation (20-25% of the total grade), using the six facets described below to assess overall project progress and outcomes.

Project partner assessments use different approaches depending on the term. In Fall and Winter, the evaluation is progress-focused, using adapted criteria that assess development milestones and learning growth. In Spring, the evaluation shifts to an outcome-focused approach that assesses final deliverables and project outcomes. Each facet carries a different weight depending on the term; see the grade distribution.

Students should discuss expectations with their project partner or mentor at the beginning of each term and review progress regularly.

The evaluation will be collected through a survey sent to each project partner. The end-of-term survey will include the six facets below, each with its own rubric.

The purpose of the Reflection facet is to evaluate your ability to critically analyze learnings and project experiences.

| Points | Criteria |
| --- | --- |
| 100 | Demonstrates clear learning progression and applies insights to improve team processes. Regular reflection evident. |
| 90 | Shows good learning progression with some application of insights. Regular reflection with minor gaps. |
| 80 | Adequate learning progression. Basic reflection on experiences with some insights. |
| 70 | Limited learning progression. Minimal reflection with general observations. |
| 50 | No clear learning progression. Lacks meaningful reflection. |

The purpose of the Requirements and Specifications facet is to evaluate your ability to gather, document, and prioritize project requirements.

| Points | Criteria |
| --- | --- |
| 100 | Requirements gathering shows clear progress and stakeholder engagement. Well-documented evolution of understanding. |
| 90 | Good progress in requirements gathering. Most stakeholder needs identified with documentation improving. |
| 80 | Adequate progress in requirements. Basic stakeholder engagement with some documentation. |
| 70 | Limited progress in requirements gathering. Minimal stakeholder engagement. |
| 50 | No clear progress in requirements. Stakeholder needs remain unclear. |

“Stakeholders” refers to the project partners, who can be faculty members, students, industry partners, or other external collaborators. Make sure that the goals of the project are stated, clear, and measurable; we cannot evaluate requirements otherwise.

The Design and Implementation rubric is very broad, and we don’t expect teams to reach 100 points before Winter or Spring. The Fall term does not put a lot of weight on this rubric, but your goal is still to get the implementation and deployment under way so you can hit the ground running in Winter.

| Points | Criteria |
| --- | --- |
| 100 | Significant progress in design and implementation. Clear development milestones achieved with good documentation. |
| 90 | Good progress with most milestones met. Implementation advancing steadily with adequate documentation. |
| 80 | Adequate progress with some milestones met. Basic implementation progress. |
| 70 | Limited progress. Few milestones achieved with minimal implementation. |
| 50 | No clear progress. Implementation has not advanced meaningfully. |

We use the term “deployment” very loosely here. If you have a research project, it means that your code or artifacts are well-documented (think reusable/reproducible). If you have a FOSS project, it means your patches include all the necessary changes (incl. to documentation) and work within the existing codebase with no regressions.

The Verification and Validation facet looks at the outcome of your Capstone project, not just the output.

In Spring, the grading scale is different depending on the project category. If your project does not fit any category or if your project partner has other expectations, they will be able to provide a custom grade on the scale [50,100]. Please set expectations with your project partner at the beginning of each term.

| Points | Criteria |
| --- | --- |
| 100 | Clear testing and validation strategy in place. Regular feedback collection and incorporation. |
| 90 | Good validation approach developing. Some testing and feedback collection evident. |
| 80 | Basic validation planning. Limited testing or feedback collection. |
| 70 | Minimal validation planning. Little evidence of testing approach. |
| 50 | No clear validation strategy. No testing or feedback collection. |

This category involves contributing patches to an existing Free and Open-Source Software project. Examples include writing patches for the Rust compiler, the Xen hypervisor, the Habitica todo-list game, or the OSU Open Source Lab repositories.

| Points | Criteria |
| --- | --- |
| 100 | Patches accepted, positive mentions in press/release. |
| 90 | Patches accepted. |
| 85 | Patches accepted but then reverted due to a bug/issue. |
| 80 | Patches submitted and reviewed. |
| 75 | Patches submitted. |
| 70 | Patch appears to work on student computers. |
| 50 | Patch is vaporware. |

Collaborate with a professor on a research topic, aiming to publish a small paper with your findings.

| Points | Criteria |
| --- | --- |
| 100 | Novel results, a novel context, a published paper, etc. |
| 90 | Prototype works on a wide range of reasonable inputs and some challenging ones. |
| 80 | Prototype works on reasonable inputs. |
| 70 | Prototype works on trivial inputs. |
| 50 | Prototype is vaporware. |

Develop software for a specific external project partner.

| Points | Criteria |
| --- | --- |
| 100 | System is in production and is public-facing or part of critical operations. |
| 90 | Project partner is actively working to integrate the system into production, and the system is public-facing or part of critical operations. |
| 80 | Project partner feedback on an earlier prototype; concerns have been addressed in a newer version. |
| 70 | Project partner feedback on an earlier prototype. |
| 50 | System diverges significantly from project partner requirements; project partner does not intend to use the system; team has stopped speaking to project partner. |

This category involves creating a new product or game, which may or may not become a viable business.

| Points | Criteria |
| --- | --- |
| 100 | Hundreds of light users, or tens of heavy users, or a positive mention in mainstream/industry press, or winning a reputable startup/gaming pitch competition. |
| 90 | Two dozen users you don’t know, or a rigorous user study. |
| 80 | A dozen users you don’t know, or a user study. |
| 70 | Friends have tried your software. |
| 50 | No users, nor user testing. |

Capstone is fundamentally a team effort.

| Points | Criteria |
| --- | --- |
| 100 | Excels in shared leadership, seamless collaboration, proactive conflict resolution, and innovative contributions exceeding goals, adapting effectively to participation variances. |
| 90 | Achieves goals via robust collaboration, balanced leadership, and constructive resolution, driven by active contributions. |
| 80 | Meets core goals with adequate collaboration and basic leadership/conflict management, sustaining momentum despite uneven participation. |
| 70 | Achieves partial goals with some cohesion, but limited by unresolved issues or workload inefficiencies. |
| 50 | Fails goals due to poor collaboration, absent leadership, or unmanaged conflicts overwhelming adaptation. |

In addition to the general rubric for teamwork, we’re also using the Comprehensive Assessment of Team Member Effectiveness (CATME). The CATME Five Teamwork Dimensions will only be used for intra-group peer reviews.

The Communication facet evaluates students’ ability to effectively convey ideas, progress, and outcomes through various means such as email updates, presentations, documentation, and discussions.

| Points | Criteria |
| --- | --- |
| 100 | Clear, concise, and effective communication. Excellent presentations and well-structured documentation. |
| 90 | Good communication skills. Effective presentations and documentation with minor issues. |
| 80 | Adequate communication. Presentations and documentation are clear but may lack polish. |
| 70 | Basic communication skills. Presentations and documentation are understandable but with notable issues. |
| 50 | Poor communication. Presentations and documentation are unclear or ineffective. |

Project partner evaluations are primarily team-level assessments. However, if a project partner explicitly flags that a specific student did not contribute meaningfully to the project, the instructor may apply individual grade adjustments to that student’s project partner score.

When a project partner flags an individual student, the instructor will cross-reference that signal against the following:

| Source | What it shows |
| --- | --- |
| Peer evaluation scores and comments | Teammates’ direct assessment of contribution, work quality, and attitude |
| Individual work log sections in progress reports | Self-reported and team-reported sprint-by-sprint contributions with links |
| Repository activity | Commits, pull requests, code reviews, and issue ownership |
| Assignment attributions | Authorship of individual sections in submitted documents |

A flag that is corroborated by two or more of these sources is treated as strong evidence of non-contribution. A flag that is not corroborated may still result in a minor adjustment but will be given more benefit of the doubt, since project partners do not always have visibility into all aspects of individual work.

Adjustments are applied at the facet level, based on what the corroborated evidence shows about the student’s individual contribution to each area. All six facets are susceptible to individual downgrading; none is protected by default.

For each facet, the instructor asks: does the evidence show that this student meaningfully contributed to this area? If the answer is no, the facet is graded at a level that reflects the student’s actual contribution rather than the team’s. A student with no traceable contribution across all areas may receive 50% on every facet. A student who communicated but did not produce output may receive a lower score on Teamwork and Design/Implementation while retaining the team’s score on Communication. The adjustment is always proportionate to what the evidence shows, facet by facet.

The best protection is visible, traceable contribution throughout the term:

  • A student should commit code to their own branches and open or close their own issues; work should not be committed by a teammate on their behalf.
  • Each student should author their own sections in progress reports and other documents; having a teammate write their contributions for them undermines the traceability that protects them.
  • All contributions should be measurable and attributable to the student. If a student is doing work that is not easily traceable (e.g., design discussions, testing, documentation), they should make sure to document it in a way that can be corroborated (e.g., meeting notes, issue comments, pull request reviews).
  • If a student is blocked, they should communicate early with their team and their project partner. A student who raises a blocker and adapts is treated very differently from one who goes quiet.
  • Peer evaluations reflect teammates’ real-time view of a student’s work. A student who is having a difficult term should address it in their individual retrospective section and discuss it with the instructor proactively; waiting until grades are posted is too late.