Project Partner Evaluation

This page contains rubrics used for project partner evaluations, which account for 25% of your total grade. Your project partner will assess your team using these rubrics at the end of each term.

Project partner assessments use different approaches depending on the term. In Fall and Winter, the evaluation is progress-focused, using adapted criteria that assess development milestones and learning growth. In Spring, the evaluation shifts to an outcome-focused approach that assesses final deliverables and project outcomes. Each facet carries a different weight depending on the term; see the grade distribution.

Students should discuss expectations with their project partner or mentor at the beginning of each term and review progress regularly.

The evaluation will be collected through a survey sent to each project partner at the end of each term. The survey will include the rubrics below, which you can also use to self-assess your progress and development.

Reflection

The purpose of the Reflection facet is to evaluate your ability to critically analyze learnings and project experiences.

Points | Criteria
100 | Demonstrates clear learning progression and applies insights to improve team processes. Regular reflection evident.
90 | Shows good learning progression with some application of insights. Regular reflection with minor gaps.
80 | Adequate learning progression. Basic reflection on experiences with some insights.
70 | Limited learning progression. Minimal reflection with general observations.
50 | No clear learning progression. Lacks meaningful reflection.

Requirements and Specifications

The purpose of the Requirements and Specifications facet is to evaluate your ability to gather, document, and prioritize project requirements.

Points | Criteria
100 | Requirements gathering shows clear progress and stakeholder engagement. Well-documented evolution of understanding.
90 | Good progress in requirements gathering. Most stakeholder needs identified with documentation improving.
80 | Adequate progress in requirements. Basic stakeholder engagement with some documentation.
70 | Limited progress in requirements gathering. Minimal stakeholder engagement.
50 | No clear progress in requirements. Stakeholder needs remain unclear.

“Stakeholders” refers to the project partners, who may be faculty members, students, industry partners, or others. Make sure that the goals of the project are stated, clear, and measurable; we can’t evaluate requirements otherwise.

Design, Implementation, and Deployment

This rubric is very broad, and we don’t expect teams to reach a hundred points before Winter or Spring. Fall term does not put much weight on this rubric, but your goal is still to get implementation and deployment under way so you can hit the ground running in Winter.

Points | Criteria
100 | Significant progress in design and implementation. Clear development milestones achieved with good documentation.
90 | Good progress with most milestones met. Implementation advancing steadily with adequate documentation.
80 | Adequate progress with some milestones met. Basic implementation progress.
70 | Limited progress. Few milestones achieved with minimal implementation.
50 | No clear progress. Implementation has not advanced meaningfully.

We use the term “deployment” very loosely here. If you have a research project, it means that your code or artifacts are well-documented (think reusable/reproducible). If you have a FOSS project, it means your patches include all the necessary changes (including to documentation) and work within the existing codebase with no regressions.

Verification and Validation

The Verification and Validation facet looks at the outcome of your Capstone project, not just the output.

In Spring, the grading scale is different depending on the project category. If your project does not fit any category or if your project partner has other expectations, they will be able to provide a custom grade on the scale [50,100]. Please set expectations with your project partner at the beginning of each term.

Fall & Winter Terms

Points | Criteria
100 | Clear testing and validation strategy in place. Regular feedback collection and incorporation.
90 | Good validation approach developing. Some testing and feedback collection evident.
80 | Basic validation planning. Limited testing or feedback collection.
70 | Minimal validation planning. Little evidence of testing approach.
50 | No clear validation strategy. No testing or feedback collection.

FOSS

This category involves contributing patches to an existing Free and Open-Source Software project. Examples include writing patches for the Rust compiler, the Xen hypervisor, the Habitica todo-list game, or the OSU Open Source Lab repositories.

Points | Criteria
100 | Patches accepted, positive mentions in press/release.
90 | Patches accepted.
85 | Patches accepted but then reverted due to bug/issue.
80 | Patches submitted and reviewed.
75 | Patches submitted.
70 | Patch appears to work on student computers.
50 | Patch is vaporware.

Research

Collaborate with a professor on a research topic, aiming to publish a small paper with your findings.

Points | Criteria
100 | Novel results or context, paper published, etc.
90 | Prototype works on a wide range of reasonable inputs and some challenging ones.
80 | Prototype works on reasonable inputs.
70 | Prototype works on trivial inputs.
50 | Prototype is vaporware.

Consultancy

Develop software for a specific external project partner.

Points | Criteria
100 | System is in production and is public-facing or part of critical operations.
90 | Project partner is actively working to integrate the system into production, and the system is public-facing or part of critical operations.
80 | Project partner gave feedback on an earlier prototype; concerns have been addressed in a newer version.
70 | Project partner gave feedback on an earlier prototype.
50 | System diverges significantly from project partner requirements; project partner does not intend to use the system; team has stopped speaking to project partner.

New Product or Game

This category involves creating a new product or game, which may or may not become a viable business.

Points | Criteria
100 | Hundreds of light users, or tens of heavy users, or positive mention in mainstream/industry press, or winning a reputable startup/gaming pitch competition.
90 | Two dozen users you don’t know, or a rigorous user study.
80 | A dozen users you don’t know, or a user study.
70 | Friends have tried your software.
50 | No users, nor user testing.

Teamwork

Capstone is fundamentally a team effort. Teamwork will be judged at both the team and individual levels.

Points | Criteria
100 | Actively contributes to team success. Demonstrates leadership, collaboration, and conflict resolution skills.
90 | Contributes effectively to team goals. Shows good collaboration and some leadership or conflict resolution.
80 | Participates in team activities and contributes to goals. Basic collaboration skills.
70 | Inconsistent participation. Limited contribution to team goals and collaboration.
50 | Minimal participation or negative impact on team dynamics.

In addition to the general rubric for teamwork, we’re also using the Comprehensive Assessment of Team Member Effectiveness (CATME). The CATME Five Teamwork Dimensions will only be used for intra-group peer reviews.

Communication

The Communication facet evaluates students’ ability to effectively convey ideas, progress, and outcomes through various means such as email updates, presentations, documentation, and discussions.

Points | Criteria
100 | Clear, concise, and effective communication. Excellent presentations and well-structured documentation.
90 | Good communication skills. Effective presentations and documentation with minor issues.
80 | Adequate communication. Presentations and documentation are clear but may lack polish.
70 | Basic communication skills. Presentations and documentation are understandable but with notable issues.
50 | Poor communication. Presentations and documentation are unclear or ineffective.