Generative AI
Generative AI tools are now part of professional engineering work. Engineers use AI to brainstorm ideas, explore design options, write and review code, generate documentation, and accelerate analysis tasks.
But there is a catch: you are still responsible for the correctness, safety, legality, and professionalism of anything the AI helps you create. An AI tool can write code faster than you can type it. That does not mean the code is correct, secure, or appropriate for your project. Treat AI output the same way you would treat a suggestion from a junior colleague: consider it, verify it, and take ownership of the final result.
What AI Can and Cannot Do
Understanding what AI tools are good at and where they fall short helps you use them effectively rather than fighting their limitations.
What AI helps with
- Brainstorming ideas and exploring alternatives
- Drafting code, tests, documentation, and plans
- Explaining unfamiliar topics, APIs, or code snippets
- Generating sample data, interaction flows, or boilerplate
- Refactoring, debugging, and code review
- Accelerating “first drafts” of writing or implementation
What AI cannot do well
- Exercise engineering judgment
- Apply your project’s specific constraints (your team’s architecture, your project partner’s requirements, your deployment environment)
- Produce secure, efficient, or trustworthy code without oversight
- Guarantee correctness: AI frequently hallucinates information, including fabricated libraries, nonexistent APIs, and incorrect citations
- Understand the nuances of your Capstone project’s unique context
- Make ethical decisions
Risks to watch for
- Incorrect or misleading information presented confidently
- Insecure or poorly optimized code
- Over-reliance leading to lack of understanding: if you cannot explain what the code does, you should not commit it
- Privacy and intellectual property issues (be careful about what you paste into AI tools, especially if your project involves NDA-protected or sensitive data)
AI Coding Tools
The landscape changes rapidly. Rather than an exhaustive list, here are the categories of tools and some current options in each.
Chat-Based Assistants
General-purpose AI models you interact with through conversation. Useful for brainstorming, explaining concepts, drafting documentation, debugging, and exploring design alternatives.
- ChatGPT (OpenAI): strong general reasoning and code generation.
- Claude (Anthropic): strong at long-context analysis, writing, and careful reasoning.
- Gemini (Google): tightly integrated with Google’s ecosystem.
AI Code Editors and Agents
Tools that integrate directly into your editor or terminal and can read, write, and modify your codebase. These range from autocomplete assistants to fully agentic tools that can execute multi-step tasks.
- GitHub Copilot: code completion, chat, and agent mode in VS Code and JetBrains. Free for students.
- Cursor: AI-native code editor built on VS Code with strong multi-file editing.
- Claude Code (Anthropic) and Codex (OpenAI): terminal-based agentic coding tools.
The distinction between “autocomplete” and “agent” matters. Autocomplete tools suggest the next few lines as you type. Agentic tools can plan multi-step changes, read multiple files, run commands, and modify your codebase autonomously. Agentic tools are more powerful but require more oversight, especially on a team where multiple people are using them simultaneously.
Image and Design Tools
- Midjourney, Stable Diffusion, DALL-E: image generation for mockups, assets, or diagrams. Check licenses before using generated assets in your project.
AI Search Tools
- Perplexity, ChatGPT Search: research and summarization with source citations. Always verify the facts and check that cited sources actually exist.
Setting Up AI Tools for Your Project
The difference between a helpful AI assistant and a frustrating one is almost entirely about context. An AI tool with no knowledge of your project will produce generic suggestions. The same tool, given your project’s architecture, conventions, and constraints, will produce suggestions that actually fit.
Project-Level Instructions
Most AI coding tools support some form of project-level configuration that persists across sessions:
- Claude Code: `CLAUDE.md` file in your repository root.
- Cursor: `.cursor/rules/` directory or `.cursorrules` file.
- GitHub Copilot: `.github/copilot-instructions.md` file.
These files are checked into version control, which means the entire team shares the same AI context. This is one of the most effective ways to keep AI-generated code consistent across team members.
What to include in project-level instructions:
- A brief project overview (what it does, who it is for).
- The technology stack (languages, frameworks, key libraries).
- Coding conventions (naming, file structure, error handling patterns).
- Architectural constraints (“we use a monolith, not microservices”; “all data access goes through the repository layer”).
- Testing expectations (what to test, which framework, where tests live).
- Anything the AI should not do (e.g., “do not add new dependencies without discussion”; “do not refactor code outside the scope of the current task”).
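As a concrete sketch, a minimal `CLAUDE.md` covering the points above might look like the following. The project, stack, and rules here are invented for illustration; yours will differ.

```markdown
# Project: Campus Event Finder (example project, invented for illustration)

A web app that helps students discover campus events, built for our Capstone
project partner, the Office of Student Life.

## Stack
- TypeScript, React, Node.js (Express), PostgreSQL

## Conventions
- All data access goes through the repository layer in `src/repositories/`.
- camelCase for variables and functions, PascalCase for React components.
- API errors are returned as `{ error: { code, message } }` objects.

## Testing
- Unit tests live next to the code they test (`*.test.ts`), run with Vitest.
- Every new endpoint needs at least one happy-path and one error-path test.

## Do not
- Add new dependencies without discussing with the team first.
- Refactor code outside the scope of the current task.
```

A short, specific file like this beats a long generic one: the AI reads it on every task, so every line should change how it behaves.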
MCP Servers
The Model Context Protocol (MCP) allows AI tools to connect to external data sources and services. Some useful MCP servers:
- Context7: provides up-to-date library and framework documentation to your AI tool, reducing hallucinated API calls.
- Playwright MCP: allows your AI tool to interact with a browser for testing and debugging web applications.
- Database MCP servers: let your AI tool query your database schema directly rather than guessing at table structures.
Many other MCP servers exist and new ones appear frequently. Explore what is available for your stack and tell your AI tool how and when to use them.
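As one illustration, Claude Code can read project-scoped MCP server definitions from a `.mcp.json` file in the repository root, so the whole team shares the same servers. The exact package names and arguments below are assumptions; check each server's own documentation for its current install command.

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    },
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

Each entry tells the AI tool how to launch the server as a subprocess; the tool then discovers what capabilities the server offers.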
Skills
Skills (sometimes called slash commands, custom commands, or agent skills) are reusable prompts or workflows that you define once and invoke repeatedly. They let you encode team-specific processes into your AI tool so that common tasks are performed consistently.
For example, a team might define skills for:
- Code review: a skill that checks a pull request against the team’s coding standards and architectural constraints.
- Commit messages: a skill that generates commit messages following the team’s convention.
- Test generation: a skill that writes tests matching the project’s testing patterns and framework.
- Documentation: a skill that generates or updates documentation in the team’s preferred format.
Skills are particularly valuable on a team because they standardize how the AI performs recurring tasks. Without them, each team member prompts the AI differently and gets different results. With a shared set of skills checked into the repository, everyone gets the same behavior.
How skills are defined depends on the tool:
- Claude Code: skills are Markdown files in `.claude/skills/<skill-name>/SKILL.md`, each in its own directory. They can be invoked as slash commands.
- Cursor: custom commands or notepad entries that encode reusable prompts.
- GitHub Copilot: reusable prompts configured through the Copilot instructions file or VS Code tasks.
Like project-level instructions, skills should be version-controlled so the whole team benefits from improvements.
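For instance, the commit-message skill mentioned above might live at `.claude/skills/commit-message/SKILL.md`. The frontmatter fields and the convention shown are illustrative; check your tool's documentation for the exact format it expects.

```markdown
---
name: commit-message
description: Generate a commit message following the team's convention.
---

Write a commit message for the currently staged changes.

Rules:
- Use Conventional Commits: `type(scope): summary`
  (e.g. `fix(auth): reject expired tokens`).
- Keep the summary under 72 characters, imperative mood.
- In the body, explain *why* the change was made, not just what changed.
- Reference the issue number if one appears in the branch name.
```

Once defined, any team member can invoke the skill (e.g. as a slash command) and get a message in the same format.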
Memory and Context
Some tools support persistent memory across sessions. Use this to store project-specific knowledge that the AI should retain: architectural decisions, common patterns, known gotchas, or recurring instructions you find yourself repeating.
Using AI Effectively
The difference between productive AI usage and frustrating AI usage usually comes down to a few habits.
Start with clear intent
Before asking an AI tool to generate code, know what you want. “Build the authentication system” is a poor prompt. “Add a login endpoint that accepts email and password, validates against the users table, and returns a JWT” gives the AI enough context to produce something useful. The clearer your intent, the less time you spend correcting the output.
Verify, do not trust
AI-generated code can look correct while being subtly wrong: off-by-one errors, missing edge cases, insecure defaults, hallucinated library methods. Read what the AI produces. Run it. Test the edge cases. If you cannot explain why the code works, you do not understand it well enough to commit it.
Use AI to learn, not just to produce
When the AI generates code you do not understand, ask it to explain. Use it as a learning tool, not just a production tool. The goal of Capstone is to grow as an engineer. Shipping code you cannot maintain or debug is not growth; it is debt.
Know when to stop prompting
If you have gone back and forth with an AI tool more than a few times on the same problem and the output is still wrong, step back. Read the documentation. Look at examples. Ask a teammate. Sometimes the fastest path is not through the AI.
Choosing the Right Tool
Things change rapidly in the AI space. New models are released frequently, each with different strengths. Rather than chasing benchmarks, consider what matters for your workflow:
- Context window: how much of your codebase can the tool see at once? Larger context windows help with multi-file tasks.
- Speed vs. quality: faster models are better for autocomplete; slower, more capable models are better for complex reasoning and multi-step changes.
- Integration: does the tool work with your editor, terminal, and version control? The best tool is the one you actually use.
- Cost: many tools have free tiers or student pricing. GitHub Copilot Pro is free with GitHub Education.
If you want to compare models, public leaderboards and benchmark sites track current results; remember that benchmark scores do not always predict performance on your specific tasks.
AI on a Team
Using AI tools as an individual is straightforward. Using them on a team requires coordination, because each person’s AI operates in its own context and has no awareness of what other team members (or their AI tools) are doing. Without shared norms, the codebase drifts toward inconsistency.
This is covered in detail in the technical design and working agreement guides.
Some Truths About AI Tools
- AI tools make fast developers faster and careless developers more dangerous. The bottleneck is judgment, not typing speed.
- The quality of AI output is proportional to the quality of context you provide. Invest in good project-level instructions.
- AI-generated code is not inherently better or worse than human-written code. It still needs review, testing, and maintenance.
- If you find yourself unable to debug or modify AI-generated code, you have a problem. You should understand everything in your codebase.
- AI tools are evolving rapidly. What is true today may not be true next term. Stay curious and adapt.
- The teams that use AI most effectively are not the ones that generate the most code. They are the ones that generate the right code and catch the wrong code early.