Claude Code, GPT-5, and Practical AI Collaboration in Engineering
It’s been quite some time since AI coding assistants started blending into real-world software development.
Yet, the way each engineer actually uses them varies a lot.
Personally, I’ve built an AI-centric development workflow around Claude Code, combined with tools like MCP (Model Context Protocol) integrations — Slack MCP, Atlassian MCP, Context7 — plus the gh CLI and Devin Wiki.
In this post, I’ll share how I use Claude Code and GPT-5 in my daily engineering work — and the insights I’ve gained while designing a practical AI-driven development environment.
⚙️ Setting Up: Prompt Engineering in Practice
Since I studied prompt engineering early on, I rarely rely on external prompts or OSS tools.
Instead, I design the prompt structure I need and let Claude Code handle everything end-to-end.
I always maintain a clear structure:
- Role – Define the agent’s responsibility
- Goal – Specify the target outcome
- Instructions – Provide concrete steps
- Precaution – Include constraints or warnings
- /output-style – Usually set to Explanatory
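As a concrete sketch, the four-part structure above can be assembled into a single prompt and handed to a non-interactive Claude Code run. `build_prompt` and the example section text are illustrative, not part of Claude Code itself:

```shell
# Assemble the Role / Goal / Instructions / Precaution sections into one
# prompt. build_prompt is a hypothetical helper, not a Claude Code feature.
build_prompt() {
  local role="$1" goal="$2" instructions="$3" precaution="$4"
  printf '## Role\n%s\n\n## Goal\n%s\n\n## Instructions\n%s\n\n## Precaution\n%s\n' \
    "$role" "$goal" "$instructions" "$precaution"
}

# Example: pipe the assembled prompt into a headless (print-mode) session:
# build_prompt "Backend reviewer" "Audit the auth module" \
#   "Read src/auth and summarize risks" "Do not modify code" | claude -p
```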
With Claude Sonnet 4.x, context precision became far more important, so I explicitly include which reference materials or repositories the model should use.
For long sessions, I run /compact regularly to keep the conversation clean and maintain a stable context.
🧭 Explore → Plan → Code → Commit
For larger features or refactoring work, I follow this structured flow:
- Explore (Context Gathering)
- Identify relevant repositories, documents, or architecture references
- Instruct the agent to summarize findings in Markdown
- Sometimes test results in parallel with GPT-5 (or Codex) for comparison
- Plan (Strategy & Design)
- Use parallel sub-agents for concurrent analysis
- Run up to 10 sub-agents, each with a defined Role and Goal
- e.g. feature design, risk analysis, test plan, documentation, etc.
- Code (Implementation)
- Start a fresh session or run /compact before coding
- Define which part to implement → write code → test → review
- For large parts, split work into “implement – test – review” streams
- Commit (PR Creation)
- Let the agent generate the initial PR draft
- Use gh CLI to check out the branch and automate gh pr create
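The Commit step above can be sketched as a small wrapper. The file names (`pr_title.txt`, `pr_body.md`) are an assumption for where the agent writes its draft, not a fixed convention; the `gh pr create` flags are standard:

```shell
# Sketch of the Commit step: the agent drafts the PR title/body to files,
# and gh turns them into a pull request. GIT/GH are overridable so the
# flow can be dry-run with `echo`.
open_pr() {
  local branch="$1" base="${2:-main}"
  "${GIT:-git}" checkout "$branch"
  "${GH:-gh}" pr create --base "$base" \
    --title "$(cat pr_title.txt)" --body-file pr_body.md
}

# Example (hypothetical branch name):
# open_pr feature/retry-logic main
```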
For smaller fixes or quick tasks, I delegate directly to Devin, since the goal is already well-defined.
🪲 Debugging & Error Fixing
- Use Claude Code’s IDE integration to work seamlessly with VSCode
- Run server processes in the background and feed logs via stdin/out
- Let the agent identify root causes and affected code sections
- When I want to learn during debugging, I switch /output-style to study, turning it into a form of AI pair-learning.
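The "background server + logs" setup can be sketched with a small helper. `run_bg` is hypothetical, and the `npm run dev` / `claude -p` usage in the comments is illustrative:

```shell
# Minimal sketch: run a long-lived process in the background and keep its
# logs in a file the agent can read. run_bg is a hypothetical helper.
run_bg() {
  local log="$1"; shift
  "$@" >"$log" 2>&1 &   # redirect stdout/stderr so nothing is lost
  BG_PID=$!             # remember the PID so the server can be stopped later
}

# Example usage (command and prompt are illustrative):
# run_bg server.log npm run dev
# tail -n 200 server.log | claude -p "Find the root cause of these errors"
# kill "$BG_PID"
```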
🏗️ Design & PoC Development
- For proof-of-concepts or architectural design, I mainly use Context7, Devin Wiki, and the Sequential Thinking MCP within our internal setup.
- Before Claude Code had Web Search, I used Gemini for context expansion — but since the feature was added natively, that extra step is gone.
- I also leverage parallel sub-agents heavily at this stage, cloning multiple repositories (2–3 at once) to analyze architectural references and aggregating the results back into Devin Wiki or Context7 for documentation.
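The parallel-clone step can be sketched as a generic helper that fans a command out over several items and waits for every job. `parallel_each` and the repository URLs are illustrative:

```shell
# Run "$cmd item" for every item concurrently, then wait for all jobs.
# parallel_each is a hypothetical helper, not an existing tool.
parallel_each() {
  local cmd="$1"; shift
  for item in "$@"; do
    $cmd "$item" &      # word-splitting on $cmd is intentional here
  done
  wait                  # block until every background job finishes
}

# Example: shallow-clone 2-3 reference repos at once (URLs are placeholders):
# parallel_each "git clone --depth 1" \
#   https://github.com/org/ref-arch-a https://github.com/org/ref-arch-b
```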
💼 Everyday Automation
AI’s role goes far beyond coding.
I rely on Claude Code for daily communication, documentation, and process automation.
- Fully integrated Slack MCP + Atlassian MCP environment
- When I’m mentioned in long threads, Slack MCP auto-translates and summarizes in Korean
- Automate Confluence documentation, guidelines, and announcements
- Use DeepResearch sessions to identify internal pain points across teams
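For reference, this kind of setup can be wired up with Claude Code's `claude mcp add` command. This is a configuration sketch: the server package names below are placeholders, not the actual internal servers I use:

```shell
# Register MCP servers with Claude Code (configuration sketch).
# Package names are placeholders — substitute your real server commands.
claude mcp add slack -- npx -y your-slack-mcp-server
claude mcp add atlassian -- npx -y your-atlassian-mcp-server
claude mcp add context7 -- npx -y @upstash/context7-mcp
```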
Recently, I started experimenting with Coderabbit to streamline code review.
Here’s a typical review prompt:
```
Review the changes in PR #{pr_number}.
Let it run as long as it needs (run it in the background) and fix any issues.

# Instructions
- Checkout PR using the gh cli
- Verify the base branch of PR
- Proceed with the review using `coderabbit --prompt-only --base #{base_branch}`
```
Claude Code summarizes the review output, and I consolidate the final comments before posting them back to the PR.
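The same flow can be sketched as a shell function. The `gh` subcommands (`pr checkout`, `pr view --json baseRefName`) are standard; `review_pr` itself and the overridable `GH`/`CODERABBIT` variables are just a convenience for dry-running the flow:

```shell
# Check out the PR, look up its base branch, then run CodeRabbit against it.
review_pr() {
  local pr="$1"
  "${GH:-gh}" pr checkout "$pr"          # check out the PR branch locally
  local base
  base=$("${GH:-gh}" pr view "$pr" --json baseRefName -q .baseRefName)
  "${CODERABBIT:-coderabbit}" --prompt-only --base "$base"
}

# review_pr 1234   # then let Claude Code summarize the review output
```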
🧠 Personal Insights
- The parallel sub-agent approach boosts productivity dramatically, but system resource limits (I’ve even hit kernel panics) cap it at around 10 agents.
- Running /compact regularly is essential for long-session management.
- Study-mode debugging has evolved into genuine AI-assisted learning.
- GPT-5 vs Claude Sonnet 4.x:
GPT-5 excels in research depth, while Claude shines in contextual continuity.
✍️ Wrapping Up
AI is no longer just a “helper” — it’s becoming a true member of the development team.
From exploration to planning, implementation, and review, AI now integrates seamlessly into every step of the engineering workflow, allowing developers to think deeper and design broader.