
Code Generators

Gemini Code Assist (GCA) / Gemini CLI

GitHub Copilot

Commands

  • @workspace
  • /doc
  • /explain
  • /fix
  • /generate
  • /optimize
  • /tests
  • Start chatting with Copilot: Opt + Cmd + I

Help

You can ask me general programming questions, or chat with the following participants which have specialized expertise and can perform actions:

  • @workspace - Ask about your workspace
    • /explain - Explain how the code in your active editor works
    • /tests - Generate unit tests for the selected code
    • /fix - Propose a fix for the problems in the selected code
    • /new - Scaffold code for a new file or project in a workspace
    • /newNotebook - Create a new Jupyter Notebook
    • /fixTestFailure - Propose a fix for the failing test
    • /setupTests - Set up tests in your project (Experimental)
  • @vscode - Ask questions about VS Code
    • /search - Generate query parameters for workspace search
    • /startDebugging - Generate launch config and start debugging in VS Code (Experimental)
  • @terminal - Ask how to do something in the terminal
    • /explain - Explain something in the terminal
  • @github - Get answers grounded in web search, code search, and your enterprise's knowledge bases

You can also help me understand your question by using the following variables to give me extra context:

  • #editor - The visible source code in the active editor
  • #selection - The current selection in the active editor
  • #terminalLastCommand - The active terminal's last run command
  • #terminalSelection - The active terminal's selection
  • #file - Choose a file in the workspace
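
Participants, slash commands, and variables can be combined in a single prompt. A few illustrative examples (the questions themselves are placeholders):

```text
@workspace Where is the authentication middleware registered?
@workspace /tests #selection
@workspace /fix #file
@terminal /explain how do I amend the last commit
```

Each prompt is scoped by its participant, so @terminal answers with shell commands while @workspace draws on the open project.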

To have a great conversation, ask me questions as if I was a real programmer:

  • Show me the code you want to talk about by having the files open and selecting the most important lines.
  • Make refinements by asking me follow-up questions, adding clarifications, providing errors, etc.
  • Review my suggested code and tell me about issues or improvements, so I can iterate on it.

You can also ask me questions about your editor selection by starting an inline chat session (⌘I).

Free tier

  • 2,000 intelligent code completions a month: Get context-aware code suggestions that draw context from your GitHub projects and VS Code workspace.
  • 50 Copilot Chat messages a month: Ask Copilot for help understanding code, refactoring something, or debugging an issue.
  • Choose your AI model: Pick between Claude 3.5 Sonnet or OpenAI GPT-4o.
  • Copilot Edits: Tackle changes across multiple files in a single editing session.
  • Support for the Copilot Extensions ecosystem: Access third-party agents designed for tasks such as querying Stack Overflow or searching the web with Perplexity.
  • Choose where you build: Enjoy support in VS Code and across GitHub.

Editors

Comparisons

  • Claude Opus 4.5: First model to break 80% on SWE-bench. 67% cheaper than before. Best for backend development and complex refactoring. 9.5/10.
  • Gemini 3: 1 million token context window, 1501 Elo score. Better at frontend and creative work than Opus. Accessible across Gemini products. 8.5/10.
  • Nano Banana Pro: Google’s image generator that finally works. 4K output, handles 5 people consistently, integrates everywhere. 8/10.
  • Antigravity: Google’s new IDE. Buggy, crashes often, not ready. 6.5/10. Stick with Cursor, Windsurf, Claude Code, or Factory.ai for now.
  • What I actually use: Gemini 3 for frontend, Opus 4.5 for backend, Factory.ai or Claude Code for serious coding work.

Claude Opus 4.5 vs Gemini 3, Nano Banana Pro and Google Antigravity IDE: Nov 2025 Mega Review

Claude Code

Claude Code: Deep Coding at Terminal Velocity \ Anthropic

Commands

brew install --cask claude-code

claude
claude --login
claude --think
# Print Mode: Runs a one-off task (e.g., "fix tests") and exits.
claude -p "query"
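
Because print mode answers once and exits, it composes with ordinary shell pipelines; a minimal sketch (assuming `claude` is installed and authenticated — the prompts and paths are placeholders):

```shell
# Pipe content in via stdin and ask a one-off question
git diff HEAD~1 | claude -p "Review this diff for likely bugs"

# Capture the reply for use in a script
summary=$(claude -p "Summarize the TODO comments in src/")
echo "$summary"
```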
  • /compact: Manually shrinks the conversation history to save tokens while preserving key context.
  • /init: Initializes a CLAUDE.md file in your project to store local coding standards and instructions.
  • /model: Quickly switch between models (e.g., switching to haiku for fast tasks or opus for complex logic).
  • /review: Triggers a code review of your current changes or a specific file.
  • /rewind: (Double-tap Esc or type /rewind) Opens a menu to undo recent code changes or revert the conversation.
  • /mcp: Manages Model Context Protocol servers (connecting Claude to tools like Jira, Slack, or databases).
  • /clear: Wipes the conversation context entirely, starting the session fresh.
  • /powerup: interactive lessons teaching Claude Code features with animated demos
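
An illustrative interactive session chaining these commands (the refactor request and path are placeholders):

```text
$ claude
> /init                  # generate CLAUDE.md with project conventions
> /model                 # switch to a heavier model for complex logic
> Refactor src/auth.ts to use async/await
> /review                # review the resulting changes
> /compact               # shrink history before the next task
```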

Monitoring

  • /cost: Displays a breakdown of input tokens, output tokens, and cache hits/misses, along with a total USD estimate for the current session.
  • /stats: Displays your current token usage and rate limit status.

GitHub - Maciek-roboblog/Claude-Code-Usage-Monitor: Real-time Claude Code usage monitor with predictions and warnings ⭐ 7.2k

# Install directly from PyPI with uv (easiest)
uv tool install claude-monitor

# Run from anywhere
claude-monitor # or cmonitor, ccmonitor for short

GitHub - hoangsonww/Claude-Code-Agent-Monitor: A real-time monitoring dashboard for Claude Code agents, built with Node.js, Express, React, and WebSockets. It tracks sessions, agent activity, tool usage, and subagent orchestration through Claude Code hooks, providing live analytics, a Kanban status board, status notifications, and an interactive web interface. ⭐ 27

Conversations - ~/.claude/projects/

GitHub - d-kimuson/claude-code-viewer: A full-featured web-based Claude Code client that provides complete interactive functionality for managing Claude Code projects ⭐ 1.0k

npm install -g @kimuson/claude-code-viewer
claude-code-viewer --port 3400

GSD

GSD 2.0

The original GSD was a collection of markdown prompts installed into ~/.claude/commands/. It relied entirely on the LLM reading those prompts and doing the right thing. That worked surprisingly well — but it had hard limits:

  • No context control. The LLM accumulated garbage over a long session. Quality degraded.
  • No real automation. "Auto mode" was the LLM calling itself in a loop, burning context on orchestration overhead.
  • No crash recovery. If the session died mid-task, you started over.
  • No observability. No cost tracking, no progress dashboard, no stuck detection.

GSD v2 solves all of these because it's not a prompt framework anymore — it's a TypeScript application that controls the agent session.

npm install -g gsd-pi@latest

# From within the project directory
/gsd migrate

# Or specify a path
/gsd migrate ~/projects/my-old-project

/gsd auto

# Step Mode
/gsd
/gsd next

GitHub - gsd-build/gsd-2: A powerful meta-prompting, context engineering and spec-driven development system that enables agents to work for long periods of time autonomously without losing track of the big picture ⭐ 3.4k

GSD - Get Shit Done | AI Coding Framework

GSD 1.0

GitHub - gsd-build/get-shit-done: A light-weight and powerful meta-prompting, context engineering and spec-driven development system for Claude Code by TÂCHES. ⭐ 43k

# start claude
claude --dangerously-skip-permissions
claude --enable-auto-mode

/gsd:update
/gsd:autonomous
/gsd:new-project
/gsd:discuss-phase 1
/gsd:plan-phase 1
/gsd:execute-phase 1
/gsd:verify-work 1
/gsd:resume-work

/gsd:do
/gsd:quick
/gsd:fast

/gsd:new-milestone

/gsd:stats
/gsd:progress
/gsd:settings

/gsd:settings workflow.skip_discuss false
/gsd:autonomous

# see all workflows of GSD
ls ~/.claude/get-shit-done/workflows/
~/.gsd/defaults.json
{
  "mode": "yolo",
  "granularity": "coarse",
  "model_profile": "balanced",
  "commit_docs": true,
  "parallelization": true,
  "git": {
    "branching_strategy": "none",
    "quick_branch_template": null
  },
  "workflow": {
    "research": false,
    "plan_check": false,
    "verifier": false,
    "auto_advance": true,
    "nyquist_validation": false,
    "ui_phase": false,
    "ui_safety_gate": false,
    "research_before_questions": true,
    "skip_discuss": true
  },
  "hooks": {
    "context_warnings": false
  }
}
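
Besides the `/gsd:settings` command, the defaults file can be edited directly from the shell. A sketch that flips `workflow.skip_discuss` using only `python3`'s standard library — it operates on a local demo copy here; point it at `~/.gsd/defaults.json` for real use:

```shell
# Demo copy with just the relevant key; use ~/.gsd/defaults.json for real
cfg=defaults.json
printf '%s' '{"workflow": {"skip_discuss": true}}' > "$cfg"

# Flip the flag in place and pretty-print the file
python3 - "$cfg" <<'EOF'
import json, sys

path = sys.argv[1]
with open(path) as f:
    cfg = json.load(f)

cfg["workflow"]["skip_discuss"] = False  # re-enable the discuss phase
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
EOF

cat "$cfg"
```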

Agent System Overview - Get Shit Done

Others / Agents / Skills

Codebase

ChatDev 2.0

ChatDev has evolved from a specialized software development multi-agent system into a comprehensive multi-agent orchestration platform.

  • ChatDev 2.0 (DevAll) is a Zero-Code Multi-Agent Platform for "Developing Everything". It empowers users to rapidly build and execute customized multi-agent systems through simple configuration. No coding is required—users can define agents, workflows, and tasks to orchestrate complex scenarios such as data visualization, 3D generation, and deep research.

GitHub - OpenBMB/ChatDev: ChatDev 2.0: Dev All through LLM-powered Multi-Agent Collaboration ⭐ 32k

Others

SaaS