Vibe Coding
=== llama.cpp ===
==== benchmarks ====
GTX (6 GB VRAM), qwen2.5-coder-1.5b:
<pre>
# assign MODEL on its own line; as an env prefix it would not expand in the same command
MODEL=qwen2.5-coder-1.5b-instruct-q4_k_m.gguf
llama-server -m $MODEL \
--host 0.0.0.0 \
--port 9011 \
-ngl 30 -c 16384 --jinja --flash-attn on
</pre>
qwen2.5-coder-1.5b-instruct-q4_k_m.gguf: generation 1,159 tokens in 8.48 s (136.62 tokens/s)
qwen2.5-coder-7b
<pre>
llama-server -m GLM-4.7-Flash-Q4_K_M.gguf \
--host 0.0.0.0 \
--port 9011 \
-ngl 8 -c 16384 --jinja --flash-attn on
</pre>
qwen2.5-coder-7b-instruct-q4_k_m.gguf: generation 1,968 tokens in 330.68 s (5.95 tokens/s)
<pre>
llama-server -m GLM-4.7-Flash-Q4_K_M.gguf \
--host 0.0.0.0 \
--port 9011 \
-ngl 13 -c 16384 --jinja --flash-attn on
</pre>
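A quick way to sanity-check any of the servers above before pointing an agent at them is to call the OpenAI-compatible chat endpoint that llama-server exposes. A minimal sketch, assuming a server from the commands above is listening on port 9011; the non-streaming response also carries a timings object (token counts and tokens/s), which is one way to reproduce the benchmark figures above:
<pre>
# ask for a short completion; the JSON reply includes the generated text and timing stats
curl -s http://localhost:9011/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"messages":[{"role":"user","content":"Write hello world in Python."}],"max_tokens":128}'
</pre>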
=== Goose ===
=== Claude Code ===
==== MCP Server ====
https://docs.anthropic.com/en/docs/claude-code/mcp#add-mcp-servers-from-json-configuration
Add an MCP server from JSON
<pre>
# Basic syntax
claude mcp add-json <name> '<json>'

# Example: Adding a stdio server with JSON configuration
claude mcp add-json weather-api '{"type":"stdio","command":"/path/to/weather-cli","args":["--api-key","abc123"],"env":{"CACHE_DIR":"/tmp"}}'
</pre>
Verify the server was added:
<pre>
claude mcp get weather-api
</pre>
which prints:
<pre>
weather-api:
  Scope: Local (private to you in this project)
  Type: stdio
  Command: /path/to/weather-cli
  Args: --api-key abc123
  Environment:
    CACHE_DIR=/tmp

To remove this server, run: claude mcp remove "weather-api" -s local
</pre>
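The Scope line in that output matters: a local server stays private to you in the current project. A sketch of sharing the same server with collaborators, assuming the -s scope flag shown in the remove hint also applies when adding, and that project scope writes the entry to a checked-in .mcp.json as the Claude Code docs describe:
<pre>
# same server, stored at project scope so the whole team gets it
claude mcp add-json weather-api '{"type":"stdio","command":"/path/to/weather-cli","args":["--api-key","abc123"]}' -s project
</pre>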
https://www.npmjs.com/package/@modelcontextprotocol/server-puppeteer
Environment variable: set PUPPETEER_LAUNCH_OPTIONS to a JSON-encoded string in the MCP configuration's env parameter:
<pre>
{
  "mcpServers": {
    "mcp-puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"],
      "env": {
        "PUPPETEER_LAUNCH_OPTIONS": "{ \"headless\": false, \"executablePath\": \"C:/Program Files/Google/Chrome/Application/chrome.exe\", \"args\": [] }",
        "ALLOW_DANGEROUS": "true"
      }
    }
  }
}
</pre>
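This config-file form and the add-json command from the section above are two routes to the same registration; a sketch of the CLI route for the Puppeteer server, with the launch options trimmed to just the headless flag for brevity:
<pre>
# register the Puppeteer MCP server via the CLI instead of editing the config file
claude mcp add-json mcp-puppeteer '{"type":"stdio","command":"npx","args":["-y","@modelcontextprotocol/server-puppeteer"],"env":{"PUPPETEER_LAUNCH_OPTIONS":"{ \"headless\": false }"}}'
</pre>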
==== Other ====
https://github.com/1rgs/claude-code-proxy
<pre>
git clone https://github.com/1rgs/claude-code-openai.git
cd claude-code-openai
curl -LsSf https://astral.sh/uv/install.sh | sh
uv run uvicorn server:app --host 0.0.0.0 --port 8082 --reload
</pre>
Environment variables:
<pre>
OPENAI_API_KEY="your-openai-key"
GEMINI_API_KEY="your-google-key"
PREFERRED_PROVIDER="openai"
BIG_MODEL="gpt-4o"        # Example specific model
SMALL_MODEL="gpt-4o-mini" # Example specific model
</pre>
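With the proxy running, Claude Code itself is pointed at it instead of the Anthropic API. A sketch, assuming the proxy is listening on port 8082 as started above and that Claude Code honors the documented ANTHROPIC_BASE_URL environment variable:
<pre>
# route Claude Code's API traffic through the local proxy
ANTHROPIC_BASE_URL=http://localhost:8082 claude
</pre>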
https://github.com/badlogic/lemmy/tree/main/apps/claude-bridge
<pre>
npm install -g @mariozechner/claude-bridge
</pre>
<pre>
# Set API keys (optional - can specify per-command with --apiKey)
export OPENAI_API_KEY=sk-...
export GOOGLE_API_KEY=...

# Discovery workflow
claude-bridge                      # Show available providers
claude-bridge openai               # Show OpenAI models
claude-bridge openai gpt-4o        # Run Claude Code with GPT-4o

# Advanced usage
claude-bridge openai gpt-4o --apiKey sk-...                                            # Custom API key
claude-bridge openai llama3.2 --baseURL http://localhost:11434/v1                      # Local Ollama
claude-bridge openai gpt-4o --baseURL https://openrouter.ai/api/v1 --apiKey sk-or-...  # OpenRouter
claude-bridge openai gpt-4o --debug        # Enable debug logs
claude-bridge --trace -p "Hello world"     # Spy on Claude ↔ Anthropic communication

# All Claude Code arguments work
claude-bridge google gemini-2.5-pro-preview-05-06 --resume --continue
claude-bridge openai o4-mini -p "Hello world"
</pre>
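Because --baseURL accepts any OpenAI-compatible endpoint, claude-bridge should also be able to drive the local llama-server instances from the llama.cpp section above. An untested sketch, assuming a server is still listening on port 9011; llama-server serves whatever model it was started with, so the model name here is mostly informational:
<pre>
# run Claude Code against the local llama-server endpoint
claude-bridge openai qwen2.5-coder-7b --baseURL http://localhost:9011/v1
</pre>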