
Commit e9be65b - "initial new ui"
1 parent: 6a8e5f9

8,293 files changed, +2,725,894 −21 lines


.claude/settings.json

Lines changed: 7 additions & 0 deletions
```json
{
  "permissions": {
    "allow": [
      "Bash(xargs:*)"
    ]
  }
}
```

mcp_agent.config.yaml

Lines changed: 3 additions & 3 deletions
```diff
@@ -20,7 +20,7 @@ mcp:
   bocha-mcp:
     args:
       - tools/bocha_search_server.py
-    command: python3
+    command: python
     env:
       BOCHA_API_KEY: ''
       PYTHONPATH: .
@@ -105,12 +105,12 @@ mcp:
 # LLM Provider Priority (Choose which LLM to use)
 # Options: "anthropic", "google", "openai"
 # If not set or provider unavailable, will fallback to first available provider
-llm_provider: "google"  # Set to "google", "anthropic", or "openai"
+llm_provider: "openai"  # Set to "google", "anthropic", or "openai"
 
 openai:
   base_max_tokens: 40000
   # default_model: google/gemini-2.5-pro
-  default_model: anthropic/claude-sonnet-4.5
+  default_model: google/gemini-3-flash-preview
   # default_model: openai/gpt-oss-120b
   # default_model: deepseek/deepseek-v3.2-exp
   # default_model: moonshotai/kimi-k2-thinking
```

mcp_agent.secrets.yaml

Lines changed: 3 additions & 3 deletions
```diff
@@ -1,13 +1,13 @@
 # API keys for LLM providers
-# You can either fill these in directly, or use environment variables:
+# Config file takes priority over env vars
+# Environment variables are only used as a fallback when the config file is empty:
 # - GOOGLE_API_KEY / GEMINI_API_KEY
 # - ANTHROPIC_API_KEY
 # - OPENAI_API_KEY
-# Environment variables take precedence over values in this file.
 
 openai:
   api_key: ""
-  base_url: ""
+  base_url: "https://openrouter.ai/api/v1"
 anthropic:
   api_key: ""
 google:
```

new_ui/README.md

Lines changed: 152 additions & 0 deletions
# DeepCode New UI

Modern, intelligent UI for DeepCode - AI-powered code generation platform.

## Technology Stack

- **Backend**: FastAPI (Python)
- **Frontend**: React 18 + TypeScript + Vite
- **Styling**: Tailwind CSS + shadcn/ui
- **State Management**: Zustand
- **Real-time Communication**: WebSocket
- **Workflow Visualization**: React Flow
- **Code Display**: Monaco Editor

## Features

### Intelligent Features

1. **Real-time Streaming Output** - Watch code generation in real time, like ChatGPT
2. **Smart Context Awareness** - Remembers conversation history, provides intelligent suggestions
3. **Adaptive Interface** - Layout adjusts based on task type
4. **Visual Workflow** - Draggable flow-chart style task visualization

### Design Style

- Clean, modern design inspired by Notion/Linear
- Light theme with blue accent colors
- Inter font for text, JetBrains Mono for code
## Project Structure

```
new_ui/
├── backend/                 # FastAPI backend
│   ├── main.py              # Entry point
│   ├── config.py            # Configuration
│   ├── api/
│   │   ├── routes/          # REST API endpoints
│   │   └── websockets/      # WebSocket handlers
│   ├── services/            # Business logic
│   └── models/              # Pydantic models
├── frontend/                # React frontend
│   ├── src/
│   │   ├── components/      # React components
│   │   ├── pages/           # Page components
│   │   ├── hooks/           # Custom hooks
│   │   ├── stores/          # Zustand stores
│   │   ├── services/        # API client
│   │   └── types/           # TypeScript types
│   ├── package.json
│   └── vite.config.ts
└── scripts/
    ├── start_dev.sh         # Development startup
    └── build.sh             # Production build
```
## Quick Start

### Prerequisites

- Python 3.10+
- Node.js 18+
- npm or yarn

### Development

1. **Start both backend and frontend:**

   ```bash
   cd new_ui
   chmod +x scripts/start_dev.sh
   ./scripts/start_dev.sh
   ```

2. **Or start separately:**

   Backend:
   ```bash
   cd new_ui/backend
   pip install -r requirements.txt  # First time only
   uvicorn main:app --reload --port 8000
   ```

   Frontend:
   ```bash
   cd new_ui/frontend
   npm install  # First time only
   npm run dev
   ```

3. **Access the application:**
   - Frontend: http://localhost:5173
   - Backend API: http://localhost:8000
   - API Documentation: http://localhost:8000/docs

### Production Build

```bash
cd new_ui
chmod +x scripts/build.sh
./scripts/build.sh
```
## API Endpoints

### REST API

| Method | Endpoint | Description |
|--------|----------|-------------|
| POST | `/api/v1/workflows/paper-to-code` | Start paper-to-code workflow |
| POST | `/api/v1/workflows/chat-planning` | Start chat-based planning |
| GET | `/api/v1/workflows/status/{task_id}` | Get workflow status |
| POST | `/api/v1/requirements/questions` | Generate guiding questions |
| POST | `/api/v1/requirements/summarize` | Summarize requirements |
| POST | `/api/v1/files/upload` | Upload file |
| GET | `/api/v1/config/settings` | Get settings |

### WebSocket Endpoints

| Endpoint | Description |
|----------|-------------|
| `/ws/workflow/{task_id}` | Real-time workflow progress |
| `/ws/code-stream/{task_id}` | Streaming code output |
| `/ws/logs/{session_id}` | Live log streaming |
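The path templates in these tables can be filled in mechanically. The sketch below only builds the URLs (the base addresses and the task id are placeholders; actual HTTP and WebSocket calls are left to a client of your choice):

```python
REST_BASE = "http://localhost:8000"
WS_BASE = "ws://localhost:8000"


def rest_url(template: str, **params: str) -> str:
    """Fill path parameters such as {task_id} and prefix the REST base URL."""
    return REST_BASE + template.format(**params)


def ws_url(template: str, **params: str) -> str:
    """Same for WebSocket endpoints, using the ws:// scheme."""
    return WS_BASE + template.format(**params)


print(rest_url("/api/v1/workflows/status/{task_id}", task_id="abc123"))
# http://localhost:8000/api/v1/workflows/status/abc123
print(ws_url("/ws/code-stream/{task_id}", task_id="abc123"))
# ws://localhost:8000/ws/code-stream/abc123
```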
## Configuration

The new UI reads configuration from the existing DeepCode config files:

- `mcp_agent.config.yaml` - LLM provider, models, MCP server settings
- `mcp_agent.secrets.yaml` - API keys

## Integration

The new UI integrates with existing DeepCode components:

- `workflows/agent_orchestration_engine.py` - Core workflow execution
- `workflows/agents/` - Specialized agents
- `utils/llm_utils.py` - LLM provider management

## Browser Support

- Chrome (recommended)
- Firefox
- Safari
- Edge

## License

MIT License - see the main DeepCode license.

new_ui/backend/__init__.py

Lines changed: 6 additions & 0 deletions
```python
"""
DeepCode New UI Backend
FastAPI-based backend for the new DeepCode UI
"""

__version__ = "1.0.0"
```

new_ui/backend/api/__init__.py

Lines changed: 1 addition & 0 deletions
```python
"""API package"""
```
Lines changed: 1 addition & 0 deletions
```python
"""API Routes"""
```
Lines changed: 92 additions & 0 deletions
```python
"""
Configuration API Routes
Handles LLM provider and settings management
"""

from fastapi import APIRouter, HTTPException
import yaml

from settings import (
    load_mcp_config,
    load_secrets,
    get_llm_provider,
    get_llm_models,
    is_indexing_enabled,
    CONFIG_PATH,
)
from models.requests import LLMProviderUpdateRequest
from models.responses import ConfigResponse, SettingsResponse


router = APIRouter()


@router.get("/settings", response_model=SettingsResponse)
async def get_settings():
    """Get current application settings"""
    config = load_mcp_config()
    provider = get_llm_provider()
    models = get_llm_models(provider)

    return SettingsResponse(
        llm_provider=provider,
        models=models,
        indexing_enabled=is_indexing_enabled(),
        document_segmentation=config.get("document_segmentation", {}),
    )


@router.get("/llm-providers", response_model=ConfigResponse)
async def get_llm_providers():
    """Get available LLM providers and their configurations"""
    secrets = load_secrets()
    config = load_mcp_config()

    # Get available providers (those with API keys configured)
    available_providers = []
    for provider in ["google", "anthropic", "openai"]:
        if secrets.get(provider, {}).get("api_key"):
            available_providers.append(provider)

    current_provider = get_llm_provider()
    models = get_llm_models(current_provider)

    return ConfigResponse(
        llm_provider=current_provider,
        available_providers=available_providers,
        models=models,
        indexing_enabled=is_indexing_enabled(),
    )


@router.put("/llm-provider")
async def set_llm_provider(request: LLMProviderUpdateRequest):
    """Update the preferred LLM provider"""
    secrets = load_secrets()

    # Verify provider has an API key
    if not secrets.get(request.provider, {}).get("api_key"):
        raise HTTPException(
            status_code=400,
            detail=f"Provider '{request.provider}' does not have an API key configured",
        )

    # Update config file
    try:
        config = load_mcp_config()
        config["llm_provider"] = request.provider

        with open(CONFIG_PATH, "w", encoding="utf-8") as f:
            yaml.dump(config, f, default_flow_style=False)

        return {
            "status": "success",
            "message": f"LLM provider updated to '{request.provider}'",
            "provider": request.provider,
        }

    except Exception as e:
        raise HTTPException(
            status_code=500,
            detail=f"Failed to update configuration: {str(e)}",
        )
```
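The availability check in `get_llm_providers` is plain dict logic over the secrets mapping. A self-contained sketch (the `secrets` shape mirrors `mcp_agent.secrets.yaml`; the keys here are made up):

```python
def available_providers(secrets: dict) -> list[str]:
    """Providers that have a non-empty api_key in the secrets mapping."""
    return [
        provider
        for provider in ("google", "anthropic", "openai")
        if secrets.get(provider, {}).get("api_key")
    ]


secrets = {
    "google": {"api_key": ""},             # empty -> not available
    "anthropic": {"api_key": "sk-ant-x"},  # made-up key
    "openai": {"api_key": "sk-or-x"},      # made-up key
}
print(available_providers(secrets))  # ['anthropic', 'openai']
```

The PUT route then rejects any provider outside this list with a 400 before touching the config file.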
