Prometheus
🐛 Issue Debugging
Common errors, their root causes, and step-by-step fixes.
"LLM returned invalid JSON" during requirements pipeline

Cause: The requirements tools expect Claude to output strict JSON only. Occasionally the model adds explanatory text, markdown headers, or partial responses when the PRD is very large or ambiguous.

Fix:

  • The system already strips ```json code fences automatically — this catches ~95% of cases.
  • If it still fails, retry the failed step individually (the buttons run a single step at a time).
  • Check the server console for the raw LLM output (first 500 chars are logged in the error).
  • If your PRD is very large, the system truncates at 80,000 characters. Try splitting into focused sections.
💡 Running steps individually (not "Run Full Pipeline") lets you retry only the failed step without re-extracting everything.
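The fence-stripping step can be sketched in a few lines. This is an illustrative reconstruction of the documented behaviour, not the system's actual code; the name strip_json_fences and the sample LLM output are hypothetical:

```python
import json
import re

def strip_json_fences(raw: str) -> str:
    """Remove a surrounding ```json ... ``` fence, if present, and return the payload."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", raw, re.DOTALL)
    return match.group(1) if match else raw.strip()

# Typical failure mode: the model wraps valid JSON in prose and a code fence.
raw_llm_output = (
    "Here are the requirements:\n"
    "```json\n"
    '{"requirements": [{"id": "STK-001"}]}\n'
    "```"
)
data = json.loads(strip_json_fences(raw_llm_output))
```

If the payload is truncated (rather than just wrapped), json.loads still raises — which is why retrying the single failed step is the documented fallback.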
📭 Chat returns "No context available" or unhelpful answers

Cause: The FAISS index is empty, or no stored chunk matches the query within the retrieval threshold (L2 distance ≤ 1.5).

Fix:

  • Check Memory Status in the sidebar — "Chunks" should be > 0.
  • If chunks = 0, ingest your documents first via the Ingest panel or POST /ingest.
  • Try rephrasing the question using vocabulary from the document (e.g. exact section names, technical terms).
  • If the FAISS index files (data/faiss_index.index, data/faiss_index.meta) were deleted, re-ingest all documents.
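Why rephrasing matters follows from the retrieval rule itself: only chunks within the distance threshold are returned at all. A toy sketch with made-up 2-D vectors and illustrative chunk text (the real index uses sentence-transformer embeddings inside FAISS):

```python
import math

THRESHOLD = 1.5  # L2 distance cut-off described above

def l2_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query_vec, chunks, threshold=THRESHOLD):
    """Return chunk texts within the distance threshold, nearest first."""
    scored = [(l2_distance(query_vec, vec), text) for text, vec in chunks]
    return [text for dist, text in sorted(scored) if dist <= threshold]

# Toy data: one on-topic chunk near the query, one unrelated chunk far away.
chunks = [
    ("GNSS sensitivity is -162 dBm", [0.1, 0.2]),
    ("Unrelated cooking recipe",     [3.0, 3.0]),
]
hits = retrieve([0.0, 0.0], chunks)
```

A query whose vector lands far from every stored chunk returns an empty list — exactly the "No context available" case.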
📂 "File not found: data/…" — path errors

Cause: The filepath provided doesn't match the actual file on disk.

Common mistakes:

  • Using absolute paths: /data/my-doc.pdf → use data/my-doc.pdf (leading / is auto-stripped, but check casing).
  • Spaces in folder names: "data/VIC-PTG_Files" vs "data/VIC -PTG_Files" — preserve exact spacing.
  • Wrong extension: .PDF vs .pdf (Linux is case-sensitive).
⚠ All paths must be relative to the project root and must be inside the data/ directory for write operations.
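The path rules above can be sketched as a small validator. This is a reconstruction of the documented behaviour, not the server's code; normalise_write_path is a hypothetical name:

```python
from pathlib import PurePosixPath

def normalise_write_path(filepath: str) -> str:
    """Strip a leading '/', block traversal, and require the data/ prefix for writes."""
    path = filepath.lstrip("/")              # leading '/' is auto-stripped
    parts = PurePosixPath(path).parts
    if ".." in parts:
        raise ValueError("Path traversal blocked")
    if not parts or parts[0] != "data":
        raise ValueError("Write access restricted to data/ directory only")
    return path

ok = normalise_write_path("/data/my-doc.pdf")   # → "data/my-doc.pdf"
```

Note that casing and spacing are preserved verbatim, which is why "data/VIC -PTG_Files" and ".PDF" mismatches still fail lookup on a case-sensitive filesystem.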
🔗 "No stakeholder requirements found" — pipeline order error

Cause: You attempted to run ② Generate SYS Reqs or ③ Generate SWR Reqs without first running ① Extract STK Needs, or the extraction step failed silently.

Fix:

  • Always run the pipeline in order: STK → SYS → SWR. The "Run Full Pipeline" button enforces this automatically.
  • If running individually, confirm the STK step showed a success count before proceeding.
  • Check GET /requirements/list?level=stakeholder returns entries before proceeding.
💡 Re-running a step appends new requirements — IDs increment from the current maximum (e.g. if STK-012 exists, next run starts at STK-013).
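The ID-assignment rule can be sketched as follows. This is illustrative; next_id is a hypothetical helper, not the store's actual code:

```python
import re

def next_id(existing_ids, prefix="STK"):
    """Next sequential ID: increments from the current maximum for the given prefix."""
    nums = [int(m.group(1)) for i in existing_ids
            if (m := re.fullmatch(prefix + r"-(\d+)", i))]
    return f"{prefix}-{(max(nums, default=0) + 1):03d}"

next_id(["STK-001", "STK-012"])  # → "STK-013"
```

Because numbering never resets, re-running a step produces a second, non-overlapping batch rather than replacing the first.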
🗄 Memory Status shows "❌ Not loaded" / index missing

Cause: FAISS index files (data/faiss_index.index and data/faiss_index.meta) are absent — either first run or accidentally deleted.

Fix:

  • The system will create a new empty index automatically on startup — no action needed to fix the error itself.
  • Re-ingest all your documents to rebuild the index. The files will be persisted on the next document add.
🔒 "Write access restricted to data/ directory only"

Cause: file_write enforces a security boundary — writes are only allowed inside data/.

Fix: Change your filepath to start with data/, e.g. data/summary.txt instead of results/summary.txt.

📄 DOCX / Excel export fails or produces empty tables

DOCX fails entirely:

  • Verify python-docx is installed: pip install python-docx.
  • Check the server console for a Python traceback.

Excel fails entirely:

  • Verify openpyxl is installed: pip install openpyxl.

Export succeeds but tables are empty:

  • No requirements exist at that level yet — run the pipeline first.
  • For SyRS: must have SYS reqs. For SRS: must have SWR reqs.
💡 Exported files are saved server-side to data/requirements/exports/. The export response shows the full filepath.
🐌 Pipeline is very slow (60–120 seconds per step)

Cause: This is expected behaviour. Each pipeline step (STK, SYS, SWR) makes a full Claude API call with the entire PRD or existing requirements as context.

  • STK extraction: sends the full PRD text (up to 80k chars) to Claude.
  • SYS generation: sends all STK requirements as JSON to Claude.
  • SWR generation: sends all SYS requirements as JSON to Claude.

For a 50-page PRD generating 15 STK → 40 SYS → 80 SWR, total time is typically 2–4 minutes.

⚠ Do not refresh the page while the pipeline is running — the frontend will lose the progress indicator, but the server continues processing.
🌐 API returns 422 Unprocessable Entity

Cause: The request body is missing a required field or the type is wrong (e.g. sending a number where a string is expected).

Fix:

  • Visit http://localhost:8000/docs — Swagger UI shows the exact schema for every endpoint.
  • Check the 422 response body: it contains a detail array identifying exactly which field failed and why.
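A 422 body follows FastAPI's standard validation-error shape; its detail array can be turned into readable messages like so (the sample body and the describe_422 helper are illustrative):

```python
import json

# Example 422 response body in FastAPI's standard shape (content is illustrative).
body = json.loads("""
{"detail": [{"loc": ["body", "prd_path"], "msg": "field required", "type": "value_error.missing"}]}
""")

def describe_422(body):
    """Turn each validation error into a 'field.path: reason' string."""
    return [f'{".".join(map(str, e["loc"]))}: {e["msg"]}' for e in body["detail"]]
```

Here describe_422(body) pinpoints the offending field as "body.prd_path: field required", which is usually enough to fix the request without consulting the schema.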
💥 API returns 500 Internal Server Error

Cause: An unhandled exception in a tool or the orchestrator. Common causes:

  • Missing Python dependency (pypdf, python-docx, openpyxl, sentence-transformers).
  • Corrupted data/requirements/requirements.json — delete the file and re-run the pipeline.
  • Corrupted FAISS index — delete data/faiss_index.* and re-ingest documents.
  • Expired or invalid ANTHROPIC_API_KEY in .env.

Fix: Check the server console (uvicorn output) for the full Python traceback.

🔍 Requirement ID not found (404 on GET/PATCH)

Cause: The ID passed to GET /requirements/{req_id} or PATCH /requirements/{req_id} does not exist in the store.

Fix: Use GET /requirements/list to retrieve all valid IDs first, or browse the Requirements tab list.

✏️ Query Best Practices
How to phrase queries so Prometheus retrieves the right context and gives you accurate answers.
How Prometheus finds answers: Your query is converted into a semantic vector and matched against ingested document chunks in the knowledge base. The quality of retrieval — and therefore the quality of the answer — depends directly on how the query is phrased.
The Golden Rules
Rule | ❌ Avoid | ✅ Better
Use positive framing | "Range Low is not triggered at 20 km" | "Range Low notification trigger threshold"
Name the subject precisely | "What are the notification requirements?" | "VIC Range Low notification entry conditions"
Describe symptoms, not conclusions | "The feature is broken" | "Expected behavior when vehicle range is below 20 km"
One question at a time | "List GNSS specs, Bluetooth specs, and Wi-Fi bands" | "What are the GNSS sensitivity requirements?"
Use domain terminology | "When does the low battery popup show?" | "VIC Range Critical notification display condition"
Reference IDs when known | "Tell me about the range notification" | "Show me VIC_Not_125 through VIC_Not_130"
Proven Query Patterns
🔍 Looking up a specific requirement or specification

Use the exact requirement name, signal name, or ID from the document. Prometheus will retrieve the closest matching chunks.

"What is the entry condition for the Range Low notification?"
"GNSS sensitivity specification in the VIC hardware guide"
"Show requirements VIC_Not_125 to VIC_Not_130"
⚙️ Debugging a field issue or unexpected behaviour

State the observed symptom in positive, technical terms. Avoid negations ("not working", "not triggered") — phrase what you are trying to find, not what is absent.

✅ "Range Low notification trigger condition and auto-dismiss timeout"
✅ "CAN signal for estimated range in the VIC BMS interface"
❌ "Why is the Range Low notification not triggering at 20 km?"
The search engine embeds your words semantically. Negations ("not triggered") cause the vector to match unrelated "not present" or "not found" text instead of the actual specification.
📊 Comparing two features or specifications

State both subjects explicitly with the attribute you want compared.

"Compare Range Low and Range Critical notification threshold and auto-dismiss behaviour"
"Difference between STK and SYS requirements for Wi-Fi connectivity"
📄 Asking about a specific document or revision

Name the document or version when multiple sources have been ingested. This guides retrieval to the right chunks.

"According to the Matter PRD VIC V1.5, what is the GNSS accuracy requirement?"
"In the HMI wireframe Phase 2, what screen appears after dismissing a notification?"
"Agenewtech hardware design guide — UART configuration parameters"
📋 Asking for a list or summary

Scope your list request to a specific feature area or document section. Broad list requests retrieve chunks that may span unrelated topics.

✅ "List all VIC notification types and their trigger conditions"
✅ "Summarise the connectivity requirements in the VIC PRD"
❌ "List all requirements"
Role-Specific Query Style
Role | Response style | Recommended query style
SERVICE | Symptom classification → 3 root causes → field action | Describe the observable symptom in field-service language. State the unit, the observed behaviour, and any error codes visible.
SYSTEM | Subsystem analysis → interface trace → spec deviations | Name the subsystem, the interface, and the signal or message. Reference the specification section if known.
DEVELOPER | Code-level root cause → register/signal state → fix | Include register names, function names, CAN message IDs, or firmware version where relevant.
Quick Reference — Query Anatomy
A well-formed query has up to four parts:
Subject + Attribute + Context + Action

Example:
Range Low notification + trigger threshold and auto-dismiss timeout + VIC PRD V1.5 + compare with Range Critical
Not all four parts are required — even just Subject + Attribute is enough for most queries.
📋 Requirement Generation Guide
How to use the ASPICE 4.0 requirements pipeline end-to-end — from PRD ingestion to exported specification documents.
Step-by-Step Workflow
1

Prepare your PRD

Place the document in the data/ folder. Supported formats: PDF, DOCX, TXT. Numbered headings and clear section titles improve extraction quality significantly.

2

Open the Requirements tab → Enter PRD path

Type the relative path in the PRD field (e.g. data/PCE_Product_Requirements_Document.docx). Do not use a leading /.

3

Click "① Extract STK Needs"

Claude reads the PRD and extracts discrete stakeholder needs. Each need is assigned a STK-NNN ID and saved to the store. Typical PRD → 8–20 needs.

4

Click "② Generate SYS Reqs"

Claude generates testable, implementation-independent system requirements from the stored STK needs. Uses "shall" language. Each SYS req is linked to its source STK IDs. Typical ratio: 2–4 SYS per STK.

5

Click "③ Generate SWR Reqs"

Claude decomposes each SYS requirement into specific software behaviours, APIs, and data handling rules. Each SWR is linked back to its SYS source. Typical ratio: 2–3 SWR per SYS.

6

Review, filter, and edit requirements

Use the filter bar to browse by level, status, and priority. Click any requirement to see full details. Update status (Draft → Agreed) and add notes directly in the detail panel.

7

Open Traceability Matrix

Click "▼ Show" on the Traceability Matrix panel. Coverage bars show the percentage of STK covered by SYS, and SYS covered by SWR. Orphan IDs are flagged for attention.

8

Export documents

Use the Export buttons: SyRS (DOCX) for system-level spec, SRS (DOCX) for software-level spec, Traceability (Excel) for the full bidirectional matrix. Files are saved to data/requirements/exports/.

Understanding Requirement Levels
Level | ID Prefix | Process | Language | Focus
Stakeholder | STK-NNN | ASPICE SYS.1 | User/business terms | What stakeholders need — not system behaviour
System | SYS-NNN | ASPICE SYS.2 | "The system shall…" | What the system must do — implementation-independent
Software | SWR-NNN | ASPICE SWE.1 | "The software shall…" | Software behaviour, APIs, algorithms — no hardware
Status Lifecycle
Status | Meaning | Who sets it
Draft | Auto-assigned on creation; awaiting review | System (automatic)
Agreed | Reviewed and approved by stakeholders | Engineer (manual via PATCH or UI)
Implemented | Development complete | Engineer
Verified | Test evidence exists | Test team
Export Formats Reference
Export | Format | Contents | Output File
SyRS | DOCX | Document Control → STK Summary → SYS Requirements Table (ID/Title/Description/Rationale/Priority/Status/Verification/Derived From) → Traceability Appendix | SyRS_v1.docx
SRS | DOCX | Same structure, but SYS Summary → SWR Requirements Table | SRS_v1.docx
Traceability | XLSX | Sheet 1: STK→SYS · Sheet 2: SYS→SWR · Sheet 3: Coverage Summary | Traceability.xlsx
SyRS / SRS | JSON | Raw requirement objects — useful for programmatic downstream processing | SyRS_v1.json
Tips for Better Results
📝 Write PRDs with clear, numbered sections

The extraction prompt instructs Claude to reference PRD sections in the source field. Numbered headings (1.1, 2.3.4…) or named sections like "User Interface Requirements" give cleaner traceability than unstructured prose.

🔁 Re-running a step appends new requirements

Each pipeline step appends to the store — it does not overwrite. If you run STK extraction twice, you get two sets with non-overlapping IDs (e.g. STK-001…STK-012, then STK-013…STK-020). To start fresh, delete data/requirements/requirements.json before running.

🔗 Bidirectional links are automatic

When a SYS requirement lists derived_from: ["STK-001"], the store automatically appends the SYS ID to STK-001.allocated_to. You never need to manually maintain both directions of the link.
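The automatic back-link can be sketched as follows. This is a reconstruction of the documented behaviour (the real store also persists to data/requirements/requirements.json; add_requirement is a hypothetical name):

```python
def add_requirement(store, req):
    """Add a requirement and mirror each derived_from link into the parent's allocated_to."""
    store[req["id"]] = req
    for parent_id in req.get("derived_from", []):
        parent = store.get(parent_id)
        if parent is not None and req["id"] not in parent.setdefault("allocated_to", []):
            parent["allocated_to"].append(req["id"])

store = {"STK-001": {"id": "STK-001", "derived_from": []}}
add_requirement(store, {"id": "SYS-001", "derived_from": ["STK-001"]})
# store["STK-001"]["allocated_to"] is now ["SYS-001"]
```

Mirroring on write is what keeps the traceability matrix consistent without any manual bookkeeping.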

🔌 API Reference
All endpoints are served on http://localhost:8000. Interactive Swagger docs (try-it-out) available at http://localhost:8000/docs.
System
GET /health Liveness check
Response 200
{ "status": "ok", "memory_chunks": 1831 }
GET /memory/status FAISS index stats + recent sessions
Response 200
{ "total_chunks": 1831, "index_loaded": true, "recent_sessions": [{ "file": "session_42.json", "timestamp": "...", "query": "..." }] }
Inference
POST /query Full PCE pipeline: retrieve → plan → execute → reflect
Request Body
{ "query": "What are the GNSS sensitivity parameters?" }
Response 200
{ "answer": "Based on the retrieved context...", "errors": [], "retrieved_chunks": 5, "session_id": "session_76.json" }
Memory
POST /ingest Index a file or folder into FAISS
Request Body
{ "path": "data/PCE_Product_Requirements_Document.docx" } // or a folder: "data/VIC -PTG_Files"
Response 200
{ "success": true, "message": "Ingested 'PRD': 42 section(s)", "chunks_added": 42, "total_chunks": 1873, "details": {} }
Requirements (ASPICE 4.0)
POST /requirements/generate Run ASPICE pipeline (one or all levels)
Request Body
{ "prd_path": "data/PRD.docx", "levels": ["stakeholder", "system", "software"] } // omit levels you don't want to run
Response 200
{ "success": true, "message": "Pipeline completed successfully.", "counts": { "stakeholder": 12, "system": 34, "software": 78 }, "errors": [] }
GET /requirements/list Filtered requirement list
Query Params (all optional)
level=stakeholder|system|software
status=draft|agreed|implemented|verified
priority=high|medium|low
Example: /requirements/list?level=system&status=draft
Response 200
[ { "id": "SYS-001", "req_level": "system", "title": "Battery SOC Display", "description": "The system shall...", "derived_from": ["STK-002"], "allocated_to": ["SWR-005"], ... } ]
GET /requirements/traceability Full STK↔SYS↔SWR matrix with coverage
Response 200
{ "stakeholder_needs": [...], "system_requirements": [...], "software_requirements": [...], "links": [{ "from": "STK-001", "to": "SYS-001", "level": "STK→SYS" }], "coverage": { "stakeholder": { "total": 12, "covered": 12, "orphans": [] }, ... } }
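The coverage numbers in this response can be derived from the links alone. A sketch (illustrative, not the server's implementation; the coverage function name is hypothetical):

```python
def coverage(requirement_ids, links):
    """Coverage summary in the response's shape: total, covered, and orphan IDs."""
    covered = {link["from"] for link in links}   # IDs that are the source of at least one link
    orphans = [rid for rid in requirement_ids if rid not in covered]
    return {"total": len(requirement_ids),
            "covered": len(requirement_ids) - len(orphans),
            "orphans": orphans}

links = [{"from": "STK-001", "to": "SYS-001", "level": "STK→SYS"}]
coverage(["STK-001", "STK-002"], links)
# → {"total": 2, "covered": 1, "orphans": ["STK-002"]}
```

An orphan is simply a requirement with no outgoing link to the next level — the IDs the UI flags for attention.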
POST /requirements/export Generate SyRS / SRS / Traceability document
Request Body
{
  "doc_type": "syrs",   // syrs | srs | traceability
  "format": "docx"      // docx | xlsx | json | markdown
}
Response 200
{ "filepath": "data/requirements/exports/SyRS_v1.docx", "doc_type": "syrs", "format": "docx" }
GET /requirements/{req_id} Single requirement by ID
Path Param
req_id = "STK-001" | "SYS-003" | "SWR-012"
Response 404
{ "detail": "Requirement SYS-099 not found." }
PATCH /requirements/{req_id} Partial update — any field is optional
Request Body (all optional)
{ "status": "agreed", "notes": "Reviewed in sprint 3", "priority": "high", "title": "...", "description": "...", "rationale": "...", "verification_method": "test", "tags": ["sprint-3", "safety"] }
Response 200
// Returns the full updated requirement object
{ "id": "SYS-003", "status": "agreed", "updated_at": "2026-02-19T14:32:00+00:00", ... }
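A minimal Python client for the endpoints above might look like this. It is a sketch using only the standard library; the HTTP call assumes a server running on localhost:8000, and patch_body / patch_requirement are hypothetical helper names:

```python
import json
import urllib.request

BASE = "http://localhost:8000"

def patch_body(**fields):
    """Build a PATCH body: every field is optional, so drop unset (None) values."""
    return {k: v for k, v in fields.items() if v is not None}

def patch_requirement(req_id, **fields):
    """PATCH /requirements/{req_id} with only the supplied fields."""
    req = urllib.request.Request(
        f"{BASE}/requirements/{req_id}",
        data=json.dumps(patch_body(**fields)).encode(),
        headers={"Content-Type": "application/json"},
        method="PATCH",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Requires a running server; a 404 here means the ID is not in the store.
    print(patch_requirement("SYS-003", status="agreed", notes="Reviewed in sprint 3"))
```

Sending only the changed fields matches the endpoint's partial-update semantics and avoids accidentally overwriting other attributes.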
🛠 Tool Reference
Every tool Prometheus can invoke. The planner selects tools automatically; you can also call them via the API or natural-language commands.
Core Tools
echo Core

Debug/test tool. Echoes a message back through the executor. Useful for verifying that the planner-executor pipeline is running correctly.

Args {"message": "string"} Returns {"result": "ECHO RESULT: ..."}
file_read Core

Read any text file on the server. Useful for loading previously written outputs, config files, or data files.

Args {"filepath": "data/output.txt"} Returns {"result": "<file content>"} Note Path must exist; reads full file
file_write Core

Write a string to a file. Restricted to the data/ directory for security. Creates parent dirs automatically.

Args {"filepath": "data/out.txt", "content": "..."} Security Path traversal blocked; data/ only
document_ingest Core

Index a file or entire folder into FAISS semantic memory. Supports PDF, DOCX, TXT, JSON, CSV, XLS, XLSX. Folder mode walks recursively.

Args {"filepath": "data/doc.pdf"} Folder {"filepath": "data/my-folder/"} Returns sections_indexed count
answer_from_context Core

Synthesise a grounded answer using Claude and the retrieved FAISS chunks. The orchestrator auto-injects the context at runtime — always leave context as "" in plans.

Args {"question": "...", "context": ""} Note context injected automatically
Requirements Tools (ASPICE 4.0)
extract_stakeholder_needs ASPICE SYS.1

Reads a PRD document and uses Claude to extract discrete, atomic stakeholder needs. Saves as STK-001… to the requirements store with auto-assigned IDs.

Args {"filepath": "data/PRD.docx"} Supports PDF, DOCX, TXT, JSON Returns {"extracted": N, "requirements": [...]}
generate_system_requirements ASPICE SYS.2

Generates testable, implementation-independent system requirements from stored STK needs using Claude. Creates bidirectional STK↔SYS links automatically.

Args {"from_ids": "all"} Partial {"from_ids": ["STK-001","STK-002"]} Prereq extract_stakeholder_needs first
generate_software_requirements ASPICE SWE.1

Generates software-specific requirements from stored SYS requirements. Focuses on software behaviour, APIs, algorithms, and interfaces. Creates SYS↔SWR links.

Args {"from_ids": "all"} Partial {"from_ids": ["SYS-001"]} Prereq generate_system_requirements first
get_traceability_matrix ASPICE

Returns the full STK↔SYS↔SWR traceability matrix with bidirectional links and coverage analysis (covered vs. total, orphan detection) for each requirement level.

Args {"format": "markdown"} Also {"format": "json"} Returns table + coverage summary
export_requirements_doc ASPICE

Exports requirements to structured documents. SyRS/SRS generated as DOCX (python-docx) with tables + traceability appendix. Traceability as Excel with 3 sheets.

doc_type "syrs" | "srs" | "traceability" format "docx" | "xlsx" | "json" | "markdown" Output data/requirements/exports/
Natural Language Triggers

The planner recognises these phrases and maps them to the correct tool pipeline automatically:

Say this… | Planner runs…
"ingest data/my-folder" | document_ingest
"what are the GNSS sensitivity parameters?" | answer_from_context (context auto-filled)
"write summary to data/out.txt" | answer_from_context → file_write
"generate ASPICE requirements from data/PRD.docx" | extract_stakeholder_needs → generate_system_requirements → generate_software_requirements
"show traceability matrix" | get_traceability_matrix
"export SyRS as docx" | export_requirements_doc
💡 Product Management Suite Philosophy
The cognitive loop that connects intent to truth
Aletheia Aletheia — Revealing Operational Truth

Aletheia is a Greek philosophical concept meaning "disclosure" or "unconcealedness" — the revelation of what is real. In the context of this platform, Aletheia is the cognitive query engine that surfaces operational truth from your system's knowledge base.

Where a typical search tool returns documents, Aletheia synthesises answers — drawing from ingested engineering data, test reports, field logs, and technical specifications to produce grounded, role-aware responses. It doesn't guess. It retrieves, reasons, and responds with the precision of a senior engineer who has read every document in the system.

Aletheia → Reveals the operational truth
Field questions answered. Diagnostics grounded. Engineering context delivered at the right role, at the right depth.
Veridion Veridion — Defining Structured Intent

Veridion feels like it was forged in a Roman engineering lab and then upgraded with enterprise SaaS discipline.

It's built from veritas — Latin for truth. Not mystical truth. Structured truth. Documented truth. Signed-off truth. The kind auditors respect and engineers rely on.

A requirement management tool is not creative software. It is institutional memory. It is the system that prevents future arguments by preserving present clarity.

Veridion sounds
  • Authoritative
  • Stable
  • Enterprise-ready
  • Technically mature
What requirements do
  • Freeze intent
  • Anchor architecture
  • Control change
  • Protect scope from entropy

In complex automotive stacks — especially when juggling SYS, SW, safety, cybersecurity, and OTA constraints — the requirement tool becomes a constitutional framework. Veridion fits that role. It sounds like something that enforces structure without drama.

Names ending in "-ion" tend to feel scientific and institutional — validation, regulation, configuration. The suffix signals process and rigour. That's subconscious but powerful.

📋 Kronos — Orchestrating Implementation & Accountability

Kronos — named for the Greek embodiment of ordered time — is the orchestration layer of the Prometheus suite. Where Veridion defines what must be built and Aletheia surfaces what is happening in the field, Kronos answers: when, by whom, and in what sequence. It manages the dual rhythm of agile delivery (sprint cadence) and structured programme governance (ASPICE V-model milestones), with every task traceable back to a software requirement and forward to a generated code session.

Kronos closes the gap between intent and implementation — making the invisible work of engineering visible and accountable.

Kronos → Orchestrates the ordered sequence of delivery
Sprint cadence + ASPICE milestones. Requirement traceability. Code session linkage. One shared task pool. Two views of truth.
📚 Lekha — Institutionalising Hard-Won Knowledge

Lekha is Sanskrit for "writing" — the act of committing knowledge to permanent record. In the Prometheus suite, Lekha is the Lessons Learned module that lives inside Veridion, ensuring that failures experienced in one programme inform the requirements of the next.

Every engineering organisation accumulates hard-won knowledge — root causes painstakingly isolated, resolutions painfully discovered, preventive actions agreed in post-mortems. Without a system, this knowledge evaporates. Engineers leave. PRDs are written without consulting past failures. The same defect is rediscovered in the next programme.

Lekha solves this by sharing Aletheia's FAISS semantic index. Lessons are stored with the same vector embeddings as engineering documents — so when a new set of requirements is generated, the AI automatically retrieves relevant past failures and injects them into the generation context. Past pain becomes future precision.

What Lekha captures
  • Root cause + problem statement
  • ASPICE phase where it was found
  • Resolution + prevention actions
  • Linked requirement IDs
How it feeds Veridion
  • Auto-injected into pipeline context
  • Surfaced on each requirement card
  • Queryable via Aletheia Q&A
  • Bulk import from CSV/XLSX
Lekha → Converts past failures into future requirements
3-layer deduplication. Shared FAISS index with Aletheia. Automatic pipeline injection. Related lessons on every requirement card.
🔄 The Closed-Loop Engineering Intelligence System

Together, Aletheia, Veridion, Lekha, and Kronos form a complete engineering intelligence loop:

Veridion → Defines the intended truth
Aletheia → Reveals the operational truth
Kronos → Orchestrates the ordered sequence of delivery
Lekha → Preserves failure knowledge for future intent
The cognitive loop
Intent → Implementation → Telemetry → Analysis → Lessons Captured → Updated Intent

You're not building four tools. You're building a closed-loop engineering intelligence system — where every requirement is informed by truth and past failure, and every query is anchored by structured intent.
