# Thread-ly SDK Documentation
Use the Python SDK to connect repositories, generate product narratives (the Markdown and JSON output, called the Sprint Translation Report in the API docs), and upload context documents from your own scripts, CI jobs, or internal tools.
## Installation

```bash
pip install threadly-sdk
```

Or install from source:

```bash
cd sdk/
pip install -e .
```
## Authentication

All SDK endpoints require an API key. Create one via the backend CLI (requires the Thread-ly backend repo):

```bash
cd /path/to/threadly_v2
PYTHONPATH=. python -m app.cli create-api-key --name "My Key"
```
Store the key securely. It starts with `tly_` and is shown only once. Pass it to the client via the constructor or the `THREADLY_API_KEY` environment variable:
```python
from threadly import ThreadlyClient

client = ThreadlyClient(
    base_url="https://api.thread-ly.com",
    api_key="tly_your_key_here",
)
```

## Quick Start
One-call convenience: connect the repo and run the full pipeline (ingest plus product narrative generation):

```python
from threadly import generate_report

result = generate_report("owner", "repo-name", days=14, api_key="tly_...")
print(result["markdown"])
```

`generate_report` runs the full pipeline in one call; no prior ingestion is needed. Large repos may take several minutes (default timeout: 5 minutes).
## Full Client API

### Repos

- `connect_repo(owner, name)` — Connect a GitHub repository. Returns a repo object with `id`.
- `list_repos()` — List all connected repos.
- `get_repo(repo_id)` — Get repo details.
### Ingestion

- `trigger_ingest(repo_id, limit=50)` — Start background ingestion. Returns `{ status, task_id, message }`. Poll `get_job_status(task_id)` for completion.
- `get_job_status(task_id)` — Check the status of a background job.
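The trigger-then-poll pattern can be wrapped in a small helper. This is a minimal sketch, assuming the job-status payload exposes a `state` field that ends at `SUCCESS` or `FAILURE` (as described in the workflow notes); `wait_for_job` is a hypothetical helper name, not part of the SDK:

```python
import time

def wait_for_job(poll, task_id, timeout=300, interval=5):
    """Poll a job-status callable until the background job finishes.

    `poll` is any callable returning a dict with a `state` key, e.g.
    `client.get_job_status`. Raises TimeoutError if the job does not
    reach SUCCESS or FAILURE within `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = poll(task_id)
        if status.get("state") in ("SUCCESS", "FAILURE"):
            return status
        time.sleep(interval)
    raise TimeoutError(f"job {task_id} still running after {timeout}s")
```

Typical usage: `job = client.trigger_ingest(repo["id"])` followed by `wait_for_job(client.get_job_status, job["task_id"])`.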
### Reports

- `get_reports(repo_id)` — Return the cached narrative and product narrative (sprint report field) in one call. Returns `{ narrative, sprint_report, sprint_days }`. Empty if ingest has not run.
- `sprint_report(repo_id, days=14)` — Generate the product narrative (Sprint Translation Report). Returns `{ markdown, report }`.
- `narrative(repo_id)` — Generate the continuity narrative. Returns `{ markdown, doc }`.
### Context Documents

- `upload_context(repo_id, file_path, doc_type=None)` — Upload a context document (PRD, sprint goals, etc.). Accepts PDF, Markdown (`.md`, `.markdown`), plain text (`.txt`), RST (`.rst`), JSON, and CSV. Max 10 MB. Returns `{ id, filename, doc_type, char_count, created_at }`.
- `list_context(repo_id)` — List uploaded context docs.
- `delete_context(repo_id, doc_id)` — Deactivate a context document. Returns `{ status, id }`.
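When batch-uploading context docs, it can help to derive `doc_type` from the file extension. This is an illustrative sketch: the `doc_type` string values and the `EXTENSION_DOC_TYPES` mapping are invented here, and the server may infer the type itself when `doc_type` is omitted:

```python
from pathlib import Path

# Hypothetical extension -> doc_type mapping; adjust to whatever
# doc_type values your server actually recognizes.
EXTENSION_DOC_TYPES = {
    ".pdf": "pdf",
    ".md": "markdown",
    ".markdown": "markdown",
    ".txt": "text",
    ".rst": "rst",
    ".json": "json",
    ".csv": "csv",
}

def guess_doc_type(file_path):
    """Return a doc_type guess from the extension, or None so the
    server decides."""
    return EXTENSION_DOC_TYPES.get(Path(file_path).suffix.lower())
```

Usage: `client.upload_context(repo_id, path, doc_type=guess_doc_type(path))`.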
## Ingestion Workflows

- `sprint_report` / `generate_report` — Run the full pipeline in one call: fetch PRs, ingest, build knowledge, generate the product narrative. No prior ingestion needed.
- `trigger_ingest` + `get_job_status` — Use when you want to ingest first (e.g. for `narrative` or repeated reports). Poll until `state` is `SUCCESS` or `FAILURE`.
- `get_reports` — After ingest completes, returns the cached narrative and product narrative in one call. Fastest way to fetch both.
- `narrative` — Uses the knowledge base. Best results after ingestion or after a `sprint_report` run.
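The ingest-first workflow can be sketched end to end. A minimal sketch assuming the method names listed above and a `state` field in the job-status payload; `ingest_then_fetch` is a hypothetical wrapper, not part of the SDK:

```python
import time

def ingest_then_fetch(client, repo_id, poll_interval=5, limit=50):
    """Trigger ingestion, wait for the background job to finish, then
    fetch the cached narrative and product narrative via get_reports."""
    job = client.trigger_ingest(repo_id, limit=limit)
    while True:
        status = client.get_job_status(job["task_id"])
        if status.get("state") in ("SUCCESS", "FAILURE"):
            break
        time.sleep(poll_interval)
    if status["state"] == "FAILURE":
        raise RuntimeError(f"ingest failed for repo {repo_id}: {status}")
    return client.get_reports(repo_id)
```

Because it takes the client as a parameter, the same function works against a test double in CI.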
## Demo vs SDK
The public demo at thread-ly.com uses unauthenticated endpoints (rate-limited by IP). The SDK uses API keys for higher limits and programmatic access to connected repos.
**Report tone & pipeline.** Product narratives and continuity narratives use hedged language grounded in merged PRs (what changed vs. likely implications vs. what to verify). The SDK runs the full pipeline, knowledge build plus PM rewrite, with the same prompts as the homepage examples. On the free tier, each product narrative uses a 14-day window and up to 150 merged PRs per run (server cap). Try Live (Generate Product Narrative) uses the same PM rewrite and effort distribution but skips the knowledge build and clamps to at most 5 days and 10 PRs for speed.
## Product narrative JSON (`report`)

The `sprint_report` response includes structured data alongside the markdown. Product narratives include a change risk signals section (path and diff heuristics, not a dependency graph), grouped as sensitive surface, integration and coordination, breadth of change, and release and deployment. Each signal has `category`, `severity` (`warning` renders as `[Warning]` in Markdown; `info` as `[Review]`), and `message`. The JSON field is `change_risk_signals`; `blast_radius` is a deprecated mirror of the same list, kept for older integrations.
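To illustrate the shape, here is a sketch that renders signals with the documented severity tags. The sample payload is invented for illustration; only the field names (`change_risk_signals`, `category`, `severity`, `message`) come from the docs above:

```python
def format_signal(signal):
    """Render one change-risk signal the way the Markdown report does:
    severity 'warning' -> [Warning], anything else -> [Review]."""
    tag = "[Warning]" if signal["severity"] == "warning" else "[Review]"
    return f"{tag} {signal['category']}: {signal['message']}"

# Invented sample payload matching the documented field names.
report = {
    "change_risk_signals": [
        {"category": "sensitive surface", "severity": "warning",
         "message": "Touched auth middleware"},
        {"category": "breadth of change", "severity": "info",
         "message": "27 files changed across 4 packages"},
    ],
}

for signal in report["change_risk_signals"]:
    print(format_signal(signal))
```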
## Free Tier Limits

API keys start on the free tier:

- **5 reports per month** — Only counted when you actually generate a product narrative (cache hits don't count).
- **Time window** — Sprint (14 days) only. Use `preview=1` with `days=90` for a sampled quarterly preview.
- **PR coverage** — Up to 150 PRs per report, no silent truncation.
- **General API** — 20 requests per hour for list/get/cache reads.
- **Ingest** — 10 per day.

Monthly, quarterly, half-year, and yearly reports require an upgrade. When a limit is exceeded, the API returns 429 (rate limit) or 402 (paywall).
## Time Windows & Quarterly Preview

On the free tier, `days=14` (sprint) is the only window for full reports. For a taste of longer horizons, `sprint_report(repo_id, days=90, preview=True)` returns a sampled quarterly preview (50 PRs sampled across the quarter). The report is clearly labeled as sampled.
## Errors

The SDK raises `requests.HTTPError` on non-2xx responses. Common codes: 401 (invalid or missing API key), 402 (tier restriction / paywall), 404 (repo or document not found), 429 (rate limit exceeded).
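A minimal sketch of mapping those status codes to human-readable hints; `describe_http_error` and the hint strings are illustrative, not part of the SDK:

```python
import requests

# Short hints for the status codes documented above.
STATUS_HINTS = {
    401: "invalid or missing API key",
    402: "tier restriction / paywall",
    404: "repo or document not found",
    429: "rate limit exceeded; back off and retry",
}

def describe_http_error(err: requests.HTTPError) -> str:
    """Turn a requests.HTTPError into a one-line diagnostic."""
    code = err.response.status_code
    return f"HTTP {code}: {STATUS_HINTS.get(code, 'unexpected error')}"
```

Wrap SDK calls in `try: ... except requests.HTTPError as e: print(describe_http_error(e))` to surface these hints in scripts and CI logs.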
## Configuration

| Variable | Default | Description |
|---|---|---|
| `THREADLY_URL` | `http://localhost:8000` | API base URL |
| `THREADLY_API_KEY` | (none) | API key (`tly_...`) for authenticated endpoints |
Self-hosting the API? Ingest speed and GitHub usage are controlled on the server (`GITHUB_TOKEN`, `SPRINT_*`, etc.), not by these SDK env vars. See the Thread-ly backend repo README section "Sprint ingest tuning (self-host)" and `.env.example`.
## Example Workflow

```python
from threadly import ThreadlyClient

client = ThreadlyClient(api_key="tly_...")

# Connect repo
repo = client.connect_repo("acme", "billing-service")

# Upload sprint context
client.upload_context(repo["id"], "sprint_goals.md")

# Generate report
report = client.sprint_report(repo["id"], days=14)
print(report["markdown"])
```