MCP Server
Scored via MCP protocol probing: initialize handshake, tools/list conformance, and ping + tool invocation performance.
threadline
Persistent memory and context layer for AI agents: call inject() before your LLM call and update() after. Relevance-scored injection, grant-based access control, and user-owned context. Works with OpenAI, Anthropic, the Vercel AI SDK, and LangChain. Sub-50 ms retrieval. GDPR-ready. Free tier: 2,500 calls/month.
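The inject-before / update-after pattern the listing describes can be sketched as follows. Only the inject() and update() names come from the listing; the signatures, the relevance parameter, and the in-memory store are illustrative assumptions, not threadline's actual API.

```python
class MemoryLayer:
    """Hypothetical stand-in for a persistent context layer.
    Illustrates the pattern only; not threadline's real client."""

    def __init__(self):
        self._facts = []  # (text, relevance) pairs

    def inject(self, prompt, top_k=3):
        """Prepend the top-k most relevant stored facts to the prompt."""
        scored = sorted(self._facts, key=lambda f: f[1], reverse=True)[:top_k]
        context = "\n".join(text for text, _ in scored)
        return f"{context}\n\n{prompt}" if context else prompt

    def update(self, fact, relevance=1.0):
        """Record new context after the LLM call completes."""
        self._facts.append((fact, relevance))


mem = MemoryLayer()
mem.update("User prefers metric units", relevance=0.9)
prompt = mem.inject("How far is 5 miles?")
```

In this sketch, relevance scoring is a simple sort; a real layer would score stored context against the incoming prompt (e.g. via embeddings) before injecting.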
57/100
Operational Score
Score Breakdown
Availability 30/30
Conformance 10/30
Performance 17/40
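The three sub-scores above sum directly to the 57/100 operational score; a minimal sketch of that arithmetic, using the maximum weights shown (30/30/40):

```python
# Sub-scores and their maximums, as shown in the breakdown above
breakdown = {
    "availability": (30, 30),  # (earned, max)
    "conformance": (10, 30),
    "performance": (17, 40),
}

earned = sum(e for e, _ in breakdown.values())   # 30 + 10 + 17 = 57
maximum = sum(m for _, m in breakdown.values())  # 30 + 30 + 40 = 100
```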
Key Metrics
Uptime (30d)
100.0%
P95 Latency
305.1ms
Conformance
Fail
Trend
Stable
What's Being Tested
Availability
HTTP health check to the service endpoint
Responded with HTTP 401 in 213ms
Conformance
Not tested
MCP initialize handshake + tools/list
Performance
MCP ping + zero-arg tool invocation benchmarking
P95 latency: 305ms, task completion: 0%
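The conformance probe described above (initialize handshake plus tools/list) can be sketched as JSON-RPC 2.0 message construction and a shape check on the responses. The message shapes follow the MCP specification; the protocol version string, client name, and the canned responses at the bottom are illustrative assumptions standing in for a live server.

```python
def initialize_request(req_id=1):
    # JSON-RPC 2.0 initialize request per the MCP handshake
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # assumed version for the sketch
            "capabilities": {},
            "clientInfo": {"name": "probe", "version": "0.1"},
        },
    }


def tools_list_request(req_id=2):
    return {"jsonrpc": "2.0", "id": req_id, "method": "tools/list", "params": {}}


def conforms(init_resp, tools_resp):
    """Minimal conformance check: initialize must return a protocolVersion
    and serverInfo; tools/list must return a list of named tools."""
    try:
        ok_init = ("protocolVersion" in init_resp["result"]
                   and "serverInfo" in init_resp["result"])
        tools = tools_resp["result"]["tools"]
        ok_tools = isinstance(tools, list) and all("name" in t for t in tools)
        return ok_init and ok_tools
    except (KeyError, TypeError):
        return False


# Canned responses standing in for a live server (illustrative only)
init_resp = {"jsonrpc": "2.0", "id": 1,
             "result": {"protocolVersion": "2025-03-26",
                        "capabilities": {"tools": {}},
                        "serverInfo": {"name": "example", "version": "1.0"}}}
tools_resp = {"jsonrpc": "2.0", "id": 2,
              "result": {"tools": [{"name": "inject",
                                    "inputSchema": {"type": "object"}}]}}
passed = conforms(init_resp, tools_resp)
```

A real probe would send these requests over the server's transport (stdio or streamable HTTP), then time repeated ping and zero-arg tool calls to produce the latency percentiles reported above.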
Recent Probe Results
| Timestamp | Status | Latency | Conformance |
|---|---|---|---|
| Apr 10, 2026 | success | 213.4ms | Pass |
| Apr 10, 2026 | success | 332.4ms | Pass |
| Apr 10, 2026 | success | 305.1ms | Pass |
| Apr 10, 2026 | success | 216.7ms | Pass |
| Apr 10, 2026 | success | 185.2ms | Pass |
Source Registries
smithery
First Seen
Apr 10, 2026
Last Seen
Apr 10, 2026
Last Probed
Apr 10, 2026