MCP Server. Scored via MCP protocol probing: initialize handshake, tools/list conformance, and ping + tool-invocation performance.

QueueSim

Run M/M/c queue simulations, including four preset scenarios (call center, ER, coffee shop, single server).

Operational Score: 85/100
Score Breakdown
Availability: 30/30
Conformance: 30/30
Performance: 25/40
Key Metrics
Uptime (30d): 100.0%
P95 Latency: 520.2ms
Conformance: Pass
Trend: Stable
What's Being Tested
Availability
HTTP health check to the service endpoint
Responded with HTTP 405 in 156ms
Conformance
MCP initialize handshake + tools/list
Valid MCP server info returned, tools/list responded
Performance
MCP ping + zero-arg tool invocation benchmarking
P95 latency: 520ms, task completion: 100%
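The conformance and performance probes above follow the standard MCP JSON-RPC sequence: initialize, then tools/list, then ping. A minimal sketch of the request payloads (the protocol version is taken from this server's probe record; the client name is a placeholder, since the probe's actual client is not documented here):

```python
import json

# Sketch of the MCP probe sequence: initialize handshake, tools/list, ping.
# Payload shapes follow the MCP JSON-RPC spec; "probe-client" is hypothetical.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",
        "capabilities": {},
        "clientInfo": {"name": "probe-client", "version": "0.1.0"},
    },
}
tools_list_request = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}
ping_request = {"jsonrpc": "2.0", "id": 3, "method": "ping"}

# Serialized as they would go over the wire
wire = [json.dumps(r) for r in (initialize_request, tools_list_request, ping_request)]
```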
Skills
simulate_mmc

Run a generic M/M/c queue simulation. Provide an arrival rate (λ, arrivals/hour), a service rate per server (μ, customers/hour each server can finish), and a server count (c). Optional: distribution shapes, service coefficient of variation, run length. Returns per-hour metrics and an overall summary (avg wait, queue length, offered load, throughput). This is the primary tool for 'how many servers do I need?' / 'what's my average wait?' style questions. ALSO preferred over simulate_scenario for what-if questions about scheduled scenarios (Coffee Shop, ER) when the user wants flat uniform numbers — pull the peak params from describe_scenario and run them here. That usually matches user intent better than collapsing a schedule.
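For a sanity check on what the tool returns, the steady-state M/M/c average wait has a closed form (Erlang C). A sketch, with illustrative inputs; the tool itself simulates, so a long run should converge toward this value rather than match it exactly:

```python
import math

def erlang_c_wait(lam: float, mu: float, c: int) -> float:
    """Steady-state M/M/c average wait in queue (hours), via the Erlang-C formula."""
    rho = lam / (c * mu)              # per-server utilization
    if rho >= 1:
        return math.inf               # unstable: the queue grows without bound
    a = lam / mu                      # offered load in Erlangs
    top = a**c / math.factorial(c) * (1 / (1 - rho))
    bottom = sum(a**k / math.factorial(k) for k in range(c)) + top
    p_wait = top / bottom             # Erlang-C probability an arrival must wait
    return p_wait / (c * mu - lam)    # expected wait in queue, Wq

# Example: 30 arrivals/hour, each server finishes 10/hour, 4 servers
wq_hours = erlang_c_wait(30, 10, 4)   # roughly 0.05 h, about 3 minutes
```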

list_scenarios

List the four pre-built QueueSim scenarios. Returns key, title, and one-line description for each (Single Server, Coffee Shop, ER Waiting Room, Call Center). Call this when the user's problem matches one of the preset shapes — use describe_scenario for more detail and simulate_scenario to run one.

describe_scenario

Return full details for one preset scenario: title, description, teaching note, peak parameters, and per-hour arrival + staffing arrays. Use this before simulate_scenario to understand the default shape and what overrides make sense.

simulate_scenario

Run one of the four preset scenarios (single, coffee, er, callcenter) with optional overrides. Overrides apply UNIFORMLY across open hours — e.g. setting servers=5 on 'coffee' replaces the 4/6/4 staffing pattern with a flat 5 during open hours (closed hours stay at zero). Use this for (a) faithful reproduction of a scenario's defaults, or (b) uniform scaling (everywhere it was open, use these new numbers). Do NOT use this when the user wants to keep a scheduled scenario's shape but tweak just one part — there's no per-hour override here, and collapsing a 4/6/4 pattern to 5 often isn't what the user meant. For flat what-if analysis on scheduled scenarios, prefer simulate_mmc using peak params from describe_scenario.
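The uniform-override rule above can be sketched in a few lines. The hour-by-hour staffing array here is illustrative only (not the coffee scenario's real schedule), but it shows why a flat override loses the mid-day peak:

```python
# Sketch of the uniform-override semantics: a flat servers=N replaces staffing
# in every open hour; closed hours (zero staffing) stay at zero.
def apply_uniform_override(staffing_by_hour: list[int], servers: int) -> list[int]:
    return [servers if s > 0 else 0 for s in staffing_by_hour]

coffee_staffing = [0, 0, 4, 4, 6, 6, 6, 4, 4, 0]   # hypothetical 4/6/4 pattern
flattened = apply_uniform_override(coffee_staffing, 5)
# Every open hour becomes 5; the peak of 6 is gone -- which is why simulate_mmc
# with peak parameters is often the better what-if tool.
```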

explain_queueing_theory

Return a ~500-word educational explainer of M/M/c queueing theory: Little's Law, utilization, why averages mislead, how simulation relates to Erlang-C. No inputs. Use this when the user asks a conceptual 'why' or 'how does this work' question rather than asking for a number.
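Little's Law, one of the identities the explainer covers, is simple enough to show as a worked example (numbers are illustrative):

```python
# Little's Law: L = lambda * W. The average number in the system equals the
# arrival rate times the average time each customer spends in the system.
arrival_rate = 30.0          # customers/hour
avg_time_in_system = 0.2     # hours (12 minutes)
avg_in_system = arrival_rate * avg_time_in_system   # L = 6 customers on average
```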

explain_advanced_patterns

Return a textbook-level description of six queueing complexity patterns beyond basic M/M/c: abandonment/reneging, priority tiers, overflow routing, skills-based routing, compound service, and server outages. Use this when the user describes real-world complexity (customers hanging up, VIP queues, specialist escalation, agent breaks, transfers) that plain M/M/c doesn't model. The tool frames each pattern conceptually and points users at ChiAha for custom modeling.

recommend_staffing

INVERSE of simulate_mmc — given an arrival rate, service rate, and a target average wait time, returns the SMALLEST number of servers needed to meet the target. Use this when the user asks 'how many servers do I need?' / 'what staffing keeps wait under N minutes?'. The tool runs a binary search over candidate server counts (up to maxServers, default 50), invoking the simulator for each candidate. Saves Claude from iterating simulate_mmc 3-5 times by hand. If even maxServers servers can't meet the target, the recommendation is null and the response includes the achieved wait so Claude can explain that the target is infeasible at the given load.
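The search described above works because average wait decreases monotonically as servers are added. A sketch of the same inverse search, using a closed-form Erlang-C wait as a stand-in for the simulator (the tool's internals may differ):

```python
import math

def avg_wait(lam: float, mu: float, c: int) -> float:
    """Erlang-C average queue wait -- a closed-form stand-in for the simulator."""
    if lam >= c * mu:
        return math.inf
    a = lam / mu
    top = a**c / math.factorial(c) * (c * mu) / (c * mu - lam)
    bottom = sum(a**k / math.factorial(k) for k in range(c)) + top
    return (top / bottom) / (c * mu - lam)

def smallest_servers(lam: float, mu: float, target: float, max_servers: int = 50):
    """Binary search for the smallest c meeting the target (wait is monotone in c)."""
    if avg_wait(lam, mu, max_servers) > target:
        return None                       # infeasible even at max_servers
    lo, hi = 1, max_servers
    while lo < hi:
        mid = (lo + hi) // 2
        if avg_wait(lam, mu, mid) <= target:
            hi = mid
        else:
            lo = mid + 1
    return lo

# 30 arrivals/hour, 10/hour service, target wait under 3 minutes (0.05 h)
needed = smallest_servers(30, 10, 0.05)
```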

compare_scenarios

Run two M/M/c configurations and return their summaries side-by-side with a delta object. Use this for clean before/after comparisons — 'what does adding 1 server do?' / 'how does the wait change if service speeds up?'. Eliminates the LLM-side pattern of calling simulate_mmc twice and computing the delta inline; one call returns both runs and the deltas already calculated. Provide scenarioA and scenarioB as MMC inputs (same shape as simulate_mmc); optionally include human labels for each so the response echoes them back.


interpret_result

Given an M/M/c configuration (arrivalRate, serviceRate, servers) and optionally an observed average wait, returns a queueing-theory framed interpretation: where you sit on the utilization curve, what ρ means in plain language, what one more or fewer server would qualitatively do, and which complexity factors (priority, abandonment, skills routing) might be hiding in real data the M/M/c model can't see. Use this to TEACH while answering — when the user wants context around a number, not just the number itself. Pure text computation, no simulation, no RNG — deterministic output.
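The utilization figure this tool frames is just ρ = λ/(cμ); a minimal sketch using the parameter names listed above:

```python
# Utilization rho = lambda / (c * mu), the quantity interpret_result puts in
# plain language: waits grow nonlinearly as rho approaches 1, and the system
# is unstable at rho >= 1.
def utilization(arrival_rate: float, service_rate: float, servers: int) -> float:
    return arrival_rate / (servers * service_rate)

rho = utilization(30, 10, 4)   # 0.75: stable, but one fewer server hits 1.0
```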

Tools
9 tools verified via live probe
verified 2d ago
Server: queuesim-v1 | Version: 1.0.1 | Protocol: 2025-06-18
Recent Probe Results
Timestamp    | Status  | Latency | Conformance
Apr 28, 2026 | success | 156.3ms | Pass
Apr 28, 2026 | success | 520.2ms | Pass
Apr 27, 2026 | success | 174.7ms | Pass
Source Registries
mcp-registry
First Seen: Apr 26, 2026
Last Seen: Apr 27, 2026
Last Probed: Apr 28, 2026