Evidence first
Every claim we make about a paper or a stack is traceable: if you cannot quote it, you cannot say it. Our pipeline enforces this check on auto-published critiques.
We exist because too much LLM and graph-AI work ships without anyone asking the boring evaluation questions. We are small on purpose; depth beats coverage.
When we are wrong, the retraction is on the same page as the original entry, with strikethrough and a note. The retraction log is open.
The deliverable is usually code or data, not a slideshow. Our open tools (toslop, GraphSlop, the tracker) are how we keep ourselves honest.
The research pipeline runs against a local model. We pay the cost of running our own inference because it changes what we can publish openly.
If you are evaluating us for an engagement, the most efficient sequence: