Built for developers who need reliable, auditable, and fast sentiment analysis — not black-box scores.
AFINN for speed, Claude for accuracy, plus multiple Ollama cloud models. Pick one engine or compare them all.
Finnhub, AlphaVantage, Google News, Yahoo. Merge and deduplicate with source=all.
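A minimal sketch of what source=all aggregation could look like. The Article shape and the dedup rule (normalized URL, falling back to title) are assumptions for illustration, not the service's actual implementation.

```typescript
// Hypothetical article shape -- field names are assumed, not the real API.
interface Article {
  source: "finnhub" | "alphavantage" | "googlenews" | "yahoo";
  url: string;
  title: string;
}

// Merge all feeds and drop duplicates. Keyed on a normalized URL
// (lowercased, trailing slashes stripped), falling back to the title.
function mergeFeeds(...feeds: Article[][]): Article[] {
  const seen = new Set<string>();
  const merged: Article[] = [];
  for (const article of feeds.flat()) {
    const key = (article.url || article.title).toLowerCase().replace(/\/+$/, "");
    if (!seen.has(key)) {
      seen.add(key);
      merged.push(article);
    }
  }
  return merged;
}
```

The same story syndicated to two providers collapses to one entry, so downstream engines never score it twice.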
L1 memory + L2 Supabase. Same article never re-scored. Cached responses under 100ms.
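The two-tier lookup order can be sketched as a read-through cache. The storage callbacks stand in for Supabase; only the L1-then-L2-then-compute flow is the point, the rest is assumed.

```typescript
// Read-through cache sketch: L1 in-memory map, L2 persistent store.
type L2Get = (key: string) => Promise<string | null>;
type L2Put = (key: string, value: string) => Promise<void>;

class TwoTierCache {
  private l1 = new Map<string, string>(); // L1: in-process, fastest

  constructor(private l2Get: L2Get, private l2Put: L2Put) {}

  async get(key: string, compute: () => Promise<string>): Promise<string> {
    const hit = this.l1.get(key);
    if (hit !== undefined) return hit;     // L1 hit: no I/O at all
    const stored = await this.l2Get(key);  // L2 hit: one database round-trip
    if (stored !== null) {
      this.l1.set(key, stored);            // warm L1 for the next request
      return stored;
    }
    const fresh = await compute();         // miss: score the article once
    this.l1.set(key, fresh);
    await this.l2Put(key, fresh);          // persist so it is never re-scored
    return fresh;
  }
}
```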
Simple GET/POST endpoints. Full Swagger docs. Ready for RapidAPI distribution.
Every score is auditable. Every engine follows the same calibration standard. No black boxes.
Not just Positive/Negative. Every engine maps to the same unified scale: Very Positive → Very Negative. Compare Claude vs Ollama vs AFINN apples-to-apples.
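One way a numeric score could map onto the five-tier scale. The cut-points below are illustrative assumptions, not the service's actual calibration bands.

```typescript
type Tier = "Very Negative" | "Negative" | "Neutral" | "Positive" | "Very Positive";

// Map a score in [-1, 1] to the unified five-tier scale.
// Thresholds are assumed for illustration.
function toTier(score: number): Tier {
  if (score <= -0.6) return "Very Negative";
  if (score <= -0.2) return "Negative";
  if (score < 0.2) return "Neutral";
  if (score < 0.6) return "Positive";
  return "Very Positive";
}
```

Because every engine is projected through the same function, an AFINN tier and a Claude tier mean the same thing.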
LLMs extract facts, assess each one, then synthesize a score. Every result includes a reasoning string explaining WHY — not just a number.
Every analysis includes a 0-1 confidence rating. High confidence = multiple clear signals. Low confidence = conflicting or limited data. Know when to trust a result.
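Put together, a result might look like the shape below. The field names are hypothetical, inferred from the features described (tier, score, reasoning string, 0-1 confidence); the helper shows one way to gate decisions on confidence.

```typescript
// Hypothetical response shape -- field names are illustrative assumptions.
interface SentimentResult {
  tier: string;        // unified scale, "Very Negative" .. "Very Positive"
  score: number;       // numeric score backing the tier
  reasoning: string;   // WHY the engine scored it this way
  confidence: number;  // 0-1: how much to trust this result
}

// Only act on results whose confidence clears a caller-chosen threshold.
function trustworthy(result: SentimentResult, minConfidence = 0.7): boolean {
  return result.confidence >= minConfidence;
}
```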
Outputs are range-checked and tier-validated. If a model says "Very Positive" but outputs +0.35, we catch the miscalibration before it reaches you.
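A tier-validation check of this kind can be sketched as a band lookup. The ranges are illustrative assumptions; the real calibration bands may differ, but the mechanism is the same: a tier whose score falls outside its band is flagged.

```typescript
// Illustrative tier bands (assumed, not the service's exact calibration).
const TIER_RANGES: Record<string, [number, number]> = {
  "Very Negative": [-1.0, -0.6],
  "Negative":      [-0.6, -0.2],
  "Neutral":       [-0.2,  0.2],
  "Positive":      [ 0.2,  0.6],
  "Very Positive": [ 0.6,  1.0],
};

// Reject any output whose numeric score contradicts its claimed tier.
function isCalibrated(tier: string, score: number): boolean {
  const range = TIER_RANGES[tier];
  if (!range) return false;            // unknown tier label: reject outright
  const [lo, hi] = range;
  return score >= lo && score <= hi;   // score must sit inside the tier's band
}
```

Under these bands, the miscalibration from the blurb above ("Very Positive" with a score of +0.35) is caught before it reaches the caller.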
Polarity is just the start. We also score subjectivity, urgency, credibility, and market impact — because a Bloomberg breaking story is not the same as a blog rumor.
Same article + same engine = same result, always. We store every analysis in our database. If it's been scored before, we serve the cached result — no wasted tokens, no drift.
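Determinism falls out of the cache key: hashing the article text together with the engine name yields the same key every time, so a prior analysis is served verbatim. The key format here is an assumption for illustration.

```typescript
import { createHash } from "node:crypto";

// Deterministic cache key: identical article + engine always hash to the
// same key, so the stored result is returned instead of re-scoring.
// Key layout (engine + newline + text) is an illustrative assumption.
function cacheKey(articleText: string, engine: string): string {
  return createHash("sha256")
    .update(engine + "\n" + articleText)
    .digest("hex");
}
```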