Introduction
Welcome to Whirlagi
Access all LLMs through the standard OpenAI API format and cut token costs by half
We handle all the heavy lifting behind the scenes to serve you personalized inference:
--Support viewing token quota details
--Support restricting which models a token may call
--Support querying usage quotas by key
--Support billing by model
--Support restricting access to designated organizations
--Support model mapping to redirect requested models
--Support calling management APIs with a system access token
--Support token management
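The model-mapping feature above redirects a model name in an incoming request to the backend model that should actually serve it. A minimal sketch of how such a mapping might behave (the table and model names below are hypothetical illustrations, not Whirlagi's actual configuration):

```python
# Hypothetical mapping table: requested model name -> backend model.
# In practice this mapping is configured on the gateway, not in client code.
MODEL_MAP = {
    "gpt-4o": "gpt-4o-2024-08-06",
    "claude": "claude-3-5-sonnet",
}


def resolve_model(requested: str) -> str:
    """Redirect a requested model name; pass unknown names through unchanged."""
    return MODEL_MAP.get(requested, requested)
```

Passing unmapped names through unchanged means clients that already use exact backend model names keep working without any mapping entry.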
A full, production-ready LLM pipeline behind a single line of code.
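Because the gateway speaks the standard OpenAI API format, any OpenAI-compatible client works by pointing it at the gateway's base URL with your key. A sketch of the request such a client assembles (the base URL and key shown in the usage note are placeholders, not real endpoints or credentials):

```python
import json


def build_chat_request(base_url: str, api_key: str, model: str, prompt: str) -> dict:
    """Assemble a standard OpenAI-format chat-completions request.

    Returns the URL, headers, and JSON body a client would POST.
    base_url and api_key are whatever the gateway issues you.
    """
    return {
        "url": f"{base_url}/v1/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }
```

For example, `build_chat_request("https://example.invalid", "sk-placeholder", "gpt-4o", "Hello")` yields exactly the shape the OpenAI SDK sends, which is why existing OpenAI client libraries work against the gateway unmodified.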