Strategic Round II: Complete


    CHAI: Chat + AI

    Quant traders building an AI platform
    Palo Alto, CA

    [ Daily Active Users Growth ]

    Incentives & Scale

    RESEARCH

    All platforms work best with the right incentives. At CHAI, we've tried paying developers, but the biggest motivators remain high-quality feedback, recognition, and the satisfaction of building a popular LLM. Our scale enables the critical mass of feedback and models needed to create strong feedback loops.

    [ Graph: CHAI daily active user growth, Oct 2022 to Apr 2025 ]
    NOV 2022

    CHAI Launches on App Store

    We were the first to launch a consumer AI platform, allowing users to create their own ChatAIs—ahead of Character AI and ChatGPT.

    FEB 2023

    Deploys First In-House 6B LLM

    Open-source LLMs no longer satisfied our users' requirements, as the models needed to be adapted for social and engagement use cases. We saw a +10% engagement boost from our own in-house model.

    MAR 2023

    Deploys Best-of-4 Reward Model

    We continued to iterate on RLHF (Reinforcement Learning from Human Feedback), training a reward model directly on user signals. This led to a huge boost in our day 30 user retention.
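
    For context, a minimal sketch of how best-of-N sampling with a reward model typically works (the checkpoint names and scoring code below are illustrative placeholders, not CHAI's internal API): the chat model proposes several candidate replies, and a reward model trained on user signals picks the highest-scoring one to send.

    import torch
    from transformers import (AutoModelForCausalLM,
                              AutoModelForSequenceClassification, AutoTokenizer)

    # Placeholder checkpoints; CHAI's in-house models are not public.
    chat_model = AutoModelForCausalLM.from_pretrained("chat-llm")
    reward_model = AutoModelForSequenceClassification.from_pretrained("reward-model", num_labels=1)
    tokenizer = AutoTokenizer.from_pretrained("chat-llm")

    def best_of_n_reply(context: str, n: int = 4) -> str:
        """Sample n candidate replies, score each with the reward model, return the best."""
        inputs = tokenizer(context, return_tensors="pt")
        outputs = chat_model.generate(
            **inputs, do_sample=True, top_p=0.9,
            max_new_tokens=64, num_return_sequences=n,
        )
        prompt_len = inputs.input_ids.shape[1]
        candidates = [tokenizer.decode(o[prompt_len:], skip_special_tokens=True) for o in outputs]

        # Rank candidates by reward-model score and keep the highest.
        scores = []
        for reply in candidates:
            rm_inputs = tokenizer(context + reply, return_tensors="pt", truncation=True)
            with torch.no_grad():
                scores.append(reward_model(**rm_inputs).logits.squeeze().item())
        return candidates[scores.index(max(scores))]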

    APR 2023

    Larger Model Upgrade - 13B Architecture

    We found that a bigger model leads to greater conversational depth, and therefore better retention. We re-trained our LLM from scratch and saw another +10% engagement boost.

    MAY 2023

    PPO Model Deployed

    Using Proximal Policy Optimization, a reinforcement learning technique, we optimized our base foundation model to decrease the probability that a chat session ends.
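
    As a rough sketch of the underlying idea (not CHAI's implementation), PPO updates the policy with a clipped surrogate objective; the advantages here would be derived from a conversation-level signal such as whether the user keeps chatting.

    import torch

    def ppo_clipped_loss(logprobs_new, logprobs_old, advantages, clip_eps=0.2):
        """Standard PPO clipped surrogate loss over a batch of response tokens.

        advantages: e.g. positive when the user continues the session,
        negative when the session ends shortly after the reply."""
        ratio = torch.exp(logprobs_new - logprobs_old)   # pi_new / pi_old
        unclipped = ratio * advantages
        clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
        # Maximizing the clipped surrogate == minimizing its negative mean.
        return -torch.min(unclipped, clipped).mean()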

    JUNE 2023

    Deploys Reward Model XL

    We continued to scale up our reward model, training it on 100 million user signals to decrease retry rate and increase chat session length.
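
    Purely as a toy illustration of the idea (the architecture, feature shapes, and labels below are made up for the sketch): each (context, reply) pair gets a binary label from implicit user behaviour, and a scalar-output model is trained so that replies users retry or abandon score lower.

    import torch
    import torch.nn as nn

    class RewardHead(nn.Module):
        """Tiny stand-in for a transformer-based reward model:
        maps a pooled text embedding to a single scalar score."""
        def __init__(self, hidden_size: int = 768):
            super().__init__()
            self.score = nn.Linear(hidden_size, 1)

        def forward(self, pooled_embedding):
            return self.score(pooled_embedding).squeeze(-1)

    reward_model = RewardHead()
    optimizer = torch.optim.AdamW(reward_model.parameters(), lr=1e-5)
    loss_fn = nn.BCEWithLogitsLoss()

    # One training step on a batch of implicit user signals.
    # embeddings: pooled (context, reply) representations, shape [batch, hidden]
    # labels: 1.0 if the user kept chatting, 0.0 if they retried or left.
    embeddings = torch.randn(32, 768)
    labels = torch.randint(0, 2, (32,)).float()

    loss = loss_fn(reward_model(embeddings), labels)
    loss.backward()
    optimizer.step()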

    OCT 2023

    Efficient Inference & Custom GPU Orchestration

    Off-the-shelf load balancing and vLLM were no longer sufficient to support our user base at 500K DAU scale. We implemented custom CUDA kernels together with our own GPU orchestration system.

    NOV 2023

    Increased GPU Reservation

    We hit a scaling issue due to high demand from our users. We reserved an additional 1,000 A100 GPUs from our provider to scale reliably.

    NOV 2023

    Deployed Model Blending

    CHAI invented model blending: ensembling different LLMs, each trained on different targets, at the conversation level. This outperformed GPT-3 on user retention.
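
    Conversation-level blending can be sketched very simply (illustrative only; the blend members and routing policy below are invented, and CHAI's production logic is not public): each turn of a conversation is served by one model drawn from the blend, so users effectively chat with an ensemble.

    import random

    # Hypothetical blend: models trained on different objectives.
    BLEND = ["engagement-tuned-llm", "retention-tuned-llm", "roleplay-tuned-llm"]

    def pick_model_for_turn(conversation_id: str, turn_index: int) -> str:
        """Deterministically pick one blend member per (conversation, turn),
        so the routing decision is reproducible."""
        rng = random.Random(f"{conversation_id}:{turn_index}")
        return rng.choice(BLEND)

    # Successive turns of one conversation may be served by different models.
    for turn in range(4):
        print(turn, pick_model_for_turn("conv-123", turn))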

    DEC 2023

    BO8 Reward Model Deployed

    With increased cluster capacity, we implemented Best-of-8 rejection sampling, utilizing our upgraded reward model to its full extent.

    MAR 2024

    DPO Model Deployed

    Utilizing Direct Preference Optimization with user preference datasets, we boosted engagement by 20%. The gains stacked well with our existing reward model.
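
    For reference, the core of DPO is a single loss over (prompt, chosen reply, rejected reply) triples; a minimal sketch of that loss (not CHAI's training code) is:

    import torch
    import torch.nn.functional as F

    def dpo_loss(policy_chosen_logps, policy_rejected_logps,
                 ref_chosen_logps, ref_rejected_logps, beta=0.1):
        """Direct Preference Optimization loss.

        Each argument is the summed log-probability of the chosen or rejected
        reply under the policy being trained or under the frozen reference model."""
        policy_logratio = policy_chosen_logps - policy_rejected_logps
        ref_logratio = ref_chosen_logps - ref_rejected_logps
        # Push the policy to prefer the chosen reply more strongly than the reference does.
        return -F.logsigmoid(beta * (policy_logratio - ref_logratio)).mean()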

    AUG 2024

    Upgraded All Existing Blends to DPO

    Building on the success of DPO, we iterated on optimization targets and data selection, and successfully deployed DPO across all production blends.

    SEP 2024

    13B Reward Model Deployed

    With increased GPU capacity due to cluster upgrades, we were able to serve larger reward models for all users.

    OCT 2024

    10x 24B Models Deployed

    We upgraded our existing production blend to 24B models. With blending enabled, we saw a surge in daily active users and day 30 retention.

    JAN 2025

    Model Mesh Orchestrator Deployed

    To support over 1M daily active users, we deployed Model Mesh, an in-house cluster orchestration platform that handles multi-cluster, multi-GPU-type serving of hundreds of LLMs in production.

    MAR 2025

    GRPO Deployed

    We deployed GRPO (Group Relative Policy Optimization) as an upgrade from Direct Preference Optimization, resulting in a +15% engagement improvement.
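
    The distinguishing step in GRPO is the group-relative advantage: several replies are sampled per prompt, each is scored (for example by a reward model), and every reply's advantage is its score normalized against its own group, with no value network required. A minimal sketch (illustrative, not CHAI's implementation):

    import torch

    def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
        """rewards: [num_prompts, group_size] scores for the replies sampled per prompt.
        Returns advantages normalized within each group."""
        mean = rewards.mean(dim=-1, keepdim=True)
        std = rewards.std(dim=-1, keepdim=True)
        return (rewards - mean) / (std + eps)

    # Example: 2 prompts, 4 sampled replies each.
    rewards = torch.tensor([[0.1, 0.7, 0.4, 0.9],
                            [0.2, 0.2, 0.3, 0.5]])
    print(group_relative_advantages(rewards))

    These advantages then drive a PPO-style clipped policy update, as in the PPO sketch earlier in this timeline.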

    [ Product ]

    Building a Platform for Social AI

    We believe in platforms. There is huge demand for AI that is not only factually correct but also entertaining and social.

    iOS / Android
    [ GPU Cluster ]

    1.4 EXAFLOPS GPU CLUSTER
    FOR AI INFERENCE

    CLUSTER

    At CHAI, we serve hundreds of in-house trained LLMs across several GPU chip types from both AMD and Nvidia. While open-source solutions such as vLLM work well for simple workloads, we've found that we can further improve upon vLLM by almost an order of magnitude through several optimizations, such as custom kernels and compute-efficient attention approximations.

    NUMBER OF GPUS: 5,000
    NUMBER OF TOKENS SERVED: 1.2T tokens / day
    NUMBER OF UNIQUE LLMS SERVED: 51K LLMs
    CLUSTER COMPUTE PERFORMANCE: >1.4 exaflops
    GPU TYPES: NVIDIA A100, NVIDIA L40S, AMD MI325X, AMD MI300X
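
    A back-of-envelope check on the headline numbers above (rounded, and assuming tokens are spread evenly across the fleet):

    # Rough per-GPU throughput implied by the figures above (illustrative arithmetic only).
    tokens_per_day = 1.2e12
    gpus = 5_000
    seconds_per_day = 86_400

    tokens_per_gpu_per_second = tokens_per_day / gpus / seconds_per_day
    print(f"{tokens_per_gpu_per_second:,.0f} tokens / s / GPU")   # roughly 2,800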

    Current openings

    JOBS

    Who we are

    NEWS