InclusionAI
| inclusionAI | |
|---|---|
| Type | Research division |
| Industry | Artificial intelligence |
| Founded | 2024 |
| Headquarters | Hangzhou, China |
| Key people | He Zhengyu (CTO of Ant Group) |
| Parent | Ant Group |
| Owner | Ant Group |
| Products | Large language models, reinforcement learning systems, AGI frameworks, multimodal models |
| Website | inclusionai.github.io |
inclusionAI is an artificial general intelligence (AGI) research initiative established by Ant Group, focused on developing and open-sourcing advanced artificial intelligence systems.[1] The initiative represents Ant Group's dedicated effort to work towards AGI through the development of Large Language Models (LLMs), Reinforcement Learning (RL) systems, multimodal models, and other AI-related frameworks and applications.[2] inclusionAI describes itself as a hub for open projects from Ant Group's research teams working toward reproducible and community-driven AI systems, with a stated mission to develop a fully open-sourced AI ecosystem.[3]
Overview
inclusionAI operates as the primary vehicle for Ant Group's artificial general intelligence ambitions, maintaining a strong commitment to open-source principles and collaborative development.[4] The organization develops and releases various AI models and tools designed to advance the field of AGI while ensuring accessibility and inclusivity in AI development.[5]
The initiative is guided by principles of fairness, transparency, and collaboration, with a focus on tools for training and evaluating reasoning-oriented LLMs via RL, agent frameworks, and the release of trained model checkpoints when feasible.[1] This aligns with Ant Group's broader "AI First" corporate strategy announced in 2024.[6] Public materials indicate that inclusionAI maintains repositories on GitHub and model artifacts on Hugging Face, and has presented work at venues such as ICLR 2025 Expo.[3]
History
inclusionAI emerged as part of Ant Group's increased focus on artificial intelligence research and development, and became prominently active in 2024-2025 with the release of multiple open-source models and frameworks.[1]
In March 2025, Ant Group announced the open-sourcing of the Ling Mixture of Experts (MoE) Large Language Models under the inclusionAI umbrella, an early milestone for the initiative.[9] This was followed by the release of the Ling-Plus and Ling-Lite models, which demonstrated that large-scale models could be trained on domestically produced Chinese chips from Alibaba and Huawei.[10]
The initiative aligned with Ant Group's "Plan A" recruitment drive, launched in April 2025, which aimed to attract top AI talent and accelerate the company's innovation efforts.[6][7] By May 2025, Ant Group was publicly showcasing its top AI researchers, including He Zhengyu, a PhD graduate of the Georgia Institute of Technology.[8] Public references to inclusionAI as a named project appear in 2025 in connection with an ICLR Expo session highlighting its open RL training stack and agent work.[3]
Further inclusionAI projects appeared on GitHub and Hugging Face through mid-to-late 2025, including the Inclusion Arena leaderboard in August 2025.[11] In September 2025, the organization began open-sourcing Ling 2.0, a series of MoE-architecture LLMs, starting with Ling-mini-2.0.[12] On September 30, 2025, it released Ring-1T-preview, a trillion-parameter reasoning model.[13]
Products and Models
Large Language Models
inclusionAI has developed multiple families of LLMs with a focus on efficiency, reasoning capabilities, and multimodal processing:
| Model Family | Description | Key Features |
|---|---|---|
| Ling Series | Foundation LLMs with MoE architecture | Ling-Plus, Ling-Lite, and Ling 2.0 (starting with Ling-mini-2.0); FP8 mixed-precision training |
| Ring Series | Reasoning-focused LLMs | Ring-1T-preview, a trillion-parameter "thinking" model trained with RLVR |
| Ming Series | Multimodal LLMs | Ming-Omni, with modality-specific routers and speech/image generation |
Ring-1T-preview
Ring-1T-preview is a preview checkpoint of a trillion-parameter "thinking" model released in late September 2025 on Hugging Face.[17] The model uses an MoE architecture and was positioned to enable early community exploration. Trained on 20 trillion tokens, it is aimed at natural language reasoning and scored 92.6% on the AIME 2025 (American Invitational Mathematics Examination) math benchmark.[13] It is optimized for tasks requiring deep thinking and long-horizon planning, such as code generation and complex problem-solving.[13]
The model was fine-tuned using inclusionAI's custom RLVR framework with the icepop method.[13] FP8 variants and community quantizations appeared shortly after on Hugging Face.[18][19] Third-party coverage reported Ring-1T-preview as the first open-source trillion-parameter model.[20]
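Both the Ling and Ring families are described above as MoE models. As a rough, self-contained illustration of the sparse top-k routing idea behind such architectures (a generic sketch, not inclusionAI's actual implementation, whose routing details are not given in the sources here), the following PyTorch snippet routes each token to two of eight experts:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    """Minimal sparse Mixture-of-Experts layer: a learned gate scores all
    experts, and each token is processed only by its top-k experts, with
    outputs mixed by the normalized gate weights."""

    def __init__(self, d_model: int, d_ff: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_tokens, d_model)
        scores = self.gate(x)                       # (n_tokens, n_experts)
        top_w, top_i = scores.topk(self.k, dim=-1)  # (n_tokens, k)
        top_w = F.softmax(top_w, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = top_i[:, slot] == e          # tokens whose slot-th pick is e
                if mask.any():
                    w = top_w[mask, slot].unsqueeze(-1)
                    out[mask] += w * expert(x[mask])
        return out

# 16 tokens flow through the layer; only 2 of 8 expert MLPs run per token.
layer = TopKMoELayer(d_model=64, d_ff=256)
print(layer(torch.randn(16, 64)).shape)  # torch.Size([16, 64])
```

Sparse routing of this kind is what allows a trillion-parameter model to activate only a small fraction of its weights for each token.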
Ming-Omni
Ming-Omni is an advanced open-source multimodal model capable of processing images, text, audio, and video, released in 2025.[16] The model features a comprehensive multimodal processing architecture with MoE design and modality-specific routers.[16] It supports speech and image generation, dialect understanding, voice cloning, context-aware dialogues, text-to-speech, and image editing.[16]
Ming-Omni integrates dedicated encoders for the different modalities and supports a wide range of tasks without additional fine-tuning, including generating natural speech and high-quality images and handling dialect-specific interactions.[16] It has been described as the first open-source model to match GPT-4o's modality support, with all code and weights publicly available.[21]
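The sources describe Ming-Omni's MoE design as using modality-specific routers.[16] The following is a minimal, hypothetical sketch of what routing tokens to per-modality experts can look like; the class and structure are illustrative assumptions, not Ming-Omni's code:

```python
import torch
import torch.nn as nn

class ModalityRoutedBlock(nn.Module):
    """Illustrative modality-specific routing: each token carries a modality
    tag and is dispatched to a dedicated per-modality expert (here a single
    linear layer standing in for a full expert network)."""

    def __init__(self, d_model: int, modalities=("text", "image", "audio", "video")):
        super().__init__()
        self.experts = nn.ModuleDict({m: nn.Linear(d_model, d_model) for m in modalities})

    def forward(self, x: torch.Tensor, tags: list) -> torch.Tensor:
        out = torch.empty_like(x)
        for m, expert in self.experts.items():
            mask = torch.tensor([t == m for t in tags])
            if mask.any():
                out[mask] = expert(x[mask])  # route tokens to their modality expert
        return out

block = ModalityRoutedBlock(d_model=32)
tokens = torch.randn(6, 32)
tags = ["text", "text", "image", "audio", "video", "image"]
print(block(tokens, tags).shape)  # torch.Size([6, 32])
```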
Frameworks and Tools
inclusionAI has developed several frameworks to support AGI research and development:
AReaL (Ant Reasoning RL)
AReaL is an open-source, fully asynchronous reinforcement learning training system designed for large reasoning and agentic models.[22] It decouples rollout generation from training to improve GPU utilization and training stability, and publishes the details (data, infrastructure, and models) intended for full reproducibility.[22][23] Billed on GitHub as "Lightning-Fast RL", the system was developed by the AReaL Team at Ant Group in collaboration with Tsinghua University's Institute for Interdisciplinary Information Sciences.[22]
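The asynchronous design can be pictured as a producer/consumer loop in which rollout workers keep generating with the most recent weights while the learner trains on whatever data has accumulated. The sketch below is a toy, thread-based model of that pattern; the function names are hypothetical stand-ins, not AReaL's API:

```python
import queue
import random
import threading
import time

# Hypothetical stand-ins for LLM rollout generation and a gradient step.
def generate_rollout(policy_version: int) -> dict:
    time.sleep(random.uniform(0.01, 0.05))  # simulate slow autoregressive generation
    return {"version": policy_version, "reward": random.random()}

def train_step(batch: list) -> None:
    time.sleep(0.02)  # simulate one parameter update

rollouts: queue.Queue = queue.Queue(maxsize=64)
policy_version = 0
stop = threading.Event()

def actor_loop() -> None:
    # Actors generate continuously with the latest published weights and
    # never wait for the trainer -- the heart of the asynchronous design.
    while not stop.is_set():
        item = generate_rollout(policy_version)
        try:
            rollouts.put(item, timeout=0.1)
        except queue.Full:
            continue  # retry rather than block on the learner

def learner_loop(steps: int = 10, batch_size: int = 8) -> None:
    global policy_version
    for _ in range(steps):
        batch = [rollouts.get() for _ in range(batch_size)]
        train_step(batch)      # the batch may mix slightly stale policy versions
        policy_version += 1    # publish updated weights to the actors
    stop.set()

actors = [threading.Thread(target=actor_loop) for _ in range(4)]
for t in actors:
    t.start()
learner_loop()
for t in actors:
    t.join()
print("stopped at policy version", policy_version)
```

Because actors never block on the learner, generation hardware stays busy even while parameter updates are in flight, which is the utilization gain the AReaL paper targets.[23]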
ASearcher
ASearcher is an open-source framework for large-scale online RL training of search agents, aiming to advance "Search Intelligence" to expert-level performance.[24] The framework offers guidance to build customized agents, including integration with AReaL.[24]
AWorld
AWorld is a runtime system for building, evaluating and training general multi-agent assistance.[25] The system provides infrastructure for developing collaborative agent systems and testing their performance in various scenarios.[25]
Inclusion Arena
Inclusion Arena is a live leaderboard and open platform for evaluating large foundation models based on real-world, in-production applications, launched in August 2025.[11] The platform bridges AI-powered apps with state-of-the-art LLMs and multimodal LLMs (MLLMs).[11]
Unlike traditional lab-based benchmarks, Inclusion Arena derives its evaluations from production environments, shifting model comparison from synthetic benchmarks to performance metrics observed in deployed applications.[26] The platform was proposed by researchers from inclusionAI and Ant Group, and is live and open, inviting contributions from the AI community.[26][27]
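The cited materials here do not spell out the platform's rating algorithm, so as a labeled assumption, the sketch below shows the Elo-style update that arena-style leaderboards commonly apply to pairwise outcomes; the model names and battle stream are hypothetical:

```python
import math
from collections import defaultdict

def elo_update(ratings: dict, model_a: str, model_b: str, winner: str, k: float = 32.0) -> None:
    """One online Elo update from a pairwise comparison (winner in {'a', 'b'})."""
    ra, rb = ratings[model_a], ratings[model_b]
    expected_a = 1.0 / (1.0 + 10.0 ** ((rb - ra) / 400.0))
    score_a = 1.0 if winner == "a" else 0.0
    ratings[model_a] = ra + k * (score_a - expected_a)
    ratings[model_b] = rb + k * ((1.0 - score_a) - (1.0 - expected_a))

ratings = defaultdict(lambda: 1000.0)
# Hypothetical stream of in-production pairwise outcomes.
battles = [("model-x", "model-y", "a"), ("model-z", "model-x", "a"),
           ("model-y", "model-z", "b")]
for a, b, w in battles:
    elo_update(ratings, a, b, w)

for model, r in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{model:10s} {r:7.1f}")
```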
ABench
ABench is a benchmark suite for evaluating AI models developed by inclusionAI.[1]
Key Repositories and Releases
| Project | Type | First Public Reference/Release | Primary Link |
|---|---|---|---|
| AReaL | RL training system for LLM reasoning | 2025 (paper + repo updates) | GitHub[22] |
| ASearcher | RL system for search agents | 2025 | GitHub[24] |
| AWorld | Multi-agent assistance runtime | 2025 | GitHub[25] |
| Ring-1T-preview | Trillion-parameter model (preview checkpoint) | September 2025 | Hugging Face[17] |
| Ming-Omni | Advanced multimodal model | 2025 | Project Page[21] |
| Inclusion Arena | Live evaluation leaderboard | August 2025 | arXiv[26] |
Technical Approach and Innovations
Open, Reproducible Systems
inclusionAI's work emphasizes (i) open, reproducible RL training pipelines for reasoning-centric LLMs; (ii) asynchronous system designs that reduce training latency bottlenecks by decoupling rollout generation from parameter updates; and (iii) releasing code, data notes, and, when feasible, model weights for community use and inspection.[22][23][1]
Cost-Efficient Training
inclusionAI has developed methods for training large-scale models on resource-constrained hardware. The organization reported training costs of approximately $880,000 for its Ling models, a roughly 20% reduction compared with training on higher-end hardware.[10] This was achieved through:
- Use of domestically produced Chinese chips from Alibaba and Huawei[9]
- Implementation of EDiT, a Local-SGD-based efficient distributed training method[14]
- FP8 mixed-precision training throughout the entire process (a generic sketch follows this list)[12]
- Novel optimization techniques for heterogeneous computing environments[14]
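For the FP8 bullet above, the snippet below is a generic sketch of FP8 mixed-precision training with NVIDIA Transformer Engine; the sources do not state which FP8 stack inclusionAI's pipeline actually uses, and this pattern requires FP8-capable hardware (e.g., Hopper-class GPUs):

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Requires an FP8-capable GPU and the transformer-engine package;
# a generic illustration, not inclusionAI's training code.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

model = te.Linear(1024, 1024, bias=True).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(32, 1024, device="cuda")
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = model(x)                  # the matmul executes in FP8
loss = y.float().pow(2).mean()    # toy loss for illustration
loss.backward()                   # backward uses FP8-aware kernels
optimizer.step()
```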
Open Source Commitment
All major models and frameworks developed by inclusionAI are released as open-source software, available through platforms including:
- GitHub (primary repository)[1]
- Hugging Face (model distribution)[2]
- ModelScope (Chinese platform)[4]
Research Focus Areas
inclusionAI's research spans multiple domains critical to AGI development:
- Natural language processing and understanding
- Reinforcement learning for reasoning and agent systems
- Multimodal AI combining vision, language, and speech
- Efficient training methods for resource-constrained environments
- Agent-based systems and multi-agent coordination
- Real-world evaluation and benchmarking
Collaboration and Community
The initiative actively encourages collaboration from researchers, developers, and AI enthusiasts worldwide.[1] inclusionAI maintains:
- Open-source repositories, with over 2,000 projects reported[5]
- Active presence on developer platforms
- Integration with Ant Group's broader AI ecosystem
- Partnerships with academic institutions like Tsinghua University[22]
- Collaborations with industry researchers
Relationship to Ant Group and Ecosystem
inclusionAI sits within the wider Ant Group technology and open-source ecosystem, which spans databases, privacy computing, and AI infrastructure. As part of Ant Group, inclusionAI's work supports the parent company's broader AI initiatives, including:
- Healthcare AI applications through the AQ app[5]
- Financial services AI solutions[9]
- Integration with Alipay and other Ant Group services[12]
The models developed by inclusionAI are planned for use in industrial AI solutions across healthcare, finance, and other sectors served by Ant Group.[9] Ant Group communicates its open-source and research activities through corporate channels and events such as the INCLUSION·Conference on the Bund in Shanghai, where it shares AI initiatives and related reports.[28][29]
In September 2025, at the INCLUSION·Conference on the Bund, Ant Group highlighted its AI advancements, including open-source contributions from inclusionAI, underscoring the initiative's role in promoting trustworthy AI across industries.[30]
See also
- Ant Group
- Artificial general intelligence
- Large language model
- Mixture of experts
- Open-source artificial intelligence
- Reinforcement learning
- Multimodal learning
- GitHub
- Hugging Face
- DeepSeek
References
1. GitHub - inclusionAI organization homepage: "This organization contains the series of open-source projects from Ant Group" - https://github.com/inclusionAI
2. Hugging Face - inclusionAI organization profile: "home for Ant Group's AGI initiative" - https://huggingface.co/inclusionAI
3. ICLR 2025 Expo listing: "inclusionAI is a project at Ant Group aiming to develop fully open-sourced AI ecosystem" - https://iclr.cc/virtual/2025/expo-talk-panel/37442
4. inclusionAI official website - https://inclusionai.github.io/
5. "Ant Group 2024 Sustainability Report Highlights AI-Powered Digital Inclusion and New Initiatives From 3 Independent Units" - Business Wire - https://www.businesswire.com/news/home/20250629805132/en/
6. "Ant Group Unveils New Recruitment Initiative for Top AI Talents, Ramping Up AI Innovation Efforts" - Business Wire - https://www.businesswire.com/news/home/20250425203965/en/
7. "Ant Group launches AI hiring drive with top researchers" - Tech in Asia - https://www.techinasia.com/news/ant-group-launches-ai-hiring-drive-with-top-researchers
8. "Ant Group showcases its top AI researchers in bid to woo graduates in tight talent market" - South China Morning Post - https://www.scmp.com/tech/big-tech/article/3308681/ant-group-showcases-its-top-ai-researchers-bid-woo-graduates-tight-talent-market
9. "Jack Ma-backed Ant touts AI breakthrough on Chinese chips" - Fortune - https://fortune.com/asia/2025/03/24/jack-ma-backed-ant-ai-breakthrough-chinese-chips/
10. "Ant Group boasts of breakthrough with new fast, cheap Chinese AI models" - Sherwood News - https://sherwood.news/tech/ant-group-boasts-of-breakthrough-with-new-fast-cheap-chinese-ai-models/
11. "Stop benchmarking in the lab: Inclusion Arena shows how LLMs perform in production" - VentureBeat - https://venturebeat.com/ai/stop-benchmarking-in-the-lab-inclusion-arena-shows-how-llms-perform-in-production
12. "Ling-mini-2.0: Mini-Sized, Maximum Efficiency" - Medium - https://ant-ling.medium.com/ling-mini-2-0-mini-sized-maximum-efficiency-1851936a9034
13. "Ant Group Open-Sources Ring-1T-preview, a Trillion-Parameter Reasoning Model" - Pandaily - https://pandaily.com/ant-group-open-sources-ring-1-t-preview-a-trillion-parameter-reasoning-model
14. "Every FLOP Counts: Scaling a 300B Mixture-of-Experts LING LLM without premium GPUs" - arXiv - https://arxiv.org/html/2503.05139v2
15. GitHub - inclusionAI/Ming: "facilitating advanced multimodal understanding and generation capabilities built upon the Ling LLM" - https://github.com/inclusionAI/Ming
16. "Ant Group and inclusionAI Jointly Launch Ming-Omni: The First Open Source Multi-modal GPT-4o" - AIBase - https://news.aibase.com/news/18921
17. Hugging Face model card: inclusionAI/Ring-1T-preview - https://huggingface.co/inclusionAI/Ring-1T-preview
18. Hugging Face model card: inclusionAI/Ring-1T-preview-FP8 - https://huggingface.co/inclusionAI/Ring-1T-preview-FP8
19. Hugging Face models index (quantized variants derived from inclusionAI/Ring-1T-preview) - https://huggingface.co/models?other=base_model%3Aquantized%3AinclusionAI%2FRing-1T-preview
20. "Ant Group launches trillion-parameter open-source model Ring-1T-preview" - Tech in Asia, September 30, 2025 - https://www.techinasia.com/news/ant-group-launches-trillionparameter-opensource-model
21. Ming-Omni project page - https://lucaria-academy.github.io/Ming-Omni/
22. GitHub - inclusionAI/AReaL: "Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible" - https://github.com/inclusionAI/AReaL
23. "AReaL: A Large-Scale Asynchronous Reinforcement Learning System for Large Reasoning Models" - arXiv - https://arxiv.org/html/2505.24298v2
24. GitHub - inclusionAI/ASearcher: "An Open-Source Large-Scale RL Training Framework for Search Agents" - https://github.com/inclusionAI/ASearcher
25. GitHub - inclusionAI/AWorld: "Build, evaluate and train General Multi-Agent Assistance with ease" - https://github.com/inclusionAI/AWorld
26. "Inclusion Arena: An Open Platform for Evaluating Large Foundation Models" - arXiv - https://arxiv.org/html/2508.11452v2
27. "Researchers Propose New LLM Leaderboard Based on Real-World Data" - DevX - https://www.devx.com/daily-news/researchers-propose-new-llm-leaderboard-based-on-real-world-data/
28. Ant Group Technology page: overview of AI and open-source footprint - https://www.antgroup.com/en/technology/
29. INCLUSION·Conference on the Bund - official site - https://www.inclusionconf.com/en
30. "Ant Group Opensource Releases the 2025 Global Large Model" - AIBase - https://www.aibase.com/news/21272