
DeepSeek-V3.5: 671B MoE Model Surpasses GPT-5.2 on Chinese & English Long-Context Benchmarks

2026-02-16

DeepSeek has open-sourced V3.5, a 671B-parameter mixture-of-experts (MoE) model that sets a new state of the art on Chinese and English long-context benchmarks at 1M+ tokens. With native tool-calling and improved multilingual reasoning, it ranks among the strongest open-weight models for enterprise long-document processing.
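As a rough illustration of what native tool-calling means in practice, here is a minimal sketch assuming DeepSeek keeps its OpenAI-compatible chat API. The model identifier `deepseek-v3.5` and the `search_document` tool are hypothetical placeholders, not confirmed details from the release.

```python
# Hedged sketch: tool-calling against an OpenAI-compatible endpoint.
# Endpoint, model name, and tool schema below are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",  # assumption: DeepSeek's existing endpoint
    api_key="YOUR_API_KEY",
)

# Declare a tool the model may invoke while working over a long document.
tools = [
    {
        "type": "function",
        "function": {
            "name": "search_document",  # hypothetical helper, for illustration only
            "description": "Search a long document for passages matching a query.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "Search terms."}
                },
                "required": ["query"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="deepseek-v3.5",  # assumption: model identifier not confirmed
    messages=[
        {
            "role": "user",
            "content": "Summarize the indemnification clauses in the attached contract.",
        }
    ],
    tools=tools,
)

# If the model chose to call the tool, the call (name plus JSON arguments)
# appears on the first choice's message.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```

With native tool-calling, the model emits structured function calls like the one above directly, rather than relying on prompt-level workarounds to produce parseable output.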
