Publications

DeepSeek: What It Means for Chinese and U.S. Companies’ Strategies in the AI Race

By Louise Marie Hurel, published by the Fletcher Security Review

AI models such as DeepSeek R1 can further stimulate AI innovation across the Indo-Pacific in at least two ways. First, the cost-effectiveness of DeepSeek’s R1 training provides a benchmark against which companies in the region can optimize and test at smaller scales through partnerships with U.S. and Chinese companies. Second, rather than being seen as national security threats, open-weight models such as DeepSeek’s R1 can stimulate innovation around other open models or novel applications. Local and regional Large Language Models (LLMs) in Southeast Asian countries have been built on top of Big Tech LLM architectures. From 2020 to 2024, cross-regional and country-based initiatives in Malaysia, Indonesia, Vietnam, and Thailand released 35 LLMs, 21 of them in 2024 alone.

More importantly, rather than solely adopting foreign technologies from China and the United States, countries in the region have sought to develop their own models as well. AI companies in the region have been developing models better attuned to local languages and dialects. That is the case with homegrown LLMs such as AI Singapore’s SEA-LION, VinAI’s PhoGPT, Mesolitica’s MaLLaM, and India’s Sarvam AI.

Read the full article here.

(This post is republished from the Fletcher Security Review.)
