By: SpaceEyeNews
China just flipped on something that sounds like science fiction, but targets a very practical problem: how to make far-apart data centers behave like one machine. According to reporting summarized by the South China Morning Post and echoed by Interesting Engineering, the Future Network Test Facility (FNTF) entered operation on December 3, 2025.
The headline claim is the part that grabs attention. China’s team says the linked computing “pool” stretches about 2,000 km (1,243 miles) and can still deliver around 98% of the efficiency of a single unified data-center cluster (South China Morning Post). If that number holds up in real workloads, it changes the math of AI training and real-time services across a whole country.
This SpaceEyeNews breakdown focuses on what the China giant computer network is, why “98% efficiency” matters, and the specific reality checks that will decide whether this becomes everyday infrastructure or a one-off demo.
China giant computer network goes live: what happened on December 3
The FNTF is not a single building. Think of it as a high-speed optical backbone that links multiple computing centers across many cities so they can share work and stay synchronized. Interesting Engineering describes it as a distributed AI-computing pool that can operate almost like “a single giant computer.”
The SCMP report adds several scale markers that help you visualize the system. It says the facility spans 40 cities and includes 55,000+ km of optical transmission. That is enough fiber length to wrap around Earth more than once. The same reporting says the platform can simultaneously support 128 heterogeneous networks and run 4,096 service trials in parallel.
How the Future Network Test Facility works: “deterministic” networking
The most important technical phrase in the reporting is “deterministic network.” The project’s chief director, Liu Yunjie of the Chinese Academy of Engineering, says the network delivers predictable, stable transmission for workloads with extreme real-time demands, including large AI model training and telemedicine (South China Morning Post).
Here’s why that matters.
When you train a large AI model across many processors, the chips do a ton of parallel math. Then they must share updates. If one part of the system lags, the others wait. In a single data center, the waiting stays manageable because the network stays fast and consistent.
Stretch the same training job across multiple cities and things usually break down. Latency rises. Network jitter becomes the enemy. Small timing fluctuations snowball into wasted compute.
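To see how those fluctuations snowball, here is a minimal Python sketch (purely illustrative numbers, not FNTF data): in synchronous training, every step waits for the slowest worker, so widening the jitter range stretches every one of thousands of steps.

```python
import random

random.seed(0)

def step_time(num_workers, compute_s, base_net_s, jitter_s):
    """One synchronous step: every worker computes, then exchanges updates.
    The step ends when the slowest worker finishes (compute + network)."""
    finish_times = [
        compute_s + base_net_s + random.uniform(0, jitter_s)
        for _ in range(num_workers)
    ]
    return max(finish_times)

def total_hours(steps, **kwargs):
    return sum(step_time(**kwargs) for _ in range(steps)) / 3600

# Illustrative numbers only: 512 workers, 10 s of compute per step,
# 1 s of baseline communication, and either low or high network jitter.
low  = total_hours(1000, num_workers=512, compute_s=10.0, base_net_s=1.0, jitter_s=0.5)
high = total_hours(1000, num_workers=512, compute_s=10.0, base_net_s=1.0, jitter_s=8.0)
print(f"1,000 steps, low jitter:  {low:.1f} h")
print(f"1,000 steps, high jitter: {high:.1f} h")
```

With hundreds of workers, the slowest one almost always lands near the top of the jitter range, which is why the deterministic-network pitch emphasizes predictability, not just raw speed.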
What “98% efficiency” implies
The claim of ~98% efficiency means the system loses very little performance to coordination overhead, even across long distances. SCMP frames this as “almost as efficiently as a single giant computer.” In practice, that would mean the network can keep synchronized training and real-time services humming without constant stalls.
No one should treat 98% like a guaranteed everyday number. Still, it signals the design goal: make distance feel short.
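One way to read that figure (my interpretation, since the reporting does not publish a formula) is as a ratio of completion times, which would put the coordination overhead at roughly 2% of the wall clock:

```python
# Back-of-envelope reading of "98% efficiency" (an assumed definition, not an official one):
# efficiency = single_cluster_time / distributed_time
single_cluster_hours = 100.0   # hypothetical job on one unified cluster
efficiency = 0.98              # claimed figure
distributed_hours = single_cluster_hours / efficiency
print(f"Distributed run: {distributed_hours:.1f} h "
      f"({distributed_hours - single_cluster_hours:.1f} h of overhead)")
# -> Distributed run: 102.0 h (2.0 h of overhead)
```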
Why “98% efficiency” matters for AI training cycles
The reporting provides a concrete example. Liu describes training a model with hundreds of billions of parameters. He says such training can require over 500,000 iterations. On this deterministic network, he claims each iteration can take about 16 seconds. Without that capability, each iteration could take more than 20 seconds longer, which could add months to the training cycle (Interesting Engineering).
Those numbers matter because they show the compounding effect.
A single iteration difference looks minor. Multiply it by 500,000 and you get a calendar-level change. That is the difference between “a model update in weeks” and “a model update in a season.”
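The arithmetic behind that calendar-level change is straightforward, using the figures cited in the reporting (16 s per iteration with the deterministic network, at least 20 s more without it, over 500,000 iterations):

```python
iterations = 500_000
fast_iter_s = 16        # per-iteration time cited for the deterministic network
extra_s_per_iter = 20   # "more than 20 seconds longer" without it (lower bound)

fast_days = iterations * fast_iter_s / 86_400          # 86,400 seconds per day
extra_days = iterations * extra_s_per_iter / 86_400
print(f"Iteration time with the deterministic network: ~{fast_days:.0f} days")
print(f"Extra time added without it (lower bound):     ~{extra_days:.0f} days")
# -> roughly 93 days vs. about 116 extra days, which is where "adds months" comes from
```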
Faster training is not just about bragging rights
AI teams care about three things at once:
- Time to train (how quickly you can ship a model).
- Cost to train (how many expensive GPU-hours you burn).
- Iteration speed (how quickly you can test improvements).
If you can pool computing across regions and keep efficiency high, you can spread workloads to where energy is cheaper, where there is spare capacity, or where regulations support large builds. That is where this story connects to China’s larger national infrastructure plan.
China giant computer network meets “East Data, West Computing”
China has pushed to rebalance where data centers live. The idea often shows up as “East Data, West Computing,” where energy-rich regions in the west host large computing hubs and the more populous east consumes those services.
An official English-language release from China’s National Development and Reform Commission (NDRC) described this strategy in 2022, noting approval of eight national computing hubs and 10 national data-center clusters.
A separate Reuters report from 2024 cited China’s National Data Administration head saying China had invested 43.5 billion yuan in the project and that the hubs had attracted more than 200 billion yuan in investment. China’s central government English site (english.www.gov.cn) also carried a summary of those investment figures.
So where does the FNTF fit?
The FNTF provides the missing glue. Data centers in energy-rich regions do not help much if the network cannot deliver low-latency, predictable service to the places that need it. The China giant computer network aims to turn “remote capacity” into “usable capacity.”
The “national compute grid” idea
In the simplest form, China wants to treat computing like a utility. Users should be able to request resources. The platform should schedule work across regions. Ideally, the user should not need to care where the physical servers sit.
That is the promise. It also runs into hard constraints.
The reality check: utilization, latency, and mixed hardware
If you follow China’s data-center buildout, you have heard a second theme: oversupply.
Reuters reported in July 2025 that many government-backed data centers built during the boom ran at only 20–30% utilization. In the same report, Reuters said China planned a national network and cloud platform to sell surplus computing power and organize scheduling across data centers.
That context matters for the FNTF story. China does not only need more computing. It needs better use of the computing it already built.
Challenge 1: real workloads are messy
Benchmarks love stable conditions. Real services change minute to minute. AI training can involve bursts, pauses, and heavy communication phases. Industrial apps can demand strict latency windows. Telemedicine may require consistent performance, not peak performance.
The FNTF will earn trust only if it holds up when many different services run at once.
Challenge 2: latency sensitivity still exists
Even a deterministic network cannot break physics. A signal still takes time to travel. The system can reduce jitter and improve predictability, but some tasks remain better inside one data center.
This creates a practical outcome: the “giant computer” will likely work best as a scheduler that chooses the right job for the right distance. Some workloads will spread across cities. Others will stay local.
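As a toy illustration of that split (the thresholds and policy below are hypothetical, not the FNTF’s actual scheduler), the sketch estimates the unavoidable fiber propagation delay from distance and only spreads a job when the round trip fits its latency budget:

```python
SPEED_OF_LIGHT_KM_S = 300_000
FIBER_FACTOR = 0.67          # light in glass travels at roughly 2/3 of c

def one_way_delay_ms(distance_km: float) -> float:
    """Minimum propagation delay over fiber; real paths add switching and routing."""
    return distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR) * 1000

def placement(distance_km: float, max_rtt_ms: float) -> str:
    """Hypothetical policy: spread a job across sites only if the round trip
    fits inside the job's latency budget."""
    rtt = 2 * one_way_delay_ms(distance_km)
    return "spread across cities" if rtt <= max_rtt_ms else "keep in one data center"

for km, budget in [(2000, 25.0), (2000, 10.0), (300, 10.0)]:
    print(f"{km:>5} km, budget {budget:>4} ms RTT -> {placement(km, budget)}")
```

At 2,000 km the round trip alone approaches 20 ms before any switching or routing, which is why tightly coupled phases tend to stay local while latency-tolerant work spreads out.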
Challenge 3: hardware and software fragmentation
Reuters also highlighted another barrier: hardware diversity. China’s ecosystem includes data centers built around different chip families, including systems relying on Nvidia GPUs and others using domestic alternatives. Mixed hardware complicates orchestration, software stacks, and performance predictability.
In other words, building fiber between cities does not automatically make the whole country one uniform supercomputer. The software layer must do a lot of work.
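Here is a minimal sketch of one piece of that software work (the site names and chip families are invented for illustration): before pooling capacity, an orchestrator has to filter sites by the accelerator stack a job was actually built for.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    chip_family: str      # e.g. "nvidia", "domestic_a", "domestic_b"
    free_accelerators: int

# Hypothetical inventory: mixed chip families across cities.
SITES = [
    Site("west-hub-1", "nvidia", 2048),
    Site("west-hub-2", "domestic_a", 4096),
    Site("east-edge-1", "domestic_b", 512),
]

def eligible_sites(required_family: str, needed: int):
    """A pooled scheduler can only use sites whose hardware matches the job's
    software stack; mixed families fragment the 'single giant computer'."""
    return [s for s in SITES if s.chip_family == required_family
            and s.free_accelerators >= needed]

print([s.name for s in eligible_sites("nvidia", 1024)])       # -> ['west-hub-1']
print([s.name for s in eligible_sites("domestic_a", 8192)])   # -> []
```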
What this enables beyond AI: telemedicine and industrial internet
The FNTF team points to use cases beyond AI training. Liu mentions telemedicine and the industrial internet as examples that benefit from highly reliable, real-time transmission (Interesting Engineering).
Telemedicine: more than video calls
Telemedicine can mean many things. Sometimes it is basic consultation. Sometimes it involves high-resolution imaging, remote diagnostics, and rapid sharing of patient data between hospitals.
When the network stays stable and predictable, remote specialists can access tools without long delays. That is the promise described in the reporting (Interesting Engineering).
Industrial internet: long-distance coordination
Factories and infrastructure systems generate data nonstop. The industrial internet vision involves analyzing that data quickly, then sending control decisions back with strict timing.
A deterministic network supports that style of coordination. It can also support simulation and optimization tasks that must run close to real time.
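A small sketch of what “deterministic” buys such a control loop (all numbers illustrative): the question is not the average delay but whether the worst case still fits inside the loop’s timing budget.

```python
def loop_fits(deadline_ms: float, sense_ms: float, compute_ms: float,
              net_worst_case_ms: float) -> bool:
    """Remote control loop: sensor data travels to a distant data center,
    a decision is computed, and the command travels back. The loop only
    works if the worst-case total stays under the deadline."""
    total = sense_ms + compute_ms + 2 * net_worst_case_ms
    return total <= deadline_ms

# Illustrative: a 50 ms control deadline, 5 ms sensing, 10 ms compute.
print(loop_fits(50, 5, 10, net_worst_case_ms=12))   # True  -> bounded jitter, loop holds
print(loop_fits(50, 5, 10, net_worst_case_ms=30))   # False -> a long tail breaks the loop
```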
The “prove it” checklist for 2026
The most honest way to cover this story is to treat it like a launch, not a finish line. China activated the network. Now it needs to demonstrate performance in the open.
Here are the signals worth watching if you track the China giant computer network story in 2026:
1) Sustained efficiency under load
The headline 98% number sounds great. The key question is whether it stays high across:
- many simultaneous users,
- different workloads,
- and long time windows.
2) Practical cost per training run
If the network cuts training time, it should cut cost. The best proof will look like case studies: “same model, less time, less money.” The reporting claims lower costs and broader access in principle (Interesting Engineering).
3) Reliability metrics
High-end services need strong uptime. Watch for published reliability targets, outage reports, and recovery performance.
4) Scheduling across a national platform
Reuters’ July 2025 report describes China’s plan for unified scheduling and a state-run cloud platform to sell surplus computing power. The FNTF could become an enabling layer for that goal, but only if it integrates with real procurement and real customers.
5) Energy and sustainability pressure
Compute expansion always runs into electricity demand. China has explored energy-efficient designs in parallel. For example, Wired reported on a wind-powered undersea data center project that aims to cut cooling energy and improve PUE (power usage effectiveness).
That does not directly validate the FNTF. Still, it shows the direction of travel: China wants more compute without letting energy costs explode.
Conclusion: why the China giant computer network matters
The FNTF activation matters because it targets the bottleneck that most people ignore. Chips get faster every year. Networking limits the scale.
By linking computing centers over roughly 2,000 km (1,243 miles) and claiming ~98% single-cluster efficiency, China signals a push toward “compute as national infrastructure,” not “compute as isolated buildings” (South China Morning Post). The system also aligns with “East Data, West Computing,” which aims to turn energy-rich regions into usable computing hubs.
Still, the next chapter will decide the story. Sustained performance, real customers, high utilization, and stable costs will prove whether the China giant computer network becomes a daily engine for AI and real-time services.
If those proofs land, this is not just a technical milestone. It becomes a new model for how a country can “pool” its digital power.
Main sources:
- Interesting Engineering report on FNTF activation and the 1,240-mile / 98% claim.
- South China Morning Post coverage citing Science and Technology Daily and Liu Yunjie.
- Reuters (July 2025) on data-center oversupply and the national scheduling network plan.
- Reuters (Aug 2024) and China’s central government English site on East Data, West Computing investment figures.
- Wired on China’s wind-powered undersea data center (energy efficiency context).