Taiwan Frontlines: Taiwan in the Global Arena

How AI is Shaping Taiwan's Security, Society, and Strategy

Episode Summary

AI has rapidly become a central arena of global competition, shaping everything from national security and economic power to information ecosystems and democratic resilience. The race for AI leadership now spans energy infrastructure, semiconductor supply chains, global talent, and strategic partnerships, as nations seek to build sustainable and trustworthy technological ecosystems. At the same time, AI is transforming the information domain—enabling unprecedented advances, but also amplifying disinformation, influence operations, and cognitive warfare. Our guest today is Ethan Tu, the founder of Taiwan AI Labs and a pioneering figure in Taiwan’s technology ecosystem, best known for founding PTT, Taiwan’s largest online bulletin board system. He has led groundbreaking AI research across academia, government, and industry, including work at the U.S. National Institutes of Health and as Microsoft’s Director of AI Research and Development in Asia-Pacific, where he helped shape Cortana. Today, Ethan focuses on building human-centric AI for healthcare, smart cities, and democratic resilience, while serving on key boards and advisory bodies shaping Taiwan’s digital future. This episode was created in partnership with NOWNEWS, through its English-language platform Taiwan Current News (https://www.tcn.tw/).

Episode Notes

AI has rapidly become a central arena of global competition, shaping everything from national security and economic power to information ecosystems and democratic resilience. The race for AI leadership now spans energy infrastructure, semiconductor supply chains, global talent, and strategic partnerships, as nations seek to build sustainable and trustworthy technological ecosystems. At the same time, AI is transforming the information domain—enabling unprecedented advances, but also amplifying disinformation, influence operations, and cognitive warfare.

Our guest today is Ethan Tu, the founder of Taiwan AI Labs and a pioneering figure in Taiwan’s technology ecosystem, best known for founding PTT, Taiwan’s largest online bulletin board system. He has led groundbreaking AI research across academia, government, and industry, including work at the U.S. National Institutes of Health and as Microsoft’s Director of AI Research and Development in Asia-Pacific, where he helped shape Cortana. Today, Ethan focuses on building human-centric AI for healthcare, smart cities, and democratic resilience, while serving on key boards and advisory bodies shaping Taiwan’s digital future.

This episode was created in partnership with NOWNEWS, through its English-language platform Taiwan Current News (https://www.tcn.tw/). 

Timestamps:
[00:00] Intro
[02:02] AI as a Societal and Institutional Power
[04:42] Data Commons & Digital Autonomy
[06:17] Tracking China’s Progress in AI 
[07:57] Synthetic Media Shifting Behavior in Taiwan
[10:37] Risks AI Can Pose to Taiwan in a Crisis
[14:48] Defensive AI Applications: Where to Invest for Deterrence 
[16:42] Strategies for Regulation 
[20:00] Utility of Export Controls on China
[23:37] Cultivating Talent for the AI Work Force 

Episode Transcription

Bonnie Glaser: I'm Bonnie Glaser, managing director of the Indo-Pacific program at the German Marshall Fund of the United States.

Jason Hsu: And I'm Jason Hsu, former legislator from Taiwan, and senior fellow at the Hudson Institute. 

Bonnie Glaser: Welcome to the Taiwan Frontlines podcast. This episode was created in partnership with NOWNEWS through its English language platform, Taiwan Current News.

Jason Hsu: AI has rapidly become a central arena of global competition, shaping everything from national security and economic power to information ecosystems and democratic resilience. The race for AI leadership now spans energy infrastructure, semiconductor supply chains, global talent, and strategic partnerships. As nations seek to build a sustainable and trustworthy technological ecosystem, at the same time, AI is transforming the information domain, enabling unprecedented advances but also amplifying disinformation, influence operations, and cognitive warfare.

Bonnie Glaser: Our guest today is Ethan Tu, the founder of Taiwan AI Labs and a pioneering figure in Taiwan's technology ecosystem. He's best known for founding PTT, Taiwan's largest online bulletin board system. Ethan has led groundbreaking AI research across academia, government, and industry, including work at the United States National Institutes of Health and as Microsoft's Director of AI Research and Development in Asia Pacific, where he helped shape Cortana. Today, Ethan focuses on building human-centric AI for healthcare, smart cities, and democratic resilience, while also serving on key boards and advisory bodies shaping Taiwan's digital future.

Jason Hsu: When you think about AI as a societal and institutional power, rather than military power, what capabilities matter most for Taiwan? In particular, how do data governance, federated learning, and trust shape Taiwan's AI strategy?

Ethan Tu: So in Taiwan, when we think about AI, of course, we first think about chips. Taiwan is very famous for manufacturing AI chips; therefore, computing power, high-speed computing machines, that is our strength.

And at the same time, we want to build trustworthy and responsible AI solutions that the world can adopt, using artificial intelligence in a very trustworthy and responsible way. Therefore, in Taiwan, for example, we focus on federated technology. So we not only help the global big tech companies build all the big data centers, but we also help private companies build up their own on-premises solutions.

So the federated large language model, federated data governance... we use the federated approach to build out solutions for hospitals and banks. In Taiwan, we worked closely with the central government and also the medical centers. For example, 92% of the medical centers we cover work together to set up a common data model and common protocol, so that not only the computing power but also the data governance and the talent, the experts who have the expertise to build healthcare solutions, can use the federated machine learning framework with Taiwan AI Labs to deploy their healthcare solutions.

For example, during COVID-19, we quickly put together a federated solution using chest X-rays collected in decentralized hospitals. So with the hospitals' medical images, we could train an AI model to identify COVID-19 from a chest X-ray, together.
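The decentralized training Ethan describes follows the general shape of federated averaging: each hospital fits a model on its own private data, and only the model weights, never the images, leave the site and get averaged. A minimal sketch of that idea with simulated data (the site count, feature shapes, and hyperparameters are illustrative, not Taiwan AI Labs' actual system):

```python
import numpy as np

rng = np.random.default_rng(0)

def local_step(w, X, y, lr=0.1, epochs=50):
    # One site's private training: logistic-regression gradient descent.
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Simulate three hospitals, each holding private labeled feature vectors
# (stand-ins for features extracted from chest X-rays).
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(200, 2))
    y = (X @ true_w + rng.normal(scale=0.1, size=200) > 0).astype(float)
    sites.append((X, y))

# Federated averaging: each round, every site trains locally from the
# shared global weights; only the weights are sent back and averaged.
w_global = np.zeros(2)
for _ in range(5):  # communication rounds
    local_ws = [local_step(w_global.copy(), X, y) for X, y in sites]
    w_global = np.mean(local_ws, axis=0)

print(np.sign(w_global))
```

The averaged model recovers the shared signal even though no site ever pools its raw data; real deployments add secure aggregation and differential privacy on top of this basic loop.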

Jason Hsu: You've emphasized the chips, data commons, and open-source ecosystem. From your perspective, how do these capabilities help Taiwan maintain digital autonomy and reduce dependence on external platforms or foreign models?

Ethan Tu: So for capability, of course, advanced chipset manufacturing is our strength. And we will not only build manufacturing companies for Taiwan; we also, for example, collaborate with the United States. We set up a campus in the United States to help the world build computing power and manufacturing capability, and at the same time data governance capability. Taiwan wants to set an example of how we can work closely with the private and public sectors on data governance. So in the AI area for health, for financial banking... we not only leverage the big AI models, like ChatGPT or Gemini, but we can also train in-house AI models that work together with the big AI models. We release open-source solutions to the open-source data science community, for example, the federated platform, so they can train AI models and build out AI capabilities by themselves.

Bonnie Glaser: From what you've observed, Ethan, where has China made the most progress in AI-enabled information warfare? And how does Beijing use AI to shape narratives, manipulate social media behavior, and influence public opinion in Taiwan?

Ethan Tu: Of course, we build out solutions like fully autonomous vehicles and autonomous drones to identify threats automatically, but at the same time we face a very big threat from China: information manipulation on social media.

In Taiwan we have a platform we call Infodemic. With this Infodemic platform, we set up AI models to monitor the news and understand online behavior automatically, at very large scale. Then we can identify the real users' accounts and also the inauthentic accounts. By distinguishing real users [from] inauthentic accounts, we can understand what kind of narratives China is spreading in Taiwan's society, and which narratives create victims in Taiwan, meaning organic users who follow the narrative and feel angry, feel unrest, and feel afraid. Those are the victims of these narratives.

Jason Hsu: Ethan, could you walk us through one or two concrete cases where AI-enabled coordination bots or synthetic media measurably shifted the public behavior in Taiwan or elsewhere?

Ethan Tu: We actually look into the information space day to day, and every day there are thousands of events we are following. By following subsequent events we can test and establish patterns.

For example, during COVID-19, there were narratives telling people Taiwan was running out of toilet paper. The toilet paper information manipulation happened in Taiwan first, then in Japan, then in Australia, then went to the United States. So we can find the same narrative being manipulated in different countries with the same strategy.

In the past, we could easily identify a troll account: it had no profile and no head [profile picture]. But later on, after generative AI, we see troll accounts that look pretty authentic, because they all have profiles, they have timelines. And if you go to the troll, it can even respond to you. So that's very interesting.

Our troll account studies found we could easily identify [them] at first... later on it became harder and harder for a normal user to identify them. But if we use artificial intelligence to follow their activity, we can still understand the behavior and know they are not real users.
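One simple way to operationalize the behavior-based detection Ethan mentions is coordination analysis: no matter how authentic a profile looks, an account whose posts are dominated by text that many other accounts also posted verbatim is suspicious. A toy heuristic (the thresholds, account names, and sample posts are made up for illustration, not the Infodemic platform's actual method):

```python
from collections import Counter

def flag_coordinated(posts, min_shared=3, share_ratio=0.5):
    """posts: list of (account, text) pairs.
    Flags accounts where at least `share_ratio` of their posts are texts
    that `min_shared` or more accounts posted verbatim (a burst pattern
    typical of coordinated inauthentic behavior)."""
    text_counts = Counter(text for _, text in posts)
    by_account = {}
    for account, text in posts:
        by_account.setdefault(account, []).append(text)
    flagged = set()
    for account, texts in by_account.items():
        shared = sum(1 for t in texts if text_counts[t] >= min_shared)
        if shared / len(texts) >= share_ratio:
            flagged.add(account)
    return flagged

# Three accounts posting identical narratives in lockstep, plus two
# organic users posting their own text.
posts = [
    ("troll_a", "narrative X"), ("troll_b", "narrative X"),
    ("troll_c", "narrative X"), ("troll_a", "narrative Y"),
    ("troll_b", "narrative Y"), ("troll_c", "narrative Y"),
    ("organic_1", "my own opinion"), ("organic_2", "another take"),
]
print(flag_coordinated(posts))
```

Real systems combine many such behavioral signals (posting cadence, account age, network structure) rather than text duplication alone, but the principle is the same: follow the activity, not the profile.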

So, for example, before Hamas attacked Israel, we actually saw a lot of information manipulation. They were creating the same narrative with different actors and different background images, but telling the same story. The story was saying, for example, that the Palestinians have been in Gaza for thousands of years and the United States brought Israel back to Gaza to do genocide. So we saw these kinds of videos telling the same story again and again. And a lot of them were spreading in Mandarin.

Bonnie Glaser: How does AI-enabled information warfare increase the risk of miscalculation or even panic before a conflict even begins?

Ethan Tu: Yeah. So if we look into the risks before Russia invaded Ukraine and before Hamas attacked Israel, we actually noticed that cyber warfare, including information manipulation, happens before the military action.

The first risk usually comes very early. For example, they try to discredit the Taiwanese government. Before Russia invaded Ukraine, they were saying the Ukrainian government is corrupt, for example, and that Ukraine was committing genocide in the east of Ukraine. Similarly, in Taiwan we actually see the information manipulation.

For example, China is trying to attract Taiwanese people to become Chinese citizens, and they are leveraging dual citizenship in Taiwan: people who have Taiwan citizenship and then China citizenship. They are trying to create chaos, to construct a reason that China needs to protect those people, like in Kinmen. So that's one example we can see.

Also, they try to discourage your allies from supporting Taiwan or from supporting Ukraine. For example, during the Russia-Ukraine war, we saw information manipulation where they pretended to be Ukrainian people and attacked United States support. We can observe similar behavior in Taiwan. During COVID-19, we saw troll accounts on Twitter pretending to be Taiwanese people and attacking Dr. Tedros. And that caused Dr. Tedros to say Taiwan had a [troll] army attacking the WHO.

In Taiwan we see a lot of troll accounts... they try to discredit United States support for Taiwan. They pretend to be Taiwanese people and spread a narrative saying United States arms sales to Taiwan happen because the United States wants to sell Taiwan weapons, and that they actually bring war to Taiwan.

We see a similar pattern in Ukraine as well. Before Russia invaded Ukraine, you saw a lot of narratives spreading, saying NATO and the United States were bringing war to Ukraine because NATO is a United States puppet. And also, recently, the prime minister of Japan gave a speech saying that if something happens to Taiwan, it is also a matter for Japan. Around these kinds of statements, we can also see information manipulation: a lot of accounts, which are actually not Taiwanese and which we believe are China's trolls, pretend to be Taiwanese and say the Japanese statement will cause a war in Taiwan.

So actually, democratic countries need to work together. We need to protect our democracy together. But there are accounts... they are exploiting the freedom of speech.

Jason Hsu: If Taiwan could invest in owning two or three AI-enabled defenses for the information domain over the next few years, where would these investments have the greatest impact?

Ethan Tu: I think because we are democratic countries, we need solutions that detect the attack without resorting to censorship of information. But at the same time, I would say information warfare is just like an attack. So we need to identify it in real time. Real-time identification capability is important.

And at the same time, we should build up a matching capability. If the false narrative runs faster than the trustworthy narrative, people will be confused. So we need to identify those first and tell people the strategy of the information warfare. At the same time, we want trustworthy information, real humans' statements, to be delivered and amplified.

So we want the truth to be amplified and the information manipulation to be understood, so people know the threat strategy. It is like anti-fraud: when we see a scam and the strategy that leads to the scam or the fraud, then we can educate our people. If you see these kinds of narratives, you need to be very careful, because those narratives are actually trying to scam you.

Jason Hsu: Let me just press, Ethan. Different countries have different ways to deal with misinformation. I know, for example, EU, they legislated laws to regulate platforms. We've had debates in Taiwan over platform regulation as well. It sounds to me that you are in a camp of regulating some sort of information. Can you just sort of talk about it a little bit because I wanted to know how exactly we can address this problem rather than just kind of talking about the why and what happened?

Ethan Tu: Yeah, okay. So if you look into information manipulation, the platform companies actually benefit from it, because the fake accounts and the fake traffic are generating revenue.

And at the same time, if our government, for example Taiwan's, wants to let people know what is really happening, we need to buy advertisements, which means that if there is more information manipulation on the platform, the Taiwanese government needs to spend more money buying ads.

So, from my point of view, I actually support regulation, just like the EU. If you look into information manipulation, it is actually just like carbon dioxide. I would like to make the comparison: we want a good environment to protect, so if a company emits a lot of carbon dioxide, we want to tax it.

Of course, if we look into information manipulation, and the information manipulation brings benefit to the platform... like a recent study saying Facebook benefits from fraud traffic and fraud advertisements, then this should be the responsibility [of the platform]. The platform companies should remove the information manipulation as much as possible and make it a goal to root it out. And if not, then we should tax the platform.

My thinking is that the reason Taiwan has difficulty putting regulation in place is that the big tech platforms, a lot of the time, just don't bother with Taiwan's regulation. And I know that in the United States, during Biden's administration, there was also an AI Bill of Rights. I think we should consider those kinds of efforts, and we should learn from other countries.

Bonnie Glaser: Ethan, as you know, there's this really intense debate underway about the utility of export controls on China, and restrictions on chip sales and whether that's the right approach to prevent China from catching up in AI capabilities and particularly to prevent them from applying their AI capabilities to their developments in the military realm.

Obviously, some people think that if we were to use fewer export controls and just compete openly, that that would be the best way to stay ahead. But that is not necessarily, I think, what is going to prevail right now. We seem to have a mixture, a bit of a pause in some of the US controls, although I do think that the prevailing view in the United States is that export controls have worked to some extent, but that we actually need to close the gaps, the loopholes, in order to make them more effective. So what's your view on this?

Ethan Tu: I would say export controls will help in the short term. But in the long run, they are not good for the United States. Export controls... if you look into the mainstream solutions, a lot of them rely on high-speed computing. For high-speed computing, the manufacturing and also the ecosystem are centered in Taiwan and the United States. Therefore, if we have export controls, of course China will have difficulty producing the computing... the military weapons relying on AI computing power at large scale.

So, for example, a lot of the time people think you cannot restrict China from training large language models, because they can still rent high-speed computers from other countries. But if you look at the solutions for IoT devices, for autonomous vehicles, for drones... every drone, every autonomous vehicle needs its own chips, so, unlike training, they cannot rely on renting the computing power.

So from this point of view, with export controls we can control the speed at which China manufactures autonomous vehicles. From this point of view, I would say it is reasonable. But China is also maneuvering, trying to manufacture alternative chips; they have their own chipsets, although they are a couple of generations behind. But they are working on it.

Jason Hsu: Following Bonnie's question, Ethan, the way I see the US-China AI race, there are several aspects we need to dig deeper into. One is, obviously, as you mentioned, the chips. China would be able to train their models with "good enough" chips, and then, with a larger quantity and more sufficient power, get to the same level of performance as the US.

But my question is about talent. Obviously, Taiwan is an important hub for AI, as you mentioned, on the chips manufacturing side. My question to you is: how would you advise the Taiwan government to continuously cultivate talent for the AI workforce? That's my first question.

And secondly, China obviously falls behind in AI chips manufacturing because of export controls and so on, but it is catching up on inference models, which are critical to the deployment and diffusion of AI applications. And that's also an area you specialize in, in medical and other types of applications. So what would be your suggestions or recommendations to our government working in this field, to think about this issue on the inference side as well as on the application side and talent?

Ethan Tu: Okay, so I think there are three areas: one is talent, one is AI models, and one is the solution side.

Of course, in Taiwan, training AI talent is also one of the main agendas of the Taiwanese government. So we have a strategy that in the next couple of years, we want to train the industry; we want to train managers who are able to use AI in industry for technological transformation.

And also, in the National Science and Technology Council, we have a project for a Taiwanese large language model. We call it Taide. With the Taide large language model, we actually collect the legacy models we have in Taiwan. We have our schools, we have the best students working on this large language model, so the students know how to build generative AI models and how to tune those models with domestic data.

Taiwan also had an earlier project we called TAME (Taiwan Mixture of Experts). With the TAME AI model, we can help our different industries. We released it as open source, so our industry can use this AI model to build solutions for healthcare or for financial banking. This open-source model is used for particular domains.

We also have a closed-source project we call Federated GPT, because we access a lot of data through licensing. With Federated GPT, we have licensed data to train good AI models. If you pay a small fee for this licensed model... Taiwan AI Labs is like a base AI university that trains those AI models, and the model can go to your company, where you can further tune it for your usage. This part is already being used in different particular domains. This is how we build out the talent's capability, and we also build out the industry's capability to adopt the solutions.

And you also mentioned China. I see that in Taiwan and China, we are doing good things; we are doing very well on small AI models and open-source AI models. But for cross-domain, I would say the United States is still the world leader in terms of cross-domain AI models like ChatGPT, Gemini, and Claude, also for programming, and now Suno for music. So for closed-source AI models, I believe the United States still has the world leadership. But for open-source AI models for particular domains, Taiwan and China have a lot of adoption.

Bonnie Glaser: We've been talking with Ethan Tu, who is the founder of Taiwan AI Labs and a leading thinker on AI and technology, not only in Taiwan but also globally. Thanks so much for joining us, Ethan.

Ethan Tu: Thank you, Bonnie. Thank you, Jason.