Artificial intelligence | Tech Wire Asia | Latest Updates & Trends
https://techwireasia.com/category/artificial-intelligence/
Mon, 14 Apr 2025 13:10:48 +0000

First it was Ghibli, now it’s the AI Barbie Box trend
https://techwireasia.com/2025/04/first-it-was-ghibli-now-its-the-ai-barbie-box-trend/
Mon, 14 Apr 2025 13:10:48 +0000

  • Following the Ghibli portraits, the AI Barbie trend comes to LinkedIn.
  • Blending nostalgia with self-promotion, the trend produces brand interest but little celebrity uptake.

    After gaining attention with Studio Ghibli-style portraits, ChatGPT’s image generator is now powering a new wave of self-representation online – this time with users turning themselves into plastic action figures.

    What began as a quirky trend on LinkedIn has now spread to platforms like Instagram, TikTok, Facebook, and X. The trend includes different takes, but the “AI Action Figure” version is among the most common. It typically shows a person recreated as a doll encased in a plastic blister pack, often accessorised with work-related items like laptops, books, or coffee mugs. That’s fitting, considering the trend’s initial traction among professionals and marketers on LinkedIn.

    Other versions draw inspiration from more recognisable aesthetics, like the “Barbie Box Challenge,” where the AI-generated figure is styled to resemble a vintage Barbie.

    The rise of the virtual dolls follows the earlier success of the Studio Ghibli-style portraits, which pushed ChatGPT’s image capabilities into the spotlight. That earlier trend sparked some backlash related to environmental, copyright, and creative concerns – but so far, the doll-themed offshoot hasn’t drawn the same level of criticism.

    What’s notable about the trends is the consistent use of ChatGPT as the generator of choice. OpenAI’s recent launch of GPT-4o, which includes native image generation, attracted such a large volume of users that the firm had to temporarily limit image output and delay rollout for free-tier accounts.

    While the popularity of action figures hasn’t yet matched that of Ghibli portraits, it does highlight ChatGPT’s role in introducing image tools to a broader user base. Many of these doll images are shared by users with low engagement, and mostly in professional circles. Some brands, including Mac Cosmetics and NYX, have posted their own versions, but celebrities and influencers have largely stayed away. One notable exception is US Representative Marjorie Taylor Greene, who shared a version of herself with accessories including a Bible and a gavel, calling it “The Congresswoman MTG Starter Kit.”

    What the AI Barbie trend looks like

    The process involves uploading a photo into ChatGPT and prompting it to create a doll or action figure based on the image. Many users opt for the Barbie aesthetic, asking for stylised packaging and accessories that reflect their personal or professional identity. The final output often mimics retro Barbie ads from the 1990s or early 2000s. Participants typically specify details like:

    • The name to be displayed on the box
    • Accessories, like pets, smartphones, or coffee mugs
    • The desired pose, facial expression, or outfit
    • Packaging design elements like colour or slogans

    Users often iterate through several versions, adjusting prompts to better match their expectations. The theme can vary widely – from professional personas to hobbies or fictional characters – giving the trend a broad creative range.

    How the trend gained momentum

    The idea gained visibility in early 2025, beginning on LinkedIn where users embraced the “AI Action Figure” format. The Barbie-style makeover gained traction over time, tapping into a blend of nostalgia and visual novelty. Hashtags like #aibarbie and #BarbieBoxChallenge have helped to spread the concept. While the Barbie-inspired version has not gone as viral as the Ghibli-style portraits, it has maintained steady traction on social media, especially among users looking for lighthearted ways to express their personal branding.

    https://youtube.com/watch?v=Z6S6zQQ8sCQ

    Using ChatGPT’s image tool

    To participate, users must access ChatGPT’s image generation tool, available with GPT-4o. The process begins by uploading a high-resolution photo – preferably full-body – and supplying a prompt that describes the desired figurine.

    To improve accuracy, prompts usually include:

    • A theme (e.g., office, workout, fantasy)
    • Instructions for how the figure should be posed
    • Details about clothing, mood, or accessories
    • A note to include these elements inside a moulded box layout

    Reiterating the intended theme helps ensure consistent results. While many focus on work-related personas, the style is flexible – some choose gym-themed versions, others opt for more humorous or fictional spins.
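    The prompt elements above can be sketched as a simple template. Everything here – the field names, the wording, the example values – is an illustrative assumption, not a prescribed ChatGPT format; the point is simply that a structured prompt covers theme, pose, accessories, and packaging in one pass, and repeats the theme for consistency.

```python
# Hypothetical prompt template for the action-figure trend.
# Field names and wording are illustrative assumptions, not an official format.

def build_figure_prompt(name, theme, pose, accessories, box_colour, slogan):
    """Assemble a text prompt covering the details the article lists:
    theme, pose, accessories, and the moulded box layout."""
    accessory_list = ", ".join(accessories)
    return (
        f"Create an image of me as a boxed action figure in a plastic "
        f"blister pack. Theme: {theme}. Pose: {pose}. "
        f"Accessories moulded into the box: {accessory_list}. "
        f"Box colour: {box_colour}, with the name '{name}' and the "
        f"slogan '{slogan}' printed on the packaging. "
        # Restating the theme at the end mirrors the advice above.
        f"Keep the overall look consistent with the {theme} theme."
    )

prompt = build_figure_prompt(
    name="Alex",                 # hypothetical example values
    theme="office",
    pose="waving",
    accessories=["laptop", "coffee mug", "notebook"],
    box_colour="pink",
    slogan="Powered by deadlines",
)
print(prompt)
```

    From here, users would paste the assembled prompt into ChatGPT alongside an uploaded photo, then tweak individual fields between iterations rather than rewriting the whole prompt.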

    Behind the spike in image activity

    ChatGPT’s image generation tool launched widely in early 2025, and its use quickly surged. According to OpenAI CEO Sam Altman, the demand became so intense that GPU capacity was stretched thin, prompting a temporary cap on image generation for free users. Altman described the load as “biblical demand” in a social media post, noting that the feature had drawn more than 150 million active users in its first month. The tool’s ability to generate everything from cartoons to logos – and now custom action figures – has played a central role in how users explore visual identity through AI.

    The post First it was Ghibli, now it’s the AI Barbie Box trend appeared first on TechWire Asia.

    Google introduces Ironwood TPU to power large-scale AI inference
    https://techwireasia.com/2025/04/google-introduces-ironwood-tpu-to-power-large-scale-ai-inference/
    Thu, 10 Apr 2025 09:59:56 +0000

  • Google’s Ironwood TPU is purpose-built for AI inference.
  • Designed to support high-demand applications like LLMs and MoE models.

    Google has introduced Ironwood, its seventh-generation Tensor Processing Unit (TPU), at Google Cloud Next 2025. The processor is specifically designed to support large-scale inference workloads.

    The chip marks a shift in focus from training to inference, reflecting broader changes in how AI models are used in production environments. TPUs have been a core part of Google’s infrastructure for several years, powering internal services and customer applications. Ironwood continues that line with enhancements for the next wave of AI applications – including large language models (LLMs), Mixture of Experts (MoE) models, and other compute-intensive tools that require real-time responsiveness and scalability.

    Inference takes centre stage

    Ironwood is designed to support what Google calls the “age of inference,” in which AI systems interpret and generate insights actively, rather than just responding to inputs. The shift is reshaping how AI models are deployed, particularly in business use, where continuous, low-latency performance is important.

    Ironwood brings a number of architectural upgrades: each chip provides 4,614 teraflops at peak performance, supported by 192GB of high-bandwidth memory and up to 7.2 terabytes per second of memory bandwidth – significantly more than in previous TPUs.

    The expanded memory and throughput are to support models requiring rapid access to large datasets, like those used in search, recommendation systems, and scientific computing.

    Ironwood also features an improved version of SparseCore, a component aimed at accelerating ultra-large embedding models that are often used in ranking and personalisation tasks.

    Scale and connectivity

    Ironwood’s scalability means it can be deployed in configurations from 256 to 9,216 chips in a single pod. At full scale, a pod delivers 42.5 exaflops of compute, making it more than 24 times more powerful than the El Capitan supercomputer, which tops out at 1.7 exaflops.
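    Taking the article’s figures at face value, the pod-level claim follows directly from the per-chip number – a quick arithmetic sanity check, not an official benchmark:

```python
# Sanity check on the quoted Ironwood figures (all numbers from the article).
chips_per_pod = 9216
teraflops_per_chip = 4614

pod_teraflops = chips_per_pod * teraflops_per_chip  # 42,522,624 TFLOPs
pod_exaflops = pod_teraflops / 1e6                  # 1 exaflop = 1e6 teraflops
print(round(pod_exaflops, 1))                       # 42.5, matching the quoted pod figure

# Comparison with El Capitan's quoted 1.7 exaflops:
el_capitan_exaflops = 1.7
ratio = pod_exaflops / el_capitan_exaflops
print(round(ratio))                                 # about 25, i.e. "more than 24 times"
```

    One caveat: exaflop comparisons across systems often mix number formats (TPU figures are typically lower-precision than supercomputer benchmarks), so the ratio is indicative rather than apples-to-apples.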

    To support this level of distributed computing, Ironwood includes a new version of Google’s Inter-Chip Interconnect, which can communicate bidirectionally at 1.2 terabits per second. This helps reduce bottlenecks so data can move more efficiently across thousands of chips during training or inference. Ironwood is integrated with Pathways, Google’s distributed machine learning runtime developed by DeepMind. Pathways allows workloads to run on multiple pods, letting developers orchestrate tens or hundreds of thousands of chips for a single model or application.

    Efficiency and sustainability

    Power efficiency metrics show that Ironwood delivers twice the performance per watt of its predecessor, Trillium, and can maintain high output under sustained workloads. The TPU has a liquid-based cooling system and, according to Google, is nearly 30 times more power-efficient than the first Cloud TPU introduced in 2018. The emphasis on energy efficiency reflects growing concerns about the environmental impact of large-scale AI infrastructure, particularly as demand continues to grow.

    Supporting real-world applications

    Ironwood’s architecture supports “thinking models,” which are used increasingly in real-time applications like chat interfaces and autonomous systems. The TPU’s capabilities also offer the potential for use in finance, logistics, and bio-informatics workloads, which require fast, large-scale computations. Google has integrated Ironwood into its Cloud AI Hypercomputer strategy, which combines custom hardware and tools like Vertex AI.

    What comes next

    Google plans to make Ironwood publicly available later this year to support workloads like Gemini 2.5 and AlphaFold, and the unit is expected to be used in research and production environments that demand large-scale distributed inference.

    The post Google introduces Ironwood TPU to power large-scale AI inference appeared first on TechWire Asia.

    DeepSeek’s new technology makes AI actually understand what you’re asking for
    https://techwireasia.com/2025/04/deepseeks-new-technology-makes-ai-actually-understand-what-youre-asking-for/
    Wed, 09 Apr 2025 08:26:44 +0000

  • DeepSeek’s AI feedback systems help make AI understand what humans want.
  • Method allows smaller AI models to perform as well as larger cousins.
  • Potential to reduce cost of training.

    Chinese AI company DeepSeek has developed a new approach to AI feedback systems that could transform how artificial intelligence learns from human preferences.

    Working with Tsinghua University researchers, DeepSeek’s innovation tackles one of the most persistent challenges in AI development: teaching machines to understand what humans genuinely want from them. The breakthrough is detailed in a research paper, “Inference-Time Scaling for Generalist Reward Modeling,” which introduces a technique that makes AI responses more accurate and efficient – a win-win in an AI world where better performance typically demands more computing power.

    Teaching AI to understand human preferences

    At the heart of DeepSeek’s innovation is a new approach to what experts call “reward models” – essentially the feedback mechanisms that guide how AI systems learn. Think of reward models as digital teachers. When an AI responds, models provide feedback on how good that response was, helping the AI improve over time. The problem has always been how to create reward models that accurately reflect human preferences across many different types of questions. DeepSeek’s solution combines two techniques:

    1. Generative Reward Modeling (GRM): Uses language to represent rewards, providing richer feedback than previous methods that relied on simple numerical scores.
    2. Self-Principled Critique Tuning (SPCT): Allows the AI to adaptively generate its guiding principles and critiques through online reinforcement learning.

    Zijun Liu, a researcher from Tsinghua University and DeepSeek-AI who co-authored the paper, explains that this combination allows “principles to be generated based on the input query and responses, adaptively aligning reward generation process.”

    Doing more with less

    What makes DeepSeek’s approach particularly valuable is “inference-time scaling.” Rather than requiring more computing power during the training phase, the method allows for performance improvements when the AI is used – the ‘point of inference’.

    The researchers demonstrated that their method achieves better results with increased sampling during inference, potentially allowing smaller models to match the performance of much larger ones. The efficiency breakthrough comes at an important moment in AI development, when the relentless push for larger models raises concerns about sustainability, supply chain viability, and accessibility.
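    The idea can be made concrete with a toy sketch: sample several independent critique-style judgements of a response and aggregate them by majority vote. To be clear, the “critic” below is a stand-in for a generative reward model, and every name and threshold is an illustrative assumption – this is not DeepSeek’s implementation, only the shape of the inference-time scaling argument.

```python
# Toy sketch of inference-time scaling for reward modelling: an imperfect
# critic is sampled k times and its verdicts are aggregated by majority vote.
# The critic, its error rate, and all names here are illustrative assumptions.
import random

def majority(votes):
    """Aggregate sampled 0/1 verdicts into a single reward signal."""
    return int(sum(votes) > len(votes) / 2)

def sample_judgement(response, rng):
    """Stand-in for one generative-reward-model pass. In the paper this step
    generates a guiding principle and a textual critique, then extracts a
    verdict; here we fake a critic that misjudges roughly 20% of the time."""
    correct = int("answer" in response)   # crude stand-in for a real judgement
    misjudge = rng.random() < 0.2
    return correct ^ int(misjudge)

def scaled_reward(response, k, seed=0):
    """Spending more samples (larger k) at inference time makes the
    aggregated reward more reliable -- without retraining anything."""
    rng = random.Random(seed)
    return majority([sample_judgement(response, rng) for _ in range(k)])

good = "Here is a direct answer to your question."
print(scaled_reward(good, k=1))   # a single pass can misjudge on an unlucky sample
print(scaled_reward(good, k=32))  # voting over 32 passes is far more robust
```

    The design point is that the extra compute is spent per query, at inference, so a smaller model with a good sampling budget can approach the judgement quality of a larger one.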

    What this means for the future of AI

    DeepSeek’s innovation in AI feedback systems could have far-reaching implications:

    • More accurate AI responses: Better reward models mean AI systems receive more precise feedback, improving outputs over time.
    • Adaptable performance: The ability to scale performance during inference allows AI systems to adjust to different computational constraints.
    • Broader capabilities: AI systems can perform better across many tasks by improving reward modelling for general domains.
    • Democratising AI development: If smaller models can achieve similar results to larger models via better inference methods, AI research could become more accessible to those with limited resources.

    DeepSeek’s rising influence

    The latest advance adds to DeepSeek’s growing reputation in the AI field. Although founded only in 2023 by entrepreneur Liang Wenfeng, the Hangzhou-based company has made an impact with the V3 foundation model and R1 reasoning model. The company recently upgraded its V3 model (DeepSeek-V3-0324), which it said offered “enhanced reasoning capabilities, optimised front-end web development and upgraded Chinese writing proficiency.”

    DeepSeek has also committed to open-sourcing its AI technology, opening five public code repositories in February that allow developers to review and contribute to its software development.

    According to the research paper, DeepSeek intends to make its GRM models open-source, although no specific timeline has been provided. Its decision could accelerate progress in the field by allowing broader experimentation with this type of advanced AI feedback system.

    Beyond bigger is better

    As AI continues to evolve rapidly, DeepSeek’s work demonstrates that innovations in how models learn can be just as important as increasing their size. By focusing on the quality and scalability of feedback, DeepSeek addresses one of the central challenges in creating AI that better understands and aligns with human preferences.

    This possible breakthrough in AI feedback systems suggests the future of artificial intelligence may depend not just on raw computing power but on more intelligent and efficient methods that better capture the nuances of human preferences.

    The post DeepSeek’s new technology makes AI actually understand what you’re asking for appeared first on TechWire Asia.

    Viral Ghibli feature drives ChatGPT surge — What you should know before uploading photos
    https://techwireasia.com/2025/04/viral-ghibli-feature-drives-chatgpt-surge/
    Tue, 08 Apr 2025 13:04:25 +0000

  • Ghibli-style art pushes ChatGPT’s activity to new highs.
  • OpenAI says it is working to scale capacity for GPT-4o image tools.

    ChatGPT’s internet traffic has skyrocketed due to a spike in interest in AI-generated images styled after Studio Ghibli animations.

    OpenAI noticed a large increase in engagement following the release of its image-generation tool, which enables users to create artwork reminiscent of classic titles like Spirited Away and My Neighbor Totoro. Data from Similarweb shows that weekly active users passed 150 million for the first time this year.

    OpenAI CEO Sam Altman said on social media that the platform added one million users in a single hour – surpassing previous growth records. SensorTower reported that downloads and revenue through the ChatGPT app also increased. Weekly downloads rose by 11%, active users by 5%, and in-app purchase revenue by 6% compared to the previous month.

    The rapid increase in use put pressure on the platform’s infrastructure. Users reported slowdowns and brief outages, forcing Altman to caution that future features may face delays while OpenAI manages capacity.

    ChatGPT’s weekly average users hit record high (Source – Similarweb)

    Legal and copyright concerns with the ChatGPT x Ghibli

    The viral trend has prompted discussion around copyright. Some legal experts have raised questions about whether closely replicating distinctive animation styles could cross into infringement.

    “The legal landscape of AI-generated images mimicking Studio Ghibli’s distinctive style is an uncertain terrain. Copyright law has generally protected only specific expressions rather than artistic styles themselves,” said Evan Brown, a partner at law firm Neal & McDevitt.

    OpenAI did not respond to questions about how its models were trained or whether copyrighted materials influenced its image generator. Studio Ghibli has not issued a formal statement, but commentary from its co-founders has resurfaced.

    Hayao Miyazaki’s 2016 reaction to an early AI-generated image drew attention last week. In a widely circulated video, he described the technology as “an insult to life itself.” The full clip shows him responding specifically to a zombie-like AI render, which he called “extremely unpleasant.”

    In a recent interview, Studio Ghibli’s managing director Goro Miyazaki acknowledged the growing capabilities of AI. He claimed that AI-generated films could become a reality in the coming years, but questioned whether audiences would embrace them. He also acknowledged that while new technology could lead to new creative voices, it may be difficult to replicate the sensibilities of previous generations. “Nowadays, the world is full of opportunities to watch anything, anytime, anywhere,” he said, suggesting that younger artists may not share the same experiences that shaped Ghibli’s earlier works.

    Studio concerns and industry shifts

    Japan faces a shortage of trained animators, in part due to long hours and low wages in the industry. Goro noted that Gen Z may be less inclined to pursue the traditionally labour-intensive career path of hand-drawn animation.

    AI tools are emerging as a faster, lower-cost route to visual storytelling. Studio Ghibli’s legacy includes a number of films that blend fantastical themes with personal and historical reflections. Miyazaki’s latest work, The Boy and the Heron, earned an Academy Award and may be his final project. Goro has contributed his own directorial efforts, including Tales from Earthsea and From Up on Poppy Hill, and helped develop the Ghibli Museum and Ghibli Park.

    User privacy and data security

    As more users upload personal images to generate stylised portraits, privacy advocates are raising concerns about how that data is collected and used. “When you upload a photo to an AI art generator, you’re giving away your biometric data (your face). Some AI tools store that data, use it to train future models, or even sell it to third parties – none of which you may be fully aware of unless you read the fine print,” said Christoph C. Cemper, founder of AIPRM.

    OpenAI’s privacy policy confirms the platform collects user-provided and automatically generated data, including images. Unless users opt out or request data deletion, content may be retained and used to train future models.

    Cemper said that uploaded images could be misused. Personal data may appear in public datasets, like LAION-5B, which has been linked to the training of tools like Stable Diffusion and Google Imagen. One reported case involved a user finding private medical images in a public dataset. Cemper said that AI-generated content has already been used to produce fabricated documents and images, adding that deepfake risks are increasing. “There are too many real-world verification flows that rely on ‘real images’ as proof. That era is over,” one user wrote on social media.

    Navigating licensing and user rights between ChatGPT and Ghibli

    Cemper urged users to be aware of broad licensing terms buried in AI platform policies. Terms like “non-exclusive,” “royalty-free,” and “irrevocable license” can give platforms broad rights over uploaded content. The rights may extend even after the user stops using the service.

    Creating AI art in the style of well-known brands could also present legal challenges. Artistic styles like those of Studio Ghibli, Disney, and Pixar are closely associated with their original creators, and mimicking them may fall under derivative work protections.

    In late 2022, several artists filed lawsuits against AI firms, alleging their work was used without permission to train image generators. The ongoing legal challenges highlight the tension between creative freedom and intellectual property rights.

    Cemper added: “The rollout of ChatGPT’s 4o image generator shows just how powerful AI has become as it replicates iconic artistic styles with just a few clicks. But this unprecedented capability comes with a growing risk – the lines between creativity and copyright infringement are increasingly blurred, and the risk of unintentionally violating intellectual property laws continues to grow. While these trends may seem harmless, creators must be aware that what may appear as a fun experiment could easily cross into legal territory.

    “The rapid pace of AI development also raises significant concerns about privacy and data security. With more users engaging with AI tools, there’s a pressing need for clearer, more transparent privacy policies. Users should be empowered to make informed decisions about uploading their photos or personal data – especially when they may not realise how their information is being stored, shared, or used.”

    The post Viral Ghibli feature drives ChatGPT surge—What you should know before uploading photos appeared first on TechWire Asia.

    Microsoft pauses data centre investment in Indonesia, US, and UK
    https://techwireasia.com/2025/04/microsoft-pauses-key-builds-in-indonesia-us-and-uk-amid-infrastructure-review/
    Fri, 04 Apr 2025 09:04:45 +0000

  • Microsoft pauses or delays data centre projects in the UK, US, and Indonesia.
  • Rivals Oracle and OpenAI ramp up investments.

    Microsoft is scaling back or delaying data centre developments in several countries, including Indonesia, the UK, Australia, and certain US states, as it reassesses its strategy.

    According to individuals familiar with the matter, ongoing talks and planned builds have been paused in North Dakota, Illinois, Wisconsin, the UK midlands and Jakarta, Indonesia. The pullback comes amid questions about whether expected demand for AI services can support the pace and cost of Microsoft’s global data centre expansion.

    Microsoft has acknowledged changing its strategy but declined to provide details about specific projects. “We plan our data centre capacity needs years in advance to ensure we have sufficient infrastructure in the right places,” a Microsoft spokesperson said. “As AI demand continues to grow, and our data centre presence continues to expand, the changes we have made demonstrates the flexibility of our strategy.”

    Some of the shelved plans include a site near Chicago, and a proposed lease near Cambridge in the UK for a facility to host Nvidia hardware. Microsoft has also paused work at a site in Mount Pleasant, Wisconsin, where development has already cost US$262 million, according to documents reviewed by Bloomberg.

    In Jakarta, parts of a data centre campus have been placed on hold. Elsewhere, Microsoft has walked away from a proposal to acquire more capacity from cloud infrastructure company CoreWeave. CoreWeave’s CEO Michael Intrator confirmed the decision, but did not specify which locations were affected.

    In other cases, negotiations have slowed rather than stopped. At a server farm in North Dakota originally earmarked for Microsoft, discussions stalled until an exclusivity clause lapsed. Applied Digital, the data centre operator, has since found other tenants and secured funding to proceed with development.

    At Ada Infrastructure’s Docklands site in London, Microsoft was in talks to lease 210 megawatts of capacity, but is holding off on committing. The site is now being shown to other potential tenants, according to sources familiar with the matter.

    Microsoft says it remains committed to key projects, which include a US$3.3 billion facility in Wisconsin and the launch of the Indonesia Central cloud region in mid-2025. It has maintained that it will spend roughly US$80 billion on data centre buildouts in its current fiscal year but signalled a shift in its next fiscal year toward equipping existing sites rather than construction of new data centres.

    While Microsoft is re-evaluating, other firms are pressing on with large-scale infrastructure. OpenAI, Oracle, and SoftBank have announced joint venture Stargate, which aims to invest up to US$500 billion in AI infrastructure in the US. Stargate’s first phase includes a US$100 billion deployment in Texas, intended to support large-scale AI development.

    The contrast in strategy between competing hyperscalers has drawn attention from investors and analysts. TD Cowen reported that Microsoft has abandoned projects amounting to two gigawatts of electricity capacity across the US and Europe. The firm suggested this may indicate a mismatch between expected demand and Microsoft’s existing capacity. Analysts also speculated that OpenAI may be shifting workloads from Microsoft to Oracle.

    The change in infrastructure strategy is also being influenced by developments in the technology. Chinese AI firm DeepSeek claims it can deliver competitive AI performance using fewer resources, raising the possibility that future AI systems may require less computing power than originally anticipated.

    At the same time, Microsoft’s adjustments may reflect external constraints. In cities like Dublin and Amsterdam, data centre growth has been met with tighter regulation due to concerns over electricity consumption and environmental sustainability. Dublin has limited new grid connections for data centres, while Amsterdam previously paused all new development to address strain on local resources.

    Industry observers say hyperscalers are increasingly shifting focus to projects that can deliver results more quickly and cost-effectively. “You may have initially thought one data centre project would be the fastest speed to market, but then you realise that the labour, supply chain and power delivery wasn’t as quick as you thought,” said Ed Socia, director at datacentreHawk. “Then you would have to shift in the short term to focus on other markets.”

    CoreWeave’s Michael Intrator said that Microsoft’s retreat appears to be specific to its situation. “It’s pretty localised, and their relationship with OpenAI has just changed,” he said.

    The post Microsoft pauses data centre investment in Indonesia, US, and UK appeared first on TechWire Asia.

    Google warns of North Korean freelancers targeting European firms
    https://techwireasia.com/2025/04/google-warns-of-north-korean-freelancers-targeting-european-firms/
    Fri, 04 Apr 2025 02:04:45 +0000

    The post Google warns of North Korean freelancers targeting European firms appeared first on TechWire Asia.

  • North Korean IT workers are increasingly targeting companies in Europe.
  • Google Threat Intelligence Group reports that this shift follows tighter enforcement in the US.

    A growing number of North Korean IT workers are posing as remote freelancers from other countries in an effort to gain access to companies in Europe, raising concerns about potential espionage, data theft, and operational disruption.

    According to Google’s Threat Intelligence Group (GTIG), these workers—who refer to themselves as “warriors”—are securing remote roles with foreign organisations to generate revenue for the Democratic People’s Republic of Korea (DPRK). The activity, previously concentrated in the United States, is now increasingly being observed in European countries such as Germany, the United Kingdom, and Portugal.

    Since GTIG’s last report on DPRK IT worker activity, recent crackdowns in the US have made it more difficult for these individuals to secure and maintain employment there. According to a blog post by Jamie Collier, lead adviser for Europe at Google’s Threat Intelligence Group, GTIG has observed a rise in operations globally, with particular growth in Europe over the past few months.

    North Korea increases IT worker operations globally (Source – Google)

    The workers often misrepresent their nationalities, claiming to be from countries such as Italy, Japan, Malaysia, Singapore, Ukraine, the United States, and Vietnam. They find jobs through freelance platforms like Upwork and Freelancer, as well as communication channels such as Telegram. Payments are typically made in cryptocurrency or through digital payment platforms including Wise and Payoneer.

    Upwork provided a statement following publication, clarifying it did not receive the initial request for comment. The company said:

    “Fraud prevention and compliance with US and international sanctions are critical priorities for Upwork. The tactics outlined in this report represent a challenge that affects the entire online work industry, and Upwork is at the forefront of combating these threats. Any attempt to use a false identity, misrepresent location, or take advantage of Upwork customers is a strict violation of our terms of use, and we take aggressive action to detect, block, and remove bad actors from our platform.

    Upwork has long invested in industry-leading security and identity verification measures, deploying advanced technology alongside a dedicated team of global professionals across legal, investigations, intelligence, identity risk management, compliance, anti-money laundering, and machine learning detection. These experts work relentlessly to prevent fraudulent activity before it reaches our customers, and quickly respond to new methodologies and trends.

    As fraud tactics evolve, Upwork continuously enhances its proactive screening for attempts to bypass geographic restrictions, monitoring for signs of misrepresentation both before and after contracts begin. Our sophisticated detection tools, paired with strong partnerships with law enforcement and regulatory bodies, enable us to take swift and decisive action when fraudulent behaviour is identified.

    While no online platform is immune to fraud, Upwork is setting the standard for trust and safety in the industry. We will continue to invest in cutting-edge fraud prevention measures and vendor solutions, collaborate with industry stakeholders, and innovate to protect our customers and uphold the integrity of our marketplace.”

    Freelancer, Telegram, Wise, and Payoneer did not respond to requests for comment.

    GTIG reports that since October, there has been an uptick in cases where previously terminated workers attempt to extort their former employers by threatening to leak sensitive company information to competitors. Collier suggested that mounting pressure on these workers may be pushing them toward more aggressive tactics to maintain income.

    One case in late 2024 involved a North Korean individual operating under at least 12 separate identities while applying to organisations in the defence and public sectors, reportedly using false references. In the UK, North Korean IT workers have been linked to work ranging from standard web development to more advanced projects in blockchain and artificial intelligence.

    Google’s research points to risks associated with bring-your-own-device (BYOD) policies, where employees use personal devices to access internal systems. These setups often lack proper security oversight, making it more difficult to detect unauthorised access.

    Authorities in the US and UK have issued multiple warnings about these activities. The FBI has advised firms to improve identity verification practices, while the US Treasury in January sanctioned two individuals and four entities accused of generating revenue for the North Korean government. Officials allege the regime withholds up to 90% of wages earned by these workers.

    In a separate legal action, a US federal court in Missouri indicted 14 North Korean nationals in December for allegedly participating in an employment scheme that generated US$88 million over six years. Some of these individuals were reportedly employed by US firms for extended periods, earning hundreds of thousands of dollars without detection.

    The UK’s Office of Financial Sanctions Implementation has also responded. In September, it recommended employers implement stricter identity checks, including video interviews, and advised against using cryptocurrency for payments.

    Collier noted that North Korea has a long history of engaging in cyber operations to fund its regime. “A decade of diverse cyberattacks (encompassing SWIFT targeting, ransomware, cryptocurrency theft, and supply chain compromise), precedes North Korea’s latest surge,” he wrote.

    “This relentless innovation demonstrates a longstanding commitment to fund the regime through cyber operations. Given DPRK IT workers’ operational success, North Korea will likely broaden its global reach. With APAC already impacted by these operations, this problem is set to escalate. These campaigns thrive on ignorance and will likely enjoy particular success in areas of APAC with less awareness of the threat.”

    The post Google warns of North Korean freelancers targeting European firms appeared first on TechWire Asia.

    Ant Group develops AI models using Chinese chips to lower training costs https://techwireasia.com/2025/04/ant-group-develops-ai-models-using-chinese-chips-to-lower-training-costs/ Wed, 02 Apr 2025 09:12:52 +0000 https://techwireasia.com/?p=241645

  • Ant Group uses Chinese chips and MoE models to cut AI training costs and reduce reliance on Nvidia.
  • Releases open-source AI models, claiming strong benchmark results with domestic hardware.
    Ant Group, a Chinese affiliate of Alibaba, is exploring new ways to train LLMs and reduce dependency on advanced foreign semiconductors.

    According to people familiar with the matter, the company has been using domestically-made chips – including those supplied by Alibaba and Huawei – to support the development of cost-efficient AI models through a method known as Mixture of Experts (MoE).

    The results have reportedly been on par with models trained using Nvidia’s H800 GPUs, which are among the more powerful chips currently restricted from export to China. While Ant continues to use Nvidia hardware for certain AI tasks, sources said the company is shifting toward other options, like processors from AMD and Chinese alternatives, for its latest development work.

    The strategy reflects a broader trend among Chinese firms looking to adapt to ongoing export controls by optimising performance with locally available technology.

    The MoE approach has grown in popularity in the industry, particularly for its ability to scale AI models more efficiently. Rather than processing all data through a single large model, MoE structures divide tasks into smaller segments handled by different specialised “experts.” The division helps reduce the computing load and allows for better resource management.
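    The routing idea described above can be sketched in a few lines of Python. This is a toy illustration of top-k expert routing only, not code from Ant’s paper: the gating scores and the “experts” here are hypothetical stand-ins (a real MoE layer uses a learned gating network inside a transformer).

```python
# Toy illustration of Mixture-of-Experts (MoE) routing. The gating function
# and "experts" are hypothetical stand-ins, not Ant Group's implementation.

def gate(token, num_experts):
    # Hypothetical deterministic scoring; a real MoE uses a learned gating network.
    return [((len(token) * 31 + e * 17) % 100) / 100 for e in range(num_experts)]

def moe_forward(token, experts, top_k=2):
    """Route a token to only its top-k experts instead of every expert."""
    scores = gate(token, len(experts))
    ranked = sorted(range(len(experts)), key=lambda e: scores[e], reverse=True)
    active = ranked[:top_k]  # the remaining experts stay idle, saving compute
    total = sum(scores[e] for e in active)
    # Output is the score-weighted sum of the active experts' outputs.
    return sum(scores[e] / total * experts[e](token) for e in active)

# Eight tiny "experts"; each stands in for a specialised feed-forward block.
experts = [lambda tok, i=i: len(tok) + i for i in range(8)]
output = moe_forward("hello", experts, top_k=2)  # only 2 of 8 experts run
```

    Because only `top_k` experts run per token, a model can carry a very large total parameter count while the per-token compute — and therefore the hardware demand — stays much smaller, which is the property cost-focused training efforts reportedly exploit.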

    Google and China-based startup DeepSeek have also applied the method, seeing similar gains in training speed and cost-efficiency.

    Ant’s latest research paper, published this month, outlines how the company has been working to lower training expenses by not relying on high-end GPUs. The paper claims the optimised method can reduce the cost of training 1 trillion tokens from around 6.35 million yuan (approximately US$880,000) using high-performance chips to 5.1 million yuan using less advanced, more readily available hardware. Tokens are the pieces of information an AI model processes during training to learn the patterns it needs to generate text or complete tasks.
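    As a back-of-envelope check on the figures cited above (taken as reported; the dollar conversion is the article’s own):

```python
# Reported cost of training 1 trillion tokens, per the paper as cited above.
cost_high_end_yuan = 6_350_000   # ~US$880,000 using high-performance chips
cost_domestic_yuan = 5_100_000   # using less advanced, more available hardware

saving_yuan = cost_high_end_yuan - cost_domestic_yuan
saving_pct = 100 * saving_yuan / cost_high_end_yuan
# Roughly 1.25 million yuan saved, a cut of about 20% per trillion tokens.
```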

    According to the paper, Ant has developed two new models – Ling-Plus and Ling-Lite – which it now plans to offer in various industrial sectors, including finance and healthcare. The company recently acquired Haodf.com, an online medical services platform, as part of its broader push for AI-driven healthcare services. It also runs the AI life assistant app Zhixiaobao and a financial advisory platform known as Maxiaocai.

    Ling-Plus and Ling-Lite have been open-sourced, with the former consisting of 290 billion parameters and the latter 16.8 billion. Parameters in AI are tunable elements that influence a model’s performance and output. While these numbers are smaller than the parameter count anticipated for advanced models like OpenAI’s GPT-4.5 (around 1.8 trillion), Ant’s offerings are nonetheless regarded as sizeable by industry standards.

    For comparison, DeepSeek-R1, a competing model also developed in China, contains 671 billion parameters.

    In benchmark tests, Ant’s models were said to perform competitively. Ling-Lite outpaced a version of Meta’s Llama model in English-language understanding, while both Ling models outperformed DeepSeek’s offerings on Chinese-language evaluations. The claims, however, have not been independently verified.

    The paper also highlighted some technical challenges the organisation faced during model training. Even minor adjustments to the hardware or model architecture resulted in instability, including sharp increases in error rates. These issues illustrate the difficulty of maintaining model performance while shifting away from high-end GPUs that have become the standard in large-scale AI development.

    Ant’s research reflects a wider effort among Chinese companies to achieve greater technological self-reliance. With US export restrictions limiting access to Nvidia’s most advanced chips, companies like Ant are seeking ways to build competitive AI tools using alternative resources.

    Although Nvidia’s H800 chip is not the most powerful in its lineup, it remains one of the most capable processors available to Chinese buyers. Ant’s ability to train models of comparable quality without such hardware signals a potential path forward for companies affected by trade controls.

    At the same time, the broader industry dynamics continue to evolve. Nvidia CEO Jensen Huang has said that increasing computational needs will drive demand for more powerful chips, even as efficiency-focused models gain traction. Despite alternative strategies like those explored by Ant, his view suggests that advanced GPU development will continue to be prioritised.

    Ant’s effort to reduce costs and rely on domestic chips could influence how other firms approach AI training – especially in markets facing similar constraints. As China accelerates its push toward AI independence, developments like these are likely to draw attention across both the tech and financial landscapes.

    The post Ant Group develops AI models using Chinese chips to lower training costs appeared first on TechWire Asia.

    AI race intensifies: China narrows the gap https://techwireasia.com/2025/03/ai-race-intensifies-china-narrows-the-gap/ Thu, 27 Mar 2025 13:54:25 +0000 https://techwireasia.com/?p=241606

  • China is closing the gap with the US in AI technology advancements.
  • DeepSeek’s open-source models demonstrate improvements through algorithmic efficiency.
    The artificial intelligence race between China and the United States has entered a new phase as Chinese companies narrow the technology gap despite Western sanctions.

    According to Lee Kai-fu, CEO of Chinese startup 01.AI and former head of Google China, the gap in core technologies has shrunk from “six to nine months” to “probably three months,” with China actually pulling ahead in specific areas like infrastructure software engineering. The Chinese AI startup DeepSeek has become the epicentre of the intensifying technological rivalry.

    On January 20, 2025, while the world’s attention was fixed on Donald Trump’s inauguration, DeepSeek quietly launched its R1 model – a low-cost, open-source, high-performance large language model with capabilities reportedly rivalling or surpassing OpenAI’s ChatGPT-4, but at a fraction of the cost.

    “The fact that DeepSeek can figure out the chain of thought with a new way to do reinforcement learning is either catching up with the US, learning quickly, or maybe even more innovative now,” Lee told Reuters, referring to how DeepSeek models show users their reasoning process before delivering answers.

    Innovative efficiency: China’s response to chip sanctions

    DeepSeek’s achievement is particularly notable because it emerged despite US restrictions on advanced processor chip exports to China. Instead of being hampered by international limitations, Chinese companies have responded by optimising efficiency and compensating for lower-quality hardware with quantity.

    The adaptive approach was demonstrated further on March 25, 2025, when DeepSeek upgraded its V3 large language model. The new version, DeepSeek-V3-0324, features enhanced reasoning capabilities, optimised front-end web development, and upgraded Chinese writing proficiency. DeepSeek-V3-0324 significantly improved in several benchmark tests, especially in mathematics and coding. Häme University lecturer Kuittinen Petri highlighted the significance of these advancements, stating on social media:

    “DeepSeek is doing all this with just [roughly] 2% [of the] money resources of OpenAI.” He added that when he asked the new model to “create a great-looking responsive front page for an AI company,” it produced a mobile-friendly, properly functioning website after coding 958 lines.

    Global market implications

    The impact of China’s AI advances extends beyond technological achievement to financial markets. When DeepSeek launched its R1 model in January, America’s Nasdaq plunged 3.1%, while the S&P 500 fell 1.5%, demonstrating the wider economic significance of technological competition.

    The AI race presents opportunities and challenges for Asia and other regions. China’s low-cost, open-source model could help emerging economies develop AI innovation and entrepreneurship. It also pressures closed-source firms like OpenAI to reconsider their stance.

    Meanwhile, both superpowers are making massive investments in AI infrastructure. The Trump administration has unveiled the $500 billion Stargate Project, and China is projected to invest more than 10 trillion yuan (US$1.4 trillion) into technology by 2030.

    A double-edged sword for global technology

    The US-China tech rivalry risks deepening global divides, forcing nations to navigate growing complexities. Countries face difficult questions: How can they manage research partnerships with China without jeopardising collaboration with US institutions?

    How can nations reliant on Chinese materials and exports avoid Chinese technologies? South Korea, the world’s second-largest producer of semiconductors, wrestles with this dilemma. In 2023, it became more dependent on China for five of the six key raw materials needed for chip-making. Major firms like Toyota, SK Hynix, Samsung, and LG Chem remain vulnerable due to Chinese supply chain dominance. The climate implications of the AI race are also significant.

    According to the Institute for Progress, maintaining AI leadership will require the United States to build five-gigawatt clusters in the next five years. By 2030, data centres could consume 10% of US electricity, more than double the 4% recorded in 2023.

    The path forward

    As the AI landscape evolves, DeepSeek’s arrival has challenged the assumption that US sanctions were constraining China’s AI sector. Washington’s semiconductor sanctions have proven to be what Lee Kai-fu calls a “double-edged sword” that created short-term challenges and forced Chinese firms to innovate under constraints.

    The rapid development of Chinese AI has reignited debates over US chip export controls. Critics argue that the present restrictions have accelerated China’s domestic innovation, as evidenced by DeepSeek’s development and improving capabilities.

    China is demonstrating remarkable resilience and innovation in the face of restrictions. As DeepSeek prepares a potentially early launch of its R2 model, the technology gap continues to narrow.

    The post AI race intensifies: China narrows the gap appeared first on TechWire Asia.

    Nvidia chip crackdown: Malaysia under US pressure to stop AI reaching China https://techwireasia.com/2025/03/nvidia-chip-crackdown-malaysia-under-us-pressure-to-stop-ai-reaching-china/ Tue, 25 Mar 2025 15:29:21 +0000 https://techwireasia.com/?p=241587

  • Malaysia tightens semiconductor regulations amid Nvidia chip diversion to China.
  • $390 million fraud case in Singapore reveals vulnerabilities in SE Asia supply chain.
    The Nvidia chip crackdown in Malaysia is intensifying. The country is apparently facing mounting pressure from the United States to prevent advanced semiconductors from being diverted to China.

    Malaysia’s Trade Minister Zafrul Aziz has confirmed the Malaysian government plans to tighten regulations on semiconductor movements in response to specific US demands to monitor high-end Nvidia chips entering the country. “[The US is] asking us to make sure that we monitor every shipment that comes to Malaysia when it involves Nvidia chips,” Aziz told the Financial Times [paywall]. “They want us to ensure that servers end up in the data centres they’re supposed to and not suddenly move to another ship.”

    The minister has formed a special task force with Digital Minister Gobind Singh Deo to strengthen regulations around Malaysia’s rapidly growing data centre industry, which relies heavily on chips from industry leader Nvidia. The move comes amid heightened concerns in the US that Malaysia may be serving as a transit point for advanced AI chips ultimately destined for China, in violation of US export controls.

    Singapore fraud case highlights regional concerns

    The Malaysian moves follow closely on the heels of a major fraud investigation in neighbouring Singapore, where authorities have charged three individuals – two Singaporeans and one Chinese national – over trades in hardware servers allegedly worth approximately $390 million.

    During a press briefing in early March, Singapore’s Home Affairs Minister K Shanmugam stated that the servers in question “may contain Nvidia chips.” The case involves Dell and Supermicro servers imported from the US and subsequently sold to a company in Malaysia. “The question is whether Malaysia was a final destination or from Malaysia it went somewhere else, which we do not know for certain at this point,” Shanmugam said, adding that the Singaporean government had requested assistance from both the US and Malaysian authorities in its investigation.

    Two of the individuals charged – Alan Wei Zhaolun, 48, and Aaron Woon Guo Jie, 40 – hold senior positions at Aperia Cloud Services as CEO and COO respectively. According to its website, Aperia claims to be “Nvidia’s first qualified Nvidia Cloud Partner in Southeast Asia,” with “priority access to the highest-performing [graphics processing units] available in the market.” The third individual, a 51-year-old Chinese national named Li Miang, is accused of fraudulently claiming that the end user of items he purchased was a Singaporean computer equipment sales company, Luxuriate Your Life.

    US export controls on Nvidia chip and regional impact

    The increased scrutiny stems from broader US efforts to obstruct China’s development of advanced technologies, particularly AI with potential military applications. In January 2025, during the final days of the Biden administration, the US introduced a three-tier licensing system for AI chips designed for use in data centres, explicitly targeting Nvidia’s powerful graphics processing units (GPUs). The measures were designed to prevent Chinese companies from circumventing US restrictions by accessing restricted chips through third countries. The US is also investigating whether Chinese AI firm DeepSeek, which recently made headlines for its AI models’ performance, has been using banned US chips.

    Malaysia’s growing data centre industry

    Malaysia has emerged as one of the fastest-growing global data centre development markets, with much of this growth concentrated in the southern state of Johor. According to Zafrul, the state has attracted over $25 billion in investment from major technology companies, including Nvidia, Microsoft, and ByteDance (TikTok’s parent company) in the past 18 months alone. The country recently agreed to form a special economic zone with Singapore, further embedding it as a key player in regional technology infrastructure. However, with the growth comes an increased responsibility to ensure compliance with international export controls.

    Challenges in enforcement

    Minister Zafrul has acknowledged the significant challenges in tracking semiconductors through complex global supply chains. “The US is also putting much pressure on their own companies to be responsible for ensuring [chips] arrive at their rightful destination,” he said. “Everybody’s been asked to play a role throughout the supply chain.” He emphasised the difficulty of enforcement, stating plainly, “Enforcement might sound easy, but it’s not.”

    Nvidia’s global sales patterns underscore the challenge. The company generates nearly a quarter of its global sales through its Singapore office, raising concern in the US about potential hardware movements to China. Nvidia has maintained that almost all of these sales represent invoicing of international companies through Singapore, with very few chips physically passing through the city-state.

    Regional context and industry impact

    The focus on semiconductor flows in Southeast Asia represents one aspect of broader technology trade restrictions in place. In a parallel development, the European Union recently sanctioned Splendent Technologies, a Singaporean chip distributor, as part of measures targeting companies allegedly helping Russia’s defence sector.

    Balancing economic development with regulatory compliance presents a practical challenge for Malaysia. The country’s efforts to strengthen monitoring systems must address complex supply chains while supporting its growing position in the regional technology ecosystem. As Malaysia implements new oversight measures, technology companies operating in the region may face additional compliance requirements from Kuala Lumpur. However, the precise impact on the broader semiconductor industry will depend on the specific implementation approach and enforcement capacity.

    The post Nvidia chip crackdown: Malaysia under US pressure to stop AI reaching China appeared first on TechWire Asia.

    Is the US losing its edge in AI? https://techwireasia.com/2025/03/is-the-us-losing-its-edge-in-ai/ Mon, 24 Mar 2025 11:44:44 +0000 https://techwireasia.com/?p=241578

  • US AI firms warn America’s AI lead is shrinking to DeepSeek’s R1 and Ernie X1.
  • OpenAI and Anthropic cite national security risk from Chinese AI models.
    Major US artificial intelligence companies, including OpenAI, Anthropic, and Google, have expressed concern over China’s increasing abilities in AI development.

    In submissions to the US government, the companies have warned that America’s edge in AI is dwindling as Chinese models like DeepSeek R1 become more advanced. The submissions were filed in March 2025 in response to a government request for input on an AI Action Plan.

    China’s growing AI presence

    DeepSeek R1, the AI model from China, has drawn attention from US developers. OpenAI described DeepSeek as evidence that the technological gap between the US and China is closing, calling the company “state-subsidised, state-controlled, and freely available,” and expressing concerns about China’s ability to influence global AI development.

    OpenAI compared DeepSeek to Chinese telecommunications company Huawei, warning that Chinese regulations could allow the government to compel DeepSeek to compromise sensitive systems or important infrastructure.

    OpenAI also expressed worries about data privacy, pointing out that DeepSeek’s requirements for data-sharing with the Chinese government could strengthen the state’s surveillance abilities.

    Anthropic’s submission focused on biosecurity, noting that DeepSeek R1 “complied with answering most biological weaponisation questions, even when formulated with a clearly malicious intent.”

    The willingness to generate possibly dangerous information contrasts with the safety protocols the submissions describe as implemented in US-developed models.

    Competition goes beyond DeepSeek. Baidu, China’s largest search engine, recently launched Ernie X1 and Ernie 4.5, two new AI models designed to compete with leading Western systems. Ernie X1, a reasoning model, is said to match DeepSeek R1’s performance at half the cost. Meanwhile, Ernie 4.5 is priced at 1% of OpenAI’s GPT-4.5 and has outperformed it on certain benchmarks, according to Baidu.

    Both OpenAI and Anthropic framed the competition as ideological, describing it as a contest between “democratic AI” developed under Western principles and “authoritarian AI” shaped by state control. However, the recent success of Baidu and DeepSeek suggests that cost and accessibility may have a greater impact on global adoption than ideology.

    US AI security and infrastructure concerns

    The US companies’ submissions also raised concerns about security and infrastructure challenges linked to AI development. OpenAI’s submission focused on the dangers of Chinese state influence over models like DeepSeek, while Anthropic’s emphasised biosecurity concerns tied to AI capabilities. The company disclosed that its own Claude 3.7 Sonnet model demonstrated improved capabilities relevant to biological weapon development, highlighting the dual-use nature of advanced AI systems. Anthropic also pointed to gaps in US export controls.

    While Nvidia’s H20 chips comply with US export restrictions, they still perform well in text generation – a key factor in reinforcement learning. Anthropic urged the government to strengthen these controls to prevent China from gaining an advantage.

    Google’s submission took a more balanced approach, acknowledging security risks while warning against over-regulation. The company argued that strict export controls could harm US economic competitiveness by creating barriers for domestic cloud providers and AI developers. Google suggested targeted controls to protect national security without disrupting business operations.

    All three businesses stressed the need for improved government oversight of AI security. Anthropic called for expanding the AI Safety Institute and strengthening the National Institute of Standards and Technology (NIST) to assess and mitigate AI-related security threats.

    Economic competitiveness and energy needs

    The submissions also addressed the economic factors shaping AI development. Anthropic stressed infrastructure challenges, warning that by 2027, training a single advanced AI model could require five gigawatts of power. The company proposed building 50 gigawatts of AI-dedicated power capacity by 2027 and streamlining the approval process for power transmission lines.

    Baidu’s recent announcements have highlighted the importance of cost-effective AI development. Ernie 4.5 and X1 are reportedly available for a fraction of the cost of comparable Western models, with much lower token-processing fees than OpenAI’s current models. Such pricing strategies from Chinese models could pressure US developers to reduce costs to remain competitive. OpenAI portrayed the competition as an ideological contest between Western and Chinese models, arguing that a free-market strategy would result in more innovation and better outcomes for consumers.

    Google’s stance in the submissions was more concerned with practical policy recommendations. The company called for increased federal investment in AI research, improved access to government contracts, and streamlined export controls.

    Regulatory strategies

    A unified approach to AI regulation emerged as a consistent theme across all three submissions. OpenAI proposed a regulatory framework managed by the Department of Commerce, claiming that fragmented domestic state-level regulations could drive AI development overseas. The company supported a tiered export control framework that would allow broader access to US-developed AI in countries considered democratic while restricting access in authoritarian states. Anthropic called for stricter export controls on AI hardware and training data, warning that even marginal improvements in model performance could provide strategic advantages to China.

    Google’s submission focused on copyright and intellectual property rights. The company argued that its current ‘fair use’-based policies are essential for AI development, and warned that overly strict copyright rules could disadvantage US firms compared to Chinese competitors.

    All three companies emphasised the need for faster government adoption of AI. OpenAI suggested streamlining existing testing and procurement processes, joining Anthropic in advocating streamlined AI procurement by federal agencies. Google supported similar reforms, highlighting the importance of improved interoperability in government cloud infrastructure.

    Maintaining a competitive edge

    The submissions from OpenAI, Anthropic, and Google reflect a shared concern about maintaining US leadership in AI as competition from China intensifies. The rise of DeepSeek R1 and Baidu’s latest models points to a growing challenge not just in technological capability but also in cost and accessibility.

    As AI development accelerates, the balance between security, economic growth, and technological leadership will likely remain key policy challenges.


    The post Is the US losing its edge in AI? appeared first on TechWire Asia.
