Applications News Asia | Tech Wire Asia
Where technology and business intersect

First it was Ghibli, now it’s the AI Barbie Box trend
https://techwireasia.com/2025/04/first-it-was-ghibli-now-its-the-ai-barbie-box-trend/
Mon, 14 Apr 2025

  • Following the Ghibli portraits, the AI Barbie trend comes to LinkedIn.
  • Blending nostalgia with self-promotion, the trend draws brand interest but little celebrity uptake.

    After gaining attention with Studio Ghibli-style portraits, ChatGPT’s image generator is now powering a new wave of self-representation online – this time with users turning themselves into plastic action figures.

    What began as a quirky trend on LinkedIn has now spread to platforms like Instagram, TikTok, Facebook, and X. The trend includes different takes, but the “AI Action Figure” version is among the most common. It typically shows a person recreated as a doll encased in a plastic blister pack, often accessorised with work-related items like laptops, books, or coffee mugs. That’s fitting, considering the trend’s initial traction among professionals and marketers on LinkedIn.

    Other versions draw inspiration from more recognisable aesthetics, like the “Barbie Box Challenge,” where the AI-generated figure is styled to resemble a vintage Barbie.

    The rise of the virtual dolls follows the earlier success of the Studio Ghibli-style portraits, which pushed ChatGPT’s image capabilities into the spotlight. That earlier trend sparked some backlash related to environmental, copyright, and creative concerns – but so far, the doll-themed offshoot hasn’t drawn the same level of criticism.

    What’s notable about the trends is the consistent use of ChatGPT as the generator of choice. OpenAI’s recent launch of GPT-4o, which includes native image generation, attracted such a large volume of users that the firm had to temporarily limit image output and delay rollout for free-tier accounts.

    While the popularity of action figures hasn’t yet matched that of Ghibli portraits, it does highlight ChatGPT’s role in introducing image tools to a broader user base. Many of these doll images are shared by users with low engagement, and mostly in professional circles. Some brands, including Mac Cosmetics and NYX, have posted their own versions, but celebrities and influencers have largely stayed away. One notable exception is US Representative Marjorie Taylor Greene, who shared a version of herself with accessories including a Bible and a gavel, calling it “The Congresswoman MTG Starter Kit.”

    What the AI Barbie trend looks like

    The process involves uploading a photo into ChatGPT and prompting it to create a doll or action figure based on the image. Many users opt for the Barbie aesthetic, asking for stylised packaging and accessories that reflect their personal or professional identity. The final output often mimics retro Barbie ads from the 1990s or early 2000s. Participants typically specify details like:

    • The name to be displayed on the box
    • Accessories, like pets, smartphones, or coffee mugs
    • The desired pose, facial expression, or outfit
    • Packaging design elements like colour or slogans

    Users often iterate through several versions, adjusting prompts to better match their expectations. The theme can vary widely – from professional personas to hobbies or fictional characters – giving the trend a broad creative range.

    How the trend gained momentum

    The idea gained visibility in early 2025, beginning on LinkedIn where users embraced the “AI Action Figure” format. The Barbie-style makeover gained traction over time, tapping into a blend of nostalgia and visual novelty. Hashtags like #aibarbie and #BarbieBoxChallenge have helped to spread the concept. While the Barbie-inspired version has not gone as viral as the Ghibli-style portraits, it has maintained steady traction on social media, especially among users looking for lighthearted ways to express their personal branding.

    https://youtube.com/watch?v=Z6S6zQQ8sCQ

    Using ChatGPT’s image tool

    To participate, users must access ChatGPT’s image generation tool, available with GPT-4o. The process begins by uploading a high-resolution photo – preferably full-body – and supplying a prompt that describes the desired figurine.

    To improve accuracy, prompts usually include:

    • A theme (e.g., office, workout, fantasy)
    • Instructions for how the figure should be posed
    • Details about clothing, mood, or accessories
    • A note to include these elements inside a moulded box layout

    Reiterating the intended theme helps ensure consistent results. While many focus on work-related personas, the style is flexible – some choose gym-themed versions, others opt for more humorous or fictional spins.
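    For readers scripting these requests rather than typing them into the chat UI, the prompt elements above can be assembled programmatically. The sketch below is purely illustrative: the function name, field names, and template wording are assumptions, not an official OpenAI or "Barbie Box" format.

```python
def build_figure_prompt(name, theme, pose, accessories, box_colour, slogan):
    """Assemble an action-figure image prompt from the elements listed above.

    All field names and template wording here are illustrative assumptions,
    not an official prompt format.
    """
    accessory_list = ", ".join(accessories)
    return (
        f"Create an image of the uploaded person as a toy action figure "
        f"sealed in a moulded plastic blister pack. "
        f"Theme: {theme}. Pose: {pose}. "
        f"Box name label: '{name}'. "
        f"Accessories moulded beside the figure: {accessory_list}. "
        f"Packaging: {box_colour} backing card with the slogan '{slogan}', "
        f"styled like a retro 1990s toy advert."
    )

# Example usage: an office-themed figure with work accessories.
prompt = build_figure_prompt(
    name="The Tech Editor Starter Kit",
    theme="office",
    pose="standing, arms crossed, confident smile",
    accessories=["laptop", "coffee mug", "notebook"],
    box_colour="pink",
    slogan="Deadline? Done.",
)
print(prompt)
```

    A string like this would then be pasted into ChatGPT alongside the uploaded photo, with the theme repeated across iterations to keep results consistent, as the article notes.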

    Behind the spike in image activity

    ChatGPT’s image generation tool launched widely in early 2025, and its use quickly surged. According to OpenAI CEO Sam Altman, the demand became so intense that GPU capacity was stretched thin, prompting a temporary cap on image generation for free users. Altman described the load as “biblical demand” in a social media post, noting that the feature had drawn more than 150 million active users in its first month. The tool’s ability to generate everything from cartoons to logos – and now custom action figures – has played a central role in how users explore visual identity through AI.

    Viral Ghibli feature drives ChatGPT surge—What you should know before uploading photos
    https://techwireasia.com/2025/04/viral-ghibli-feature-drives-chatgpt-surge/
    Tue, 08 Apr 2025

  • Ghibli-style art pushes ChatGPT’s activity to new highs.
  • OpenAI says it is working to scale capacity for GPT-4o image tools.

    ChatGPT’s internet traffic has skyrocketed due to a spike in interest in AI-generated images styled after Studio Ghibli animations.

    OpenAI noticed a large increase in engagement following the release of its image-generation tool, which enables users to create artwork reminiscent of classic titles like Spirited Away and My Neighbor Totoro. Data from Similarweb shows that weekly active users passed 150 million for the first time this year.

    OpenAI CEO Sam Altman said on social media that the platform added one million users in a single hour – surpassing previous growth records. SensorTower reported that downloads and revenue through the ChatGPT app also increased. Weekly downloads rose by 11%, active users by 5%, and in-app purchase revenue by 6% compared to the previous month.

    The rapid increase in use put pressure on the platform’s infrastructure. Users reported slowdowns and brief outages, forcing Altman to caution that future features may face delays while OpenAI manages capacity.

    ChatGPT’s weekly average users hit record high (Source – Similarweb)

    Legal and copyright concerns with the ChatGPT x Ghibli trend

    The viral trend has prompted discussion around copyright. Some legal experts have raised questions about whether closely replicating distinctive animation styles could cross into infringement.

    “The legal landscape of AI-generated images mimicking Studio Ghibli’s distinctive style is an uncertain terrain. Copyright law has generally protected only specific expressions rather than artistic styles themselves,” said Evan Brown, a partner at law firm Neal & McDevitt.

    OpenAI did not respond to questions about how its models were trained or whether copyrighted materials influenced its image generator. Studio Ghibli has not issued a formal statement, but commentary from its co-founders has resurfaced.

    Hayao Miyazaki’s 2016 reaction to an early AI-generated image drew attention last week. In a widely circulated video, he described the technology as “an insult to life itself.” The full clip shows him responding specifically to a zombie-like AI render, which he called “extremely unpleasant.”

    In a recent interview, Studio Ghibli’s managing director Goro Miyazaki acknowledged the growing capabilities of AI. He claimed that AI-generated films could become a reality in the coming years, but questioned whether audiences would embrace them. He also acknowledged that while new technology could lead to new creative voices, it may be difficult to replicate the sensibilities of previous generations. “Nowadays, the world is full of opportunities to watch anything, anytime, anywhere,” he said, suggesting that younger artists may not share the same experiences that shaped Ghibli’s earlier works.

    Studio concerns and industry shifts

    Japan faces a shortage of trained animators, in part due to long hours and low wages in the industry. Goro noted that Gen Z may be less inclined to pursue the traditionally labour-intensive career path of hand-drawn animation.

    AI tools are emerging as a faster, lower-cost alternative to visual storytelling. Studio Ghibli’s legacy includes a number of films that blend fantastical themes with personal and historical reflections. Miyazaki’s latest work, The Boy and the Heron, earned an Academy Award and may be his final project. Goro has contributed his own directorial efforts, including Tales from Earthsea and From Up on Poppy Hill, and helped develop the Ghibli Museum and Ghibli Park.

    User privacy and data security

    As more users upload personal images to generate stylised portraits, privacy advocates are raising concerns about how that data is collected and used. “When you upload a photo to an AI art generator, you’re giving away your biometric data (your face). Some AI tools store that data, use it to train future models, or even sell it to third parties – none of which you may be fully aware of unless you read the fine print,” said Christoph C. Cemper, founder of AIPRM.

    OpenAI’s privacy policy confirms the platform collects user-provided and automatically generated data, including images. Unless users opt out or request data deletion, content may be retained and used to train future models.

    Cemper said that uploaded images could be misused. Personal data may appear in public datasets, like LAION-5B, which has been linked to the training of tools like Stable Diffusion and Google Imagen. One reported case involved a user finding private medical images in a public dataset. Cemper said that AI-generated content has already been used to produce fabricated documents and images, adding that deepfake risks are increasing. “There are too many real-world verification flows that rely on ‘real images’ as proof. That era is over,” one user wrote on social media.

    Navigating licensing and user rights between ChatGPT and Ghibli

    Cemper urged users to be aware of broad licensing terms buried in AI platform policies. Terms like “non-exclusive,” “royalty-free,” and “irrevocable license” can give platforms broad rights over uploaded content. The rights may extend even after the user stops using the service.

    Creating AI art in the style of well-known brands could also present legal challenges. Artistic styles like those of Studio Ghibli, Disney, and Pixar are closely associated with their original creators, and mimicking them may fall under derivative work protections.

    In late 2022, several artists filed lawsuits against AI firms, alleging their work was used without permission to train image generators. The ongoing legal challenges highlight the tension between creative freedom and intellectual property rights.

    Cemper added: “The rollout of ChatGPT’s 4o image generator shows just how powerful AI has become as it replicates iconic artistic styles with just a few clicks. But this unprecedented capability comes with a growing risk – the lines between creativity and copyright infringement are increasingly blurred, and the risk of unintentionally violating intellectual property laws continues to grow. While these trends may seem harmless, creators must be aware that what may appear as a fun experiment could easily cross into legal territory.

    “The rapid pace of AI development also raises significant concerns about privacy and data security. With more users engaging with AI tools, there’s a pressing need for clearer, more transparent privacy policies. Users should be empowered to make informed decisions about uploading their photos or personal data – especially when they may not realise how their information is being stored, shared, or used.”

    OpenAI and Google seek approval to train AI on content without permission
    https://techwireasia.com/2025/03/openai-and-google-seek-approval-to-train-ai-on-content-without-permission/
    Tue, 18 Mar 2025

  • OpenAI and Google ask the US government to allow AI to train on copyrighted materials.
  • They urge adoption of copyright exemptions for ‘national security.’

    OpenAI and Google are pushing the US government to allow AI models to train on copyrighted material, arguing that ‘fair use’ is critical for maintaining the country’s competitive edge in artificial intelligence.

    Both companies outlined their positions in proposals submitted this week in response to a request from the White House for input on President Donald Trump’s “AI Action Plan.”

    OpenAI’s national security argument

    According to OpenAI, allowing AI companies to use copyrighted material for training is a national security issue. The company warned that if US firms are restricted from accessing copyrighted data, China could outperform the US in AI development.

    OpenAI specifically highlighted the rise of DeepSeek as evidence that Chinese developers have unrestricted access to data, including copyrighted material. “If the PRC’s developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over,” OpenAI stated in its filing.

    Google’s position on copyright and fair use

    Google supported OpenAI’s stance, arguing that copyright, privacy, and patent laws could create barriers to AI development if they restrict access to data.

    The company highlighted that fair use protections and text and data mining exceptions have been crucial for training AI models using publicly available content. “These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders,” Google said. Without these protections, developers could face “highly unpredictable, imbalanced, and lengthy negotiations” with data holders during model development and research.

    Google also revealed a broader strategy to strengthen the US’s competitiveness in AI. The corporation called for increased investment in AI infrastructure, including addressing rising energy demands and establishing export controls to preserve national security while supporting AI exports to foreign markets.

    It emphasised the need for collaboration between federal and local governments to support AI research through partnerships with national labs and improving access to computational resources.

    Google recommended the US government take the lead in adopting AI, suggesting the implementation of multi-vendor AI solutions and streamlined procurement processes for emerging technologies. It warned that policy decisions will shape the outcome of the global AI race, urging the government to adopt a “pro-innovation” approach that protects national security.

    Anthropic’s focus on security and infrastructure

    Anthropic, the developer of the Claude chatbot, also submitted a proposal but did not add to the statements on copyright. Instead, the company called on the US government to create a system for assessing national security risks tied to AI models and strengthen export controls on AI chips. It also urged investment in energy infrastructure to support AI development, pointing out that AI models’ energy demands will continue to grow.

    Copyright lawsuits and industry concerns

    The proposals come as AI companies face increasing legal challenges over the use of copyrighted material. OpenAI is currently dealing with lawsuits from major news organisations, including The New York Times, and from authors like Sarah Silverman and George R.R. Martin. These cases allege that OpenAI used content without permission to train its models.

    Other AI firms, including Apple, Anthropic, and Nvidia, have also been accused of using copyrighted material. YouTube has claimed that these companies violated its terms of service by scraping subtitles from its platform to train AI models in a remarkable instance of the pot calling the kettle black.

    Industry pressure to clarify copyright rules

    AI developers worry that restrictive copyright policies could disadvantage US firms, as China and other nations continue to invest heavily in AI without strictures placed on use of materials. Content creators and rightsholders disagree, claiming that AI businesses should not be able to use their work without fair compensation.

    The White House’s AI Action Plan is expected to set the foundation for future US policy on AI development and data access, with potential implications for both the technology sector and content industries.


    How AI is changing recruitment and upskilling: Insights from LinkedIn
    https://techwireasia.com/2025/03/how-ai-is-changing-recruitment-and-upskilling-insights-from-linkedin/
    Wed, 12 Mar 2025

  • LinkedIn: AI is reshaping recruitment.
  • LinkedIn’s Hiring Assistant aims to improve hiring efficiency.

    AI can transform the way companies hire, develop talent, and engage with candidates. Tech Wire Asia spoke with Hari Srinivasan, Vice President of Product at LinkedIn, about how AI is changing recruitment and professional development.

    AI’s role in enhancing recruitment efficiency

    Recruiters typically spend more time on administrative work than connecting with candidates. According to Srinivasan, nearly half (47%) of recruiters in Asia-Pacific spend one to three hours a day analysing applications, a time-consuming task that AI could streamline.

    Hari Srinivasan, LinkedIn’s Vice President of Product

    LinkedIn’s new Hiring Assistant is designed to handle repetitive tasks, letting recruiters focus on more strategic work like advising hiring managers and improving candidate engagement. “Every recruiter I talk to always tells me there’s this ‘magic moment’ that comes together when they get the perfect person to the perfect job. But most of the day isn’t spent doing that. It’s spent following up with hiring managers, filling out paperwork, or reviewing job descriptions,” Srinivasan said.

    Hiring Assistant, currently in beta testing with select customers – including some in Singapore – automates key recruitment tasks. It allows recruiters to concentrate on high-value activities like creating relationships with candidates and providing improved hiring experiences.

    Overcoming challenges in AI-driven upskilling

    AI plays a growing role in professional development, but companies face challenges in implementing effective upskilling programs. According to Srinivasan, 63.7% of APAC HR professionals struggle to find tailored learning resources, 50.8% are uncertain about which skills will be most valuable in the future, and 55% report a lack of mentorship and career coaching.

    “Employees are eager to learn,” Srinivasan said. “Global learning content consumption on LinkedIn has grown by 13% year-over-year, with countries like India (37%) and Indonesia (59%) leading the way in skill development.” Companies are starting to respond by making AI training more accessible and relevant. LG Electronics, for example, uses LinkedIn Learning to provide tailored and flexible training programs, with 67% of its employees taking part in training each month.

    Preparing for an AI-driven job market

    AI adoption is creating new demands on the workforce, notably in soft skills. While technical skills remain important, HR professionals in APAC report that the hardest-to-find skills include technical fluency (36%), leadership (35%), and communication and problem-solving (34.5%).

    “As AI adoption accelerates, professionals have a significant opportunity to invest in their growth – not just in AI skills, but also in human capabilities,” Srinivasan said. He highlighted LinkedIn Learning’s AI-powered coaching tool as one method firms use to help employees build these skills. It allows users to practise real-world scenarios like delivering performance reviews and giving feedback.

    HR teams are responding by balancing AI training with soft skills development. In APAC, 78.3% of HR leaders are prioritising AI upskilling while also investing in important collaboration and communication skills.

    How candidates can stand out to recruiters

    With hiring expected to become more selective in 2025, job seekers will need to demonstrate their value beyond meeting basic qualifications. According to Srinivasan, candidates should keep their LinkedIn profiles updated with relevant skills and certifications to reflect their continuous learning. “Pro tip: people who list five or more skills on their profile receive up to 5.6x more profile views from recruiters,” Srinivasan said.

    He advises job seekers to highlight their core skills and achievements to stand out. Building a strong professional network and engaging with industry content can also improve visibility and increase the chances of being noticed by recruiters.

    Ensuring fairness and reducing bias in AI-powered hiring

    AI-driven recruitment tools offer efficiency, but fairness and transparency remain important challenges. Srinivasan said LinkedIn’s Hiring Assistant evaluates explicit and implicit capabilities listed on a candidate’s profile, helping guide hiring decisions so they’re based on verifiable qualifications rather than traditional markers like educational background or firm affiliations.

    “With AI paired with our platform insights, we can help recruiters find professionals based on their skills rather than where they worked or went to school,” he said.

    LinkedIn reviews its algorithms continuously to detect and eliminate unintended biases in hiring processes. This includes identifying factors that may accidentally exclude certain candidates and adjusting models to ensure a more inclusive and balanced evaluation process. “If harmful biases are identified, we take immediate steps to address them, ensuring that the recruitment process remains inclusive, fair, and aligned with human values,” Srinivasan said.

    Expanding AI’s reach across Microsoft’s ecosystem

    LinkedIn’s AI initiatives reflect broader changes in the Microsoft (LinkedIn’s owner) ecosystem. The MAI models Microsoft has been developing could improve LinkedIn’s recruitment and upskilling tools, in addition to its other software, such as Teams and Azure. AI models could offer real-time transcription, language translation, and meeting summaries in Teams, while for Azure, AI-driven automation could help enterprise clients.

    For LinkedIn, AI-based job matching and recruitment insights are strengthening professional networking and users’ career development. Srinivasan understands that while AI can handle repetitive tasks and provide insight, the human element remains essential to make hiring decisions and build meaningful connections between the network’s users.

    Microsoft develops in-house AI models to compete with OpenAI
    https://techwireasia.com/2025/03/microsoft-develops-in-house-ai-models-to-compete-with-openai/
    Tue, 11 Mar 2025

  • Microsoft is developing in-house AI models, called MAI.
  • The performance is comparable to models from OpenAI and Anthropic.

    According to a person familiar with the matter, Microsoft is working on in-house AI models that could compete with those from industry leaders like its partner, OpenAI.

    Microsoft has tested a family of models it calls MAI, which reportedly produced results on a par with state-of-the-art AI models from OpenAI and Anthropic. Redmond is looking at how these models might support products like its Copilot-branded AI assistants, which handle user queries and provide suggestions for tasks like document editing and conference calls.

    In addition to MAI, Microsoft is working on reasoning models designed to tackle complex problems and simulate human-like decision-making. OpenAI, Anthropic, and Alphabet are also developing similar models.

    Microsoft incorporated OpenAI’s o1 reasoning model into its Copilot products last month. A Microsoft spokesperson stated that the company continues to use a mix of models from OpenAI, Microsoft AI, and open-source sources to support its products.

    Reducing dependence on OpenAI

    The development of MAI models reflects Microsoft’s broader effort to reduce reliance on OpenAI. It has invested around $13 billion in OpenAI since forming a partnership in 2019, which gave OpenAI access to Microsoft’s Azure cloud platform to power its AI research and development. The partnership between the two companies was renegotiated in January, allowing OpenAI to use cloud services from competitors unless Microsoft claims the business for itself. The updated agreement runs until 2030.

    Amy Hood, Microsoft’s Chief Financial Officer, recently spoke about the partnership at a Morgan Stanley conference. “We’re both successful when each of us are successful,” Hood said. “So as you go through that process, I do think everybody’s planning for what happens for a decade, or two decades. And that’s important for both of us to do.”

    OpenAI’s role in Microsoft’s products

    Since the partnership began, OpenAI’s models have been integrated into Microsoft products, including Office, GitHub Copilot, and Bing Search. Microsoft’s AI infrastructure is primarily hosted on Azure, and the company also collaborates with OpenAI on AI supercomputing and large language models (LLMs). “We feel great about having leading models from OpenAI, we’re still incredibly proud of that,” Hood previously said. “But we also have other models, including ones we build, to make sure that there’s choice.”

    Expanding AI model options

    Alongside OpenAI’s models, Microsoft has developed a set of smaller in-house models called Phi, and tested AI models from other providers, including Anthropic, DeepSeek, Meta, and Elon Musk’s xAI, to evaluate their performance in the Copilot framework.

    Anthropic’s Claude is known for its focus on safety and alignment with human values. The company has secured significant investment, raising its valuation to $61.5 billion. Google’s Gemini, developed by DeepMind, is a multimodal model capable of processing text, images, audio, and video simultaneously. Google has positioned it as a competitor to OpenAI’s GPT-4, with multiple versions like Gemini Ultra, Pro, and Nano tailored to different use cases.

    Meta’s LLaMA series is an open-source model designed to foster transparency and accessibility for developers. Meta’s focus has been on creating conversational AI with natural interactions. xAI’s Grok, integrated into the X platform, focuses on real-time information and conversational AI. It is positioned as a direct competitor to ChatGPT and other conversational models.

    Microsoft’s decision to develop its own models reflects the growing demand for diversified AI capabilities. By expanding its model portfolio, the company aims to offer more flexibility and reduce its dependence on a single partner. The development of MAI models positions Microsoft alongside other major players in the AI market, increasing its ability to respond to shifting industry demands and technological advances.

    A balanced approach to AI development

    Microsoft’s strategy to combine in-house models with external solutions highlights its effort to balance independence with strategic partnerships. OpenAI remains an important partner, but the development of MAI models positions Microsoft to adapt to shifts in the AI market and meet increasing demand for more versatile AI solutions.

    The post Microsoft develops in-house AI models to compete with OpenAI appeared first on TechWire Asia.

    Indosat becomes first mobile operator in SEA to roll out AI-RAN with Nokia and NVIDIA https://techwireasia.com/2025/03/indosat-becomes-first-mobile-operator-in-sea-to-roll-out-ai-ran-with-nokia-and-nvidia/ Mon, 10 Mar 2025 08:13:39 +0000 https://techwireasia.com/?p=241404 Indosat Ooredoo Hutchison deploys AI-RAN in Southeast Asia with Nokia and NVIDIA. The AI-RAN solution combines Nokia’s 5G Cloud RAN and NVIDIA AI Aerial. At MWC 2025, Indosat Ooredoo Hutchison became the first mobile operator in Southeast Asia to deploy AI-RAN (Artificial Intelligence Radio Access Network), in collaboration with Nokia and NVIDIA. The deployment integrates […]

  • Indosat Ooredoo Hutchison deploys AI-RAN in Southeast Asia with Nokia and NVIDIA.
  • The AI-RAN solution combines Nokia’s 5G Cloud RAN and NVIDIA AI Aerial.
    At MWC 2025, Indosat Ooredoo Hutchison became the first mobile operator in Southeast Asia to deploy AI-RAN (Artificial Intelligence Radio Access Network), in collaboration with Nokia and NVIDIA. The deployment integrates Nokia’s 5G Cloud RAN solution with NVIDIA AI Aerial, creating what the companies term a unified computing infrastructure that hosts both AI and RAN workloads.

    AI and telecom convergence

    Indosat is the world’s third operator to deploy AI-RAN commercially. The recent initiative combines AI and wireless connectivity to improve network performance, efficiency, and service capabilities. As part of the partnership, the companies have signed an MOU to develop, test, and deploy AI-RAN solutions. The initial focus will be on AI inferencing workloads using NVIDIA AI Aerial, followed by the full integration of RAN workloads on the same platform.

    Indosat, Nokia, and NVIDIA will work with Indonesian universities and research institutes to advance AI-driven telecom applications, support academic research and student training, and drive innovation in network optimisation, spectral efficiency, and energy management.

    AI-RAN’s role in network transformation

    The AI-RAN infrastructure is expected to change Indosat’s network strategy, letting the company share infrastructure costs across multiple applications and introduce AI-powered services. The integration aims to improve spectral efficiency and reduce energy use, laying the groundwork for future 6G improvements.

    The initiative is in line with Indonesia’s national AI strategy, establishing Indosat as an enabler of AI services rather than just a telecom provider. The company has established a ‘Sovereign AI Factory’ in Indonesia, designed to support startups, enterprises, and government organisations in developing AI applications for healthcare, education, and agriculture. With NVIDIA AI Enterprise software and serverless APIs, Indosat plans to scale AI inferencing for Indonesia’s population of 277 million, optimising AI workloads across the network.

    Bottom row, from left: Ronnie Vasishta, SVP Telecoms at Nvidia, Tommi Uitto, president of Mobile Networks at Nokia and Vikram Sinha, president director and CEO of Indosat.

    Expanding AI capabilities across applications

    A serverless API framework, created in collaboration with NVIDIA, will allow Indosat’s AI partners – including Hippocratic.ai, Personal.ai, GoTo, and Accenture – to deploy distributed inference engines at scale.

    Indosat President Director and CEO Vikram Sinha made clear the broader impact of AI integration in telecom, saying, “By embedding AI into our radio access network, we’re not just enhancing connectivity – we’re building a nationwide AI-powered ecosystem that will fuel innovation across industries. This aligns with our mission to connect and empower every Indonesian.”

    Deployment roadmap

    The AI-RAN rollout will follow a phased approach:

    • Early 2025: A 5G AI-RAN lab established in Surabaya to support development, testing, and validation.
    • Second half of 2025: Launch of a small-scale commercial pilot to test AI inferencing workloads running on the NVIDIA AI-RAN infrastructure.
    • 2026: Broader expansion of AI-RAN deployment.

    Industry perspectives

    Tommi Uitto, President of Mobile Networks at Nokia, stated: “When you combine AI with RAN, you create an engine for future innovation. With our 5G Cloud RAN platform, Indosat can transform its network into a multi-purpose computing grid that uses the synergies of AI-accelerated computing. With our AI-powered products, we help Indosat augment RAN capabilities for enhanced performance, operational efficiency, advanced automation and optimised energy efficiency.”

    Ronnie Vasishta, SVP Telecoms at NVIDIA said: “The combination of Indosat’s vision for a nationwide AI grid and NVIDIA AI expertise and full-stack software and hardware platform will catalyse AI adoption and innovation across Indonesia, creating a new playbook for telecom operators worldwide.”

    The post Indosat becomes first mobile operator in SEA to roll out AI-RAN with Nokia and NVIDIA appeared first on TechWire Asia.

    Nvidia offers AI model for large-scale genetic analysis https://techwireasia.com/2025/02/nvidia-introduces-ai-model-for-large-scale-genetic-analysis/ Fri, 21 Feb 2025 12:17:01 +0000 https://techwireasia.com/?p=239882 Nvidia and research partners introduce Evo 2. Evo 2 can identify disease-causing mutations and assist in synthetic genome design. Nvidia and its research partners have developed an artificial intelligence model designed to analyse genetic sequences at an unprecedented scale. Announced on February 19, the Evo 2 AI is built to read and design genetic code […]

  • Nvidia and research partners introduce Evo 2.
  • Evo 2 can identify disease-causing mutations and assist in synthetic genome design.
    Nvidia and its research partners have developed an artificial intelligence model designed to analyse genetic sequences at an unprecedented scale.

    Announced on February 19, the Evo 2 AI is built to read and design genetic code from different life forms. By finding patterns in DNA and RNA sequences, Evo 2 can process biological data in ways that would take researchers years of manual work.

    The model was designed to detect disease-causing mutations in human genes, and it can also generate synthetic genomes as complex as those found in bacteria. Scientists believe that the model’s ability to analyse data at scale could speed research in medicine, genetics, and bio-engineering.

    Expanding AI’s role in biology

    Evo 2 builds on its predecessor, Evo 1, which focused on single-cell genomes. The newer version has been trained on 9.3 trillion nucleotides sourced from more than 128,000 whole genomes. Nucleotides are the fundamental components of genetic material.

    The model also examines metagenomic data, expanding its knowledge base beyond bacteria, archaea, and phages to include genetic information from humans, plants, and multi-cellular species.

    According to the researchers, such a model can recognise complex patterns in genetic sequences that would be difficult for traditional methods to detect. One of its primary applications is to identify dangerous mutations, like those associated with genetic illnesses.

    In early tests, Evo 2 correctly identified 90% of potentially harmful mutations in BRCA1, a breast cancer-linked gene. Scientists believe that this capability could support the development of targeted gene therapies, allowing treatments to target only specific cells while lowering the risk of unintended genetic modifications.
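Evo 2 scores mutations with a learned model rather than by rule-based comparison, but the underlying task – spotting where a variant diverges from a reference sequence – can be pictured with a toy sketch. The sequences and the helper below are purely illustrative and have nothing to do with the real BRCA1 gene or Evo 2’s internals:

```python
# Hypothetical illustration (not Evo 2): flag substitutions in a variant
# DNA string by comparing it to a reference of the same length. Real
# variant-effect models score mutations probabilistically instead.

def find_substitutions(reference: str, variant: str) -> list[tuple[int, str, str]]:
    """Return (position, reference_base, variant_base) for each mismatch."""
    if len(reference) != len(variant):
        raise ValueError("sequences must be equal length for this sketch")
    return [
        (i, r, v)
        for i, (r, v) in enumerate(zip(reference, variant))
        if r != v
    ]

ref = "ATGGATTTATCTGCTCTTCG"   # toy fragment, not a real gene sequence
var = "ATGGATTTATCGGCTCTTCG"   # single substitution at position 11
print(find_substitutions(ref, var))  # [(11, 'T', 'G')]
```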

    Patrick Hsu, co-founder of the Arc Institute and senior researcher on Evo 2, described the model as a step toward generative biology, in which AI can “read, write, and think in the language of nucleotides.” He said Evo 2 has a wide understanding of genetic structures, making it useful for tasks like identifying disease-causing mutations, and designing artificial genetic sequences for scientific research.

    Computing power behind Evo 2

    Evo 2 was trained over several months using Nvidia DGX Cloud AI on AWS infrastructure, and used 2,000 Nvidia H100 GPUs. The model is capable of processing genetic sequences of up to 1 million nucleotides at once, allowing it to analyse complex relationships across entire genomes. To support this degree of processing, researchers developed a new AI architecture called StripedHyena 2, which is designed to handle large-scale biological datasets efficiently.

    According to the team, the architecture enabled Evo 2 to process 30 times more data than Evo 1 and analyse eight times more nucleotides. Greg Brockman, co-founder of OpenAI, worked on the project during a sabbatical, helping to optimise the AI for large-scale biological research.
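The full details of StripedHyena 2 have not been summarised here, but genomic language models generally ingest DNA as sequences over the four-letter A/C/G/T alphabet. A minimal, purely illustrative encoding step (not Evo 2’s actual tokeniser) might look like:

```python
# Illustrative only: encode each nucleotide as a 4-element indicator
# (one-hot) vector, a common first step before feeding DNA to a model.
NUCLEOTIDES = "ACGT"
INDEX = {base: i for i, base in enumerate(NUCLEOTIDES)}

def one_hot(sequence: str) -> list[list[int]]:
    """Map a DNA string to a list of one-hot vectors, one per base."""
    return [
        [1 if i == INDEX[base] else 0 for i in range(4)]
        for base in sequence.upper()
    ]

print(one_hot("ACGT"))
# [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
```

At a context window of 1 million nucleotides, each input would correspond to a million such tokens, which is why a dedicated long-sequence architecture was needed.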

    Applications beyond medicine

    While Evo 2 has shown promise in medical research, scientists believe the model could also help progress in fields such as agriculture, environmental science, and synthetic biology. Some potential applications might include:

    • Developing crops that are more resilient to climate change, with improved resistance to drought, pests, and extreme weather conditions.
    • Engineering organisms capable of breaking down environmental pollutants, offering new approaches to reducing industrial and agricultural waste.
    • Studying genetic adaptations in different species to better understand evolutionary biology and biodiversity.

    Collaborative research effort

    The project used Nvidia’s computing capabilities with research from the Arc Institute, a nonprofit organisation dedicated to addressing long-term scientific concerns. The institute was established in 2021 with $650 million in funding, and works with Stanford University, UC Berkeley, and UC San Francisco to advance research in bio-engineering, medicine, and genetics.

    Evo 2 is now freely available to researchers worldwide through Nvidia’s BioNeMo research platform, which includes various AI-powered tools for analysing and modelling biological data. By making the model accessible, the research team hopes to speed innovation in genomics, synthetic biology, and other fields that rely on large-scale genetic analysis.

    Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

    Explore other upcoming enterprise technology events and webinars powered by TechForge here.

    The post Nvidia offers AI model for large-scale genetic analysis appeared first on TechWire Asia.

    Can AI and spatial content give the Apple Vision Pro a second wind? https://techwireasia.com/2025/02/can-ai-and-spatial-content-give-the-apple-vision-pro-a-second-wind/ Tue, 18 Feb 2025 15:12:57 +0000 https://techwireasia.com/?p=239869 Apple is bringing Apple Intelligence’s Writing Tools and Genmojis to the Vision Pro. Reportedly shifting focus to a lower-cost model. Apple is gearing up to give its Vision Pro headset a much-needed shot in the arm, hoping to spark fresh interest in the $3,500 device that, let’s be honest, hasn’t exactly flown off the shelves. […]

  • Apple is bringing Apple Intelligence’s Writing Tools and Genmojis to the Vision Pro.
  • Reportedly shifting focus to a lower-cost model.
    Apple is gearing up to give its Vision Pro headset a much-needed shot in the arm, hoping to spark fresh interest in the $3,500 device that, let’s be honest, hasn’t exactly flown off the shelves. The company is planning a visionOS 2.4 update that could arrive as soon as April, according to people familiar with the plans. Developers might even get their hands on a beta version this week.

    Leading the charge is the arrival of Apple Intelligence – Apple’s in-house AI system – on the Vision Pro. It’s the first time these features are extending beyond iPhones, iPads, and Macs. Vision Pro owners can expect tools like Writing Tools, Genmojis, and the Image Playground app, all powered by the headset’s M2 chip and 16GB of memory, enabling smooth on-device AI processing.

    The timing isn’t random. Apple is facing stiff competition: Google recently unveiled Android XR, a mixed-reality operating system built around its Gemini AI, with Samsung gearing up to launch a headset running the platform later this year – a device that, going from leaked images, looks suspiciously like Apple’s Vision Pro.

    But for all the AI upgrades, the bigger story may be Apple’s struggle to figure out where the Vision Pro fits. Over the past year, sales have been slower than hoped – hardly a shock given the steep price. Even Apple CEO Tim Cook described the headset as an “early-adopter product,” aimed at people who want “tomorrow’s technology today.”

    There’s even talk that production is winding down. A report from The Information suggested that Apple might stop making the current Vision Pro soon, although it has enough supply to meet demand for now. Apple’s attention, it seems, is shifting toward what comes next – though exactly what that is remains hazy.

    What’s next for Vision Pro?

    Apple’s roadmap for its mixed-reality lineup appears to be in flux. Early rumours hinted at a second-generation Vision Pro packed with advanced features, but that project seems to have been put on pause. Instead, Apple’s priority is now believed to be a more affordable version – something closer in price to a high-end iPhone. That model, however, isn’t expected until at least 2027, according to analyst Ming-Chi Kuo.

    More immediately, a smaller update to the current Vision Pro is being rumoured. Apple could swap in its upcoming M5 chip, providing a performance boost and possibly unlocking more advanced Apple Intelligence features, including an improved version of Siri. However, don’t expect big design changes. The refresh would likely reuse parts from the first-generation model to clear out leftover inventory.

    There’s also been talk of 5G connectivity, although that might be reserved for a proper Vision Pro 2 further down the road.

    Beyond AI: A content push and a smarter guest mode

    Alongside the AI upgrades, Apple is trying to tackle another criticism – the lack of content tailored to the Vision Pro. The upcoming update will introduce a spatial content app designed to showcase 3D images and panoramic photos sourced from external providers. Apple hopes this will give users more to explore and drive interest in spatial media, which has so far been slow to take off. Adding to the content push, an immersive arctic surfing video will drop on February 21 via the Vision Pro’s TV app – a small but notable effort to flesh out the media experience.

    On the usability front, guest mode is getting an upgrade. Apple is making it easier for Vision Pro owners to let friends and family try out the headset. For the first time, users will be able to set up guest access from their iPhone, selecting which apps are available. Previously, this all had to be done on the headset itself, which made lending it out a bit of a hassle.

    Siri and Apple’s AI growing pains

    While the update is bringing ChatGPT integration into Writing Tools, fans hoping for a smarter Siri on Vision Pro might be disappointed for now. Apple had planned a major Siri overhaul alongside this update, but engineering setbacks have reportedly pushed the release to May.

    That delay is part of a broader struggle for Apple Intelligence. Critics have noted that Apple’s AI rollout has felt rushed, with Writing Tools, for instance, described as clunky and poorly integrated into Apple’s usual text tools. By contrast, the Image Playground app has been praised for offering a more user-friendly approach to AI-generated content – the kind of experience people expect from Apple.

    Apple’s AI ambitions are still a work in progress. The company is seen as playing catch-up to rivals like OpenAI, Google, and Meta. While Apple Intelligence has started rolling out, key regions like continental Europe and China are still waiting, raising concerns about the company’s ability to keep pace in the fast-moving AI race.

    The long view is that despite the growing pains, Apple isn’t giving up on mixed reality or AI. The visionOS 2.4 update is a step toward keeping the Vision Pro relevant, even as the company works out the future of the product line. Whether it’s the rumoured M5 refresh, the eventual low-cost model, or something else entirely, Apple is clearly playing the long game. For now, though, Vision Pro remains a product for the few – those willing to pay top dollar for a glimpse into Apple’s vision of the future.


    The post Can AI and spatial content give the Apple Vision Pro a second wind? appeared first on TechWire Asia.

    OpenAI updates AI behaviour guidelines while preparing for GPT-5 https://techwireasia.com/2025/02/openai-updates-ai-behaviour-guidelines-while-preparing-for-gpt-5/ Fri, 14 Feb 2025 04:40:54 +0000 https://techwireasia.com/?p=239838 OpenAI has revamped its Model Spec. OpenAI is also laying the groundwork for GPT-5. OpenAI is shaking things up with a big update to its Model Spec—the rulebook that guides how its AI models should behave. And this time, they’re not keeping it to themselves. The 63-page document is now free for anyone to use, […]

  • OpenAI has revamped its Model Spec.
  • OpenAI is also laying the groundwork for GPT-5.
    OpenAI is shaking things up with a big update to its Model Spec—the rulebook that guides how its AI models should behave. And this time, they’re not keeping it to themselves. The 63-page document is now free for anyone to use, adapt, or build on, giving the broader AI community a peek into how OpenAI thinks about AI behaviour.

    This new version is a major expansion from the previous 10-page spec, covering everything from handling controversial topics to giving users more control over how AI interacts with them. The goal? To make AI models more flexible, transparent, and better equipped to let users explore ideas freely—without hitting arbitrary walls.

    Why now?

    The timing is no coincidence. OpenAI’s CEO, Sam Altman, recently hinted that GPT-4.5 (internally codenamed Orion) is on the way, with GPT-5 not far behind. As AI capabilities grow, so does the pressure to get the rules right. OpenAI’s behaviour team, led by Joanne Jang, says they wanted to get ahead of the curve and address tricky ethical questions that have sparked debates over the past year.

    One example? That infamous question about misgendering Caitlyn Jenner to stop a nuclear disaster—a bizarre but revealing prompt that highlighted how hard it can be to program AI to navigate ethics. OpenAI says it’s been rethinking how models should approach these kinds of moral dilemmas.

    “We can’t create one model with the exact same set of behaviour standards that everyone in the world will love,” according to Jang in an interview with The Verge. She highlighted that while OpenAI keeps key safety measures in place, users and developers still have flexibility to adjust many aspects of the model’s behaviour.

    More flexibility, not fewer safeguards

    The updated Model Spec reflects that balance. Users and developers will have more options to tweak AI behaviour—whether that means making the model more formal, casual, or tailored to their specific needs. But OpenAI is clear that some limits will stay. The model won’t encourage self-harm, create deepfakes, or churn out copyrighted content (especially with The New York Times suing OpenAI over content scraping).

    Another notable shift is how the company handles controversial topics. Instead of dodging tough questions, the spec encourages models to “seek the truth together” with users—offering thoughtful, reasoned answers while standing firm against misinformation or harmful content.

    Mature content and “grown-up mode”

    There’s also a more nuanced approach to adult content. After feedback from users asking for a “grown-up mode” (something Altman publicly supported last year), OpenAI is exploring ways to allow certain mature content—like erotica—in appropriate contexts, while keeping strict bans on things like revenge porn or exploitative material.

    This is a departure from OpenAI’s previous blanket ban on anything explicit, and it signals that the company is trying to balance creativity and safety.

    Fixing “AI sycophancy”

    Another area getting attention is AI sycophancy—a fancy way of saying that AI models sometimes agree too easily, even when they shouldn’t. OpenAI wants ChatGPT to feel more like a thoughtful colleague, not a people-pleaser.

    That means it should correct users when they’re wrong, offer honest feedback instead of empty praise, and give consistent answers no matter how a question is phrased. The goal is to make interactions more reliable, so users don’t have to game the system to get accurate information. “We don’t ever want users to feel like they have to somehow carefully engineer their prompt to not get the model to just agree with you,” Jang explained.

    Who’s in charge?

    OpenAI also clarified the chain of command for AI instructions. Platform-level rules come first, followed by developer guidelines, and then individual user preferences. This hierarchy is meant to clear up confusion over what can and can’t be customised when using OpenAI’s models.
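That chain of command can be pictured as a simple precedence resolver: when instruction sources conflict on a setting, the highest-authority source wins. The sketch below is a hypothetical illustration of the idea, not OpenAI’s actual implementation, and every name in it is invented for the example:

```python
# Hypothetical sketch of the Model Spec's instruction hierarchy.
# Lower rank number = higher authority: platform > developer > user.
PRECEDENCE = {"platform": 0, "developer": 1, "user": 2}

def resolve(instructions: list[tuple[str, str, str]]) -> dict[str, str]:
    """Each instruction is (source, setting, value); per setting,
    the value from the highest-authority source wins."""
    resolved: dict[str, tuple[int, str]] = {}
    for source, setting, value in instructions:
        rank = PRECEDENCE[source]
        if setting not in resolved or rank < resolved[setting][0]:
            resolved[setting] = (rank, value)
    return {setting: value for setting, (_, value) in resolved.items()}

print(resolve([
    ("user", "tone", "sarcastic"),
    ("developer", "tone", "formal"),          # outranks the user's preference
    ("user", "language", "British English"),  # unchallenged, so it stands
]))
# {'tone': 'formal', 'language': 'British English'}
```

In this picture, a developer building on the API can override user preferences, but neither can override platform-level safety rules.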

    Open to the public

    Crucially, OpenAI is releasing the entire Model Spec under a Creative Commons Zero (CC0) license, meaning other companies can adopt or adapt it however they like. The company hopes this transparency will spark more industry-wide conversations about AI behaviour—and get feedback from the public.

    “We knew that it would be spicy,” Jang admitted. “But I think we respect the public’s ability to actually digest these spicy things and process it with us.”

    The company is also open-sourcing the prompts it uses to test whether models are following the guidelines.

    What’s next?

    While the updated spec doesn’t immediately change how ChatGPT works, it signals where OpenAI is headed—especially with GPT-4.5 and GPT-5 in the pipeline. Altman recently shared on X that GPT-4.5 will be the “last non-chain-of-thought model,” hinting that GPT-5 will be more capable of reasoning through complex problems.

    Sam Altman revealed the OpenAI roadmap update for GPT-4.5 and GPT-5 on X. (Source – X)

    He also promised a simpler product experience, acknowledging that users are frustrated with having to pick between different models. “We hate the model picker as much as you do,” Altman wrote, suggesting OpenAI is moving toward a more unified system.

    As all this unfolds, the broader AI industry will be watching closely. With Elon Musk trying (and failing) to buy OpenAI’s nonprofit arm for nearly $100 billion, and legal battles mounting, the pressure is on. But for now, OpenAI is betting that openness and user input will help shape the future of AI behaviour—for the better.


    The post OpenAI updates AI behaviour guidelines while preparing for GPT-5 appeared first on TechWire Asia.

    TikTok’s 13-hour ban: Trump’s unexpected rescue plan https://techwireasia.com/2025/01/tiktoks-13-hour-ban-trumps-unexpected-rescue-plan/ Mon, 20 Jan 2025 12:15:12 +0000 https://techwireasia.com/?p=239706 TikTok US ban reversal hinges on Trump’s proposed 50-50 joint venture. Supreme Court decision and Congressional opposition create uncertainty. The dramatic 13-hour shutdown of TikTok in the US on January 19, 2025, followed by its swift restoration, has created a complex tapestry of political manoeuvring, technological implications, and national security debates that continue to shape […]

  • TikTok US ban reversal hinges on Trump’s proposed 50-50 joint venture.
  • Supreme Court decision and Congressional opposition create uncertainty.
    The dramatic 13-hour shutdown of TikTok in the US on January 19, 2025, followed by its swift restoration, has created a complex tapestry of political manoeuvring, technological implications, and national security debates that continue to shape the platform’s uncertain future in America.

    The TikTok US ban reversal materialised through an unexpected champion: President-elect Donald Trump. His pledge to issue an executive order following his inauguration prompted the platform to restore service even before the formal order was signed. The development marks a striking evolution in Trump’s stance on the platform. He has transitioned from a vocal advocate for its ban during his first term to emerging as its potential saviour.

    ByteDance’s challenge and Trump’s proposed solution

    At the heart of the controversy lies ByteDance’s consistent reluctance to sell TikTok, notably its prized recommendation algorithm. Trump’s proposed solution – a 50-50 joint venture between ByteDance and American owners – represents a potential middle ground, although its feasibility remains questionable.

    The proposal signals a significant shift in approach, attempting to balance national security concerns with the platform’s operational continuity. Multiple factors complicate the path forward. The existing law, signed by outgoing President Biden in April 2024, required ByteDance to sell TikTok to an owner from the US or its allies within 270 days. Trump’s executive order, while providing temporary relief, cannot unilaterally override this congressional mandate.

    The legal reality is further emphasised by opposition from prominent Republican Senators Tom Cotton and Pete Ricketts, who argue against any extension of the ban’s effective date. The situation has attracted several potential buyers. A group led by billionaire Frank McCourt and Kevin O’Leary has submitted a formal bid, as has the AI search engine PerplexityAI. Reports have also suggested possible interest from Elon Musk, though he has maintained public ambiguity about any potential acquisition.

    Musk’s Sunday statement opposing the TikTok ban on free speech grounds and criticising the imbalance between TikTok’s operation in America and X’s inability to operate in China adds another layer to the complex narrative.

    Technical and operational challenges

    The prospect of splitting TikTok’s US operations presents significant technical challenges. McCourt’s group has proposed purchasing TikTok’s US assets without the company’s algorithmic software. Historical attempts by tech giants like Meta and YouTube to replicate TikTok’s engagement mechanics have shown the difficulty of this approach. Creating an American-only version of TikTok could necessitate a new app for global users to access US content, adding further complexity to the platform’s operation.

    The brief shutdown also highlighted the pivotal role of TikTok’s service providers. Despite the Biden administration’s apparent willingness to defer enforcement to the incoming Trump administration, service providers’ concerns about potential penalties – up to $5,000 per person with access to TikTok – led to the temporary cessation of services. Trump’s promise of liability protection for the providers proved decisive in restoring service.

    Looking ahead: uncertain future

    The resolution of TikTok’s US situation could unfold in several ways. The most likely scenario is a reprieve through Trump’s executive order, followed by intense negotiations over the proposed 50-50 joint venture structure. However, ByteDance’s historical resistance to selling, algorithmic complexities, and valuation challenges could complicate this path.

    Alternative outcomes include a complete sale to American buyers, although this faces significant hurdles regarding algorithm ownership and operational continuity. The least likely but still possible scenario is a new legislative solution, but this would require substantial bipartisan support in a Congress that strongly backed the original ban.

    Other nations grappling with similar concerns will watch the TikTok US ban reversal saga. India, which banned TikTok in 2020, maintains its firm stance, while European Union regulators continue to scrutinise the platform under their Digital Services Act.

    The UK, Australia, and Canada also monitor the US situation as they consider their approaches to Chinese-owned technology platforms. In short, the eventual US resolution could serve as a template for other nations. If Trump’s joint venture model succeeds, it might offer a middle-ground solution for countries seeking to balance national security concerns with the platform’s popularity. Conversely, if the ban takes effect, it could embolden other nations to act similarly.

    The situation transcends TikTok, potentially reshaping how nations approach technology platforms owned by geopolitical competitors. The outcome could establish precedents for handling foreign-owned apps, data sovereignty, and the balance between national security and digital innovation.

    The post TikTok’s 13-hour ban: Trump’s unexpected rescue plan appeared first on TechWire Asia.
