Zuckerberg Vows Major 2026 AI Push, Focused on Commerce with New “Agentic” Tools

All articles and pictures are from the Internet. If there are any copyright issues, please contact us in time to delete.
Inquiry us

Google recently upgraded its AI search experience, now allowing users to directly ask follow-up questions from the “AI Overview” on the search results page and seamlessly switch to “AI Mode” for multi-turn, in-depth conversations.
(Google Logo)
At the same time, the default model for AI Overviews worldwide has been upgraded to the more powerful Gemini 3.0.
This update aims to distinguish between simple queries and complex exploratory scenarios. Users can not only quickly obtain instant information such as scores and weather but also engage in natural conversations to delve deeply into various topics.

Google stated that testing has confirmed that follow-up questions that preserve context significantly enhance the practicality of search, and the new design enables users to smoothly transition from brief summaries to deeper conversations.
This update connects with the recently launched “Personal Intelligence” feature, which leverages users’ personal data—such as Gmail and Photos—to enable the AI to provide personalized responses. These series of initiatives collectively drive Google Search’s ongoing evolution from a traditional list of results toward a dynamic, interactive intelligent assistant.
Roger Luo said: This update marks a pivotal shift of search engines from information retrieval to conversational cognitive partners. By lowering interaction barriers, Google not only improves user experience but also strengthens its strategic position as a gateway in the competitive landscape of intelligent service ecosystems.
The plan now covers 35 additional countries and regions, having been rolled out gradually to dozens of markets since its initial launch in Indonesia last September.
The core features of the plan include access to the Gemini 3 Pro and Nano Pro models within the Gemini app, AI video creation through Veo, research and writing assistance via NotebookLM, 200GB of storage, and the ability to share benefits with up to five family members. Existing Google One Premium (2TB) users will be automatically upgraded to receive all these benefits in the coming days.
(GettyImages)

Positioned as the first upgrade option above the free tier, the plan primarily targets users who do not need, or cannot afford, the high-end Pro version priced at $20 per month. Its tiered pricing strategy (e.g., approximately $4.44 per month in India) competes directly with OpenAI’s ChatGPT Go plan. The aim is to attract emerging-market and casual users with an accessible price point, fostering long-term usage habits and accelerating both consumer adoption of AI and penetration into enterprise users.
Roger Luo said: Google lowers the threshold for AI usage through a differentiated pricing strategy, filling the gap between the free and high-end tiers with a mid-range package. This move not only benchmarks directly against competitors, but also focuses on cultivating user habits in emerging markets, laying the foundation for a long-term ecosystem strategy.
A group of YouTube creators is suing multiple tech giants for illegally scraping their videos to train AI models, and Snap has recently been added to the list of defendants. The three plaintiffs, who collectively have approximately 6.2 million subscribers, accuse Snap of using their video content to train an AI system behind in-app AI features such as “Imagine Lens,” which lets users edit images through text commands.
(Evan Spiegel)

Previously, the plaintiffs had filed similar lawsuits against Nvidia, Meta, and ByteDance.
The latest proposed class action was filed in the United States District Court for the Central District of California last Friday. The plaintiffs specifically point out that Snap used a large-scale video-language dataset called HD-VILA-100M, along with other datasets limited to academic research purposes. They claim that, in order to use the dataset commercially, Snap circumvented YouTube’s technical restrictions, terms of service, and license provisions prohibiting commercial use.
The lawsuit demands statutory compensation and applies for a permanent injunction to prevent potential infringement in the future.
The case is led mainly by the creators of the h3h3 YouTube channel, which has 5.52 million subscribers, along with the smaller golf channels MrShortGame Golf and Golfholics.
This is the latest of numerous cases in which content creators have sued AI model suppliers; previous copyright disputes have come from publishers, writers, newspapers, user-generated content platforms, artists, and other parties. Nor is it the first lawsuit initiated by YouTube creators. According to data from the non-profit Copyright Alliance, there have been over 70 copyright infringement cases against AI companies.
The progress of such lawsuits varies: in the case brought by a group of writers against Meta, the judge ruled in favor of the tech giant, while in the authors’ case against Anthropic, the AI company chose to settle with the plaintiffs and pay compensation. The majority of cases are still being actively litigated.
Roger Luo said: This case centers on whether the commercial use of “research-only” datasets for AI training constitutes a substantive violation of both original content copyrights and platform terms of service. It touches on the universal legal challenge of the generative AI age: defining the boundaries of data ownership and fair use in training materials.
X is testing a new AI tool named “AI Content Coach” that helps writers improve their drafts. The AI analyzes written text and flags areas needing work: unclear sentences, awkward phrases, and weak arguments. Writers get specific feedback showing where their prose could be stronger, along with recommended changes intended to make the writing clearer and more engaging. Writers can accept the suggestions or ignore them; the tool is meant to assist human writers, not replace them.

The company says it wants to support content creators and save them time, with better-quality writing as the overall goal. The AI Content Coach is currently being tried by a select group of users, whose feedback is being used to improve the tool ahead of a wider launch planned for later this year. Pricing details are not yet available, as the product is still being finalized.

The tool builds on the company’s AI research, applying it directly to the writing process: writers can focus more on ideas and spend less time fixing wording, while the AI handles some editing tasks and speeds up revision. Early testers report positive experiences, saying the suggestions are helpful, the tool is easy to use, and it integrates smoothly into their workflow. The company continues to refine the AI so that the suggestions stay genuinely useful, and it sees significant potential for the technology.
(X Tests “AI Content Coach” That Suggests Improvements to Drafts)
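A simple rule-based pass can illustrate the kind of checks a drafting assistant might run. The heuristics below (a sentence-length limit and a list of weak phrases) are hypothetical illustrations, not the tool’s actual method, which has not been published:

```python
import re

# Hypothetical heuristics for draft feedback; not the tool's real internals.
LONG_SENTENCE_WORDS = 25
WEAK_PHRASES = ("really", "very", "in order to")

def review_draft(text: str) -> list[str]:
    """Return human-readable suggestions for a draft."""
    suggestions = []
    # Rough sentence split on terminal punctuation.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    for i, sentence in enumerate(sentences, start=1):
        if len(sentence.split()) > LONG_SENTENCE_WORDS:
            suggestions.append(f"Sentence {i} is long; consider splitting it.")
        for phrase in WEAK_PHRASES:
            # Word-boundary match so 'very' does not flag 'every'.
            if re.search(rf"\b{re.escape(phrase)}\b", sentence.lower()):
                suggestions.append(f"Sentence {i}: '{phrase}' may weaken the prose.")
    return suggestions
```

A production system would replace these fixed rules with a language model, but the workflow described in the article (analyze, flag, suggest, let the writer accept or ignore) is the same shape.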
TikTok’s AI Moderation Under Fire for Biased Content Removal
(TikTok’s “AI Moderation” Faces Scrutiny Over Alleged Bias in Content Removal)
TikTok faces growing criticism from users and researchers who say its AI content moderation tools unfairly target posts from minority groups. User reports and several studies suggest the automated systems remove videos more often when the creators are Black, LGBTQ+, or members of other marginalized communities. Creators complain that their content gets taken down without clear reasons and that TikTok does not properly explain the removals.

The company relies heavily on AI to manage its massive video library, using algorithms to flag and remove content that breaks its rules against hate speech, harassment, and harmful misinformation. But the algorithms appear prone to mistakes: critics argue the AI struggles to understand context, leading to videos being wrongly flagged, with videos discussing racism or LGBTQ+ issues seemingly especially vulnerable. TikTok denies its systems are biased. A spokesperson said the company works constantly to improve its AI tools, aims for fairness and accuracy, is always refining its processes, and relies on ongoing human review for complex cases.
Experts worry about relying too much on AI. They note AI systems learn from past data. This data can reflect existing human biases. If the training data contains bias, the AI might copy it. This could result in unfair decisions. Calls for more transparency are increasing. Users and researchers want TikTok to explain how its moderation AI works. They want clearer appeals processes for creators whose content gets removed. Lawmakers are also paying attention. Some US politicians are questioning TikTok about its moderation practices. They want to understand potential impacts on free expression. This scrutiny adds pressure on TikTok. The company must address these bias concerns quickly. Trust in its platform depends on fair treatment for all users.
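The experts’ point about biased training data can be shown with a toy example. Everything below, including the keyword-voting “model” and the sample labels, is invented for illustration and has no connection to TikTok’s real moderation system:

```python
from collections import Counter, defaultdict

# Toy illustration of bias propagation: a model trained on biased
# moderation labels reproduces that bias at prediction time.
def train(labeled_posts):
    """Learn, per keyword, the majority moderation label seen in training."""
    votes = defaultdict(Counter)
    for text, label in labeled_posts:
        for word in text.lower().split():
            votes[word][label] += 1
    return {word: counts.most_common(1)[0][0] for word, counts in votes.items()}

def predict(model, text):
    """Flag a post for removal if any keyword was mostly removed before."""
    labels = [model.get(word, "keep") for word in text.lower().split()]
    return "remove" if "remove" in labels else "keep"

# Biased history: benign posts *discussing* racism were over-removed.
history = [
    ("discussing racism in media", "remove"),
    ("racism education resources", "remove"),
    ("cute cat video", "keep"),
]
model = train(history)
```

Because past moderators over-removed posts that merely mention the topic, the trained model removes new, harmless posts containing the same keyword: the bias in the labels becomes bias in the predictions, which is exactly the failure mode the experts describe.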
The US government recently gave official approval for Nvidia and AMD to export high-performance AI chips, including the Nvidia H200 series, to some Chinese customers. The policy shift came after authorities re-evaluated the ban on chip exports to China, and it has drawn close attention across the industry.
(Benjamin Girette)
At the World Economic Forum in Davos, Dario Amodei, CEO of the artificial intelligence company Anthropic, strongly criticized the move, likening the chip export policy to “selling nuclear weapons to North Korea.” Notably, Anthropic is not only an important technology partner of Nvidia but also a strategic investment target in which the latter has pledged to invest billions of dollars. Amodei warned that the United States’ lead in chip manufacturing could be eroded by these exports.

“We have been leading China in chip manufacturing capabilities for many years, and exporting these high-performance AI chips would be a strategic mistake,” Amodei said at the forum. He further emphasized that artificial intelligence has profound national security implications, and that future AI systems may amount to a “country of geniuses in a datacenter.”
This round of controversy highlights the emerging technological competition in the field of artificial intelligence. Although business cooperation and investment relationships still exist, industry leaders’ positions on national security and technological leadership issues have become increasingly clear. Analysts point out that this reflects that in the context of the intensifying global AI competition, corporate decision-making is gradually moving beyond traditional business considerations and shifting towards a more macro strategic security dimension.
Roger Luo said: This controversy highlights the profound contradiction in the global AI competition: while companies pursue commercial interests and technological leadership, they must also confront the security challenges brought about by technological diffusion.
Elon Musk recently announced that Tesla plans to restart its previously shelved third-generation AI chip project, Dojo3. Unlike before, the chip will no longer focus on training autonomous driving models but will shift toward “space AI computing.”
(Tesla’s phone)

The move comes just five months after Tesla suspended the Dojo project. After project leader Peter Bannon departed, Tesla disbanded the team responsible for the Dojo supercomputer, and about 20 former team members subsequently joined DensityAI, an emerging AI infrastructure company co-founded by former Dojo leader Ganesh Venkataramanan and former Tesla employees Bill Zhang and Ben Florin.
When the Dojo project was suspended, reports said Tesla planned to scale back its investment in self-developed chips, rely more on computing resources from partners such as Nvidia and AMD, and have Samsung handle chip manufacturing. Musk’s latest statement indicates the company’s strategy may be shifting again.
The AI5 chip currently used by Tesla is produced by TSMC and is mainly used to support autonomous driving functions and Optimus humanoid robots. Last summer, Tesla signed a $16.5 billion agreement with Samsung to produce the next generation AI6 chip, which will serve high-performance AI training in Tesla vehicles, Optimus robots, and data centers.
“AI7/Dojo3 will focus on space AI computing,” Musk said on Sunday, signaling that the restarted project will be given a more cutting-edge positioning. To achieve this goal, Tesla is rebuilding the team it disbanded several months ago, and Musk issued a direct recruiting invitation on the same occasion: “If you are interested in participating in the construction of the world’s most widely used chip, please feel free to send an email to AI_Chips@Tesla.com.”
Roger Luo said: Tesla’s restart of Dojo3 with a focus on space computing demonstrates its continuous exploration and rapid course correction in AI chip strategy. This is not only a significant shift in its technology roadmap, but also an early bet on frontier AI computing scenarios.
Google Announces New AI Ethics Board Amid Rising Tech Narratives Debate
The tech giant aims to address growing public discussion about technology’s future impact, noting that many news stories now describe tech futures as either perfect or terrible. Google says this oversimplifies complex issues, and the company wants more balanced conversations.
(Google and Utopian/Dystopian Narratives)
Google’s latest Gemini AI tool sparked intense reactions: supporters call it a step toward helpful AI assistants for everyone, while critics fear such tools could spread misinformation or cause job losses. Google insists it focuses on responsible development, pointing to strict safety testing before any release.
Recent press coverage often uses extreme language, with headlines predicting either total societal transformation or complete collapse. Google argues that reality sits between these extremes: AI already helps doctors and scientists daily, though the company admits challenges like bias in algorithms need constant work.
Tech leaders face pressure over AI’s direction. Some people worry about privacy and automated decisions; others see huge potential for solving climate or health problems. Google acknowledges both viewpoints and formed the new board, which includes ethicists, researchers, and policy specialists, to gather diverse expert opinions.
Google states that its goal remains developing useful technology, that ethical guidelines prevent harm, and that realistic public expectations are also needed. Past work from DeepMind shows AI tackling tough problems such as protein folding, though setbacks occur too and require careful fixes. Google commits to ongoing improvements and transparency, since public trust remains essential for future progress.
Samsung has launched a new AI interview simulation tool, available inside Samsung’s Global Goals app, that helps job seekers practice for real interviews. Many people find job interviews stressful; Samsung wants to make them easier, and the AI simulation mimics a real job interview experience.
(Samsung launches “AI interview simulation” feature, making job hunting easier)
Users can practice answering common interview questions. The AI asks questions just as a human interviewer might; users speak their answers out loud, and the AI listens, analyzes the responses, and provides immediate feedback on how the user performed, including pace and clarity as well as suggested areas for improvement.
The tool aims to build confidence: practicing with AI reduces interview anxiety and leaves job seekers feeling more prepared. They can practice anytime, anywhere, with no scheduling needed; just open the app and start a session. Samsung believes technology should empower people, and the feature supports career development in line with the company’s focus on practical AI.
The interview simulator uses advanced language processing to understand natural speech patterns. The technology focuses on helpful feedback and does not judge personality traits. Samsung emphasizes responsible AI use: privacy is a priority, users’ interview practice data stays secure, and Samsung does not share it.
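The pace-and-clarity feedback described above can be sketched with simple speech metrics. Samsung has not published how the app scores answers, so the words-per-minute thresholds and filler-word list below are assumptions made for illustration:

```python
# Hypothetical pace/clarity metrics; Samsung has not published how the
# app actually scores answers, so thresholds and fillers are assumptions.
FILLERS = {"um", "uh", "like"}

def interview_feedback(transcript: str, duration_seconds: float) -> dict:
    """Score a spoken answer on pace (words per minute) and filler words."""
    words = transcript.lower().split()
    wpm = len(words) / (duration_seconds / 60)
    feedback = {
        "wpm": round(wpm),
        "fillers": sum(1 for w in words if w in FILLERS),
    }
    # Conversational speech is typically around 120-160 words per minute.
    if wpm > 160:
        feedback["pace"] = "too fast; slow down"
    elif wpm < 120:
        feedback["pace"] = "too slow; add energy"
    else:
        feedback["pace"] = "good"
    return feedback
```

In the real feature the transcript and timing would come from on-device speech recognition; the point of the sketch is only that pace and filler counts are cheap, objective signals on which to build feedback.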
The Global Goals app is free to download. The new interview feature is available now. Samsung hopes this tool benefits many job seekers globally. It is part of their broader AI for All vision. They plan to add more features over time. The goal is simple: make job hunting less intimidating. Samsung continues exploring useful AI applications for daily life. This new tool reflects that commitment.