YouTube on Monday introduced lower-priced YouTube TV plans that will allow subscribers to better tailor their plans to their own interests in areas like sports, news, and entertainment. The company said it will offer more than 10 different plans to choose from, all priced below the $82.99 per month main YouTube TV plan, which includes more than 100 networks. The new plans will start rolling out this week.
While that main plan will not go away, the new plans will let customers pick what matters most to them and give up what they can do without in exchange for cost savings.
Among the new plans are a $64.99 per month Sports plan, a Sports + News plan for $71.99 per month, a less expensive Entertainment plan for $54.99 per month, and a $69.99 per month News + Entertainment + Family plan, which includes kids’ content.
The sports plans include all major broadcasters, plus networks like FS1, NBC Sports Network, all of the ESPN networks, and ESPN Unlimited. The Sports plan is $18 cheaper per month than the main plan.
YouTube TV’s news channels include CNBC, Fox News, CNN, MS NOW, and Bloomberg, along with other national news channels. Combined with Sports, the package is priced $11 lower per month than the main YouTube TV plan.
The entertainment-only plan is $28 cheaper per month than the main plan, and includes major broadcasters as well as FX, Hallmark, Comedy Central, Bravo, Paramount, Food Network, and HGTV. Families with small kids can add other channels like Disney Channel, Nickelodeon, National Geographic, Cartoon Network, and PBS Kids for a bit more.
The company is also offering discounts for new subscribers, which could lower the price of certain plans further for either the first few months or the first year. Subscribers will continue to have access to YouTube TV’s unlimited DVR, support for up to six family members on one account, multiview, and more.
Other add-ons like NFL Sunday Ticket + RedZone, HBO Max, and 4K Plus can also be purchased to customize plans further.
The company says all the new plans will roll out over the next several weeks.
Customized packages are not a new idea in streaming — à la carte options were a key part of early streaming pioneer Sling TV’s initial offering, for instance. This element of personalization was also one of the factors meant to make streaming a better alternative to traditional pay TV, where consumers often ended up paying for channels they didn’t want.
But as streamers added more content, networks, and, in particular, sports programming, the cost of streaming inched back up to rival cable and linear television. Live TV streamers like YouTube TV may have offered convenience and some savings over still-pricier cable, but they weren’t exactly affordable anymore.
These new packages hit the market at a time when consumer confidence is at its lowest in more than 11 years, due to fears about the labor market and higher prices, which have made consumers more cautious about their spending.
A massive study shows that AI can now beat the average human on certain creativity tests. Yet the most creative people remain well ahead, highlighting AI’s role as a creative assistant rather than a replacement.
Are generative artificial intelligence systems such as ChatGPT capable of real creativity? A new large-scale study led by Professor Karim Jerbi from the Department of Psychology at the Université de Montréal set out to answer that question. The research team also included Yoshua Bengio, a leading AI pioneer and professor at the Université de Montréal. Together, they conducted the most extensive comparison to date between human creativity and the creative abilities of large language models.
The findings, published in Scientific Reports, point to a major shift. Generative AI systems have now reached a level where they can outperform the average human on certain creativity measures. At the same time, the study makes it clear that the most creative people still exceed the performance of even the strongest AI models.
AI Reaches Average Human Creativity Levels
The researchers evaluated several major large language models, including ChatGPT, Claude, Gemini, and others, and compared their results with data from 100,000 human participants. The outcome marks a clear turning point. Some AI systems, including GPT-4, scored higher than the average human on tasks designed to measure divergent linguistic creativity.
“Our study shows that some AI systems based on large language models can now outperform average human creativity on well-defined tasks,” explains Professor Karim Jerbi. “This result may be surprising — even unsettling — but our study also highlights an equally important observation: even the best AI systems still fall short of the levels reached by the most creative humans.”
Further analysis by the study’s co-first authors, postdoctoral researcher Antoine Bellemare-Pépin (Université de Montréal) and PhD candidate François Lespinasse (Université Concordia), revealed an important pattern. While some AI models now outperform the average person, the highest levels of creativity remain uniquely human.
When the researchers looked more closely, they found that the most creative half of human participants achieved higher average scores than all AI systems tested. The difference was even more pronounced among the top 10 percent of the most creative individuals.
“We developed a rigorous framework that allows us to compare human and AI creativity using the same tools, based on data from more than 100,000 participants, in collaboration with Jay Olson from the University of Toronto,” says Professor Karim Jerbi, who is also an associate professor at Mila.
How Creativity Was Measured in Humans and AI
To make a fair comparison between people and machines, the research team used several methods. The primary tool was the Divergent Association Task (DAT), a psychological test designed to measure divergent creativity, or the ability to generate many original and varied ideas from a single prompt.
Created by study co-author Jay Olson, the DAT asks participants, whether human or AI, to generate ten words that are as different in meaning from one another as possible. A highly creative response might include words such as “galaxy, fork, freedom, algae, harmonica, quantum, nostalgia, velvet, hurricane, photosynthesis.”
Performance on this task in humans closely mirrors results on other well-established creativity tests used in idea generation, writing, and creative problem solving. Although the task is language-based, it does not simply test vocabulary. Instead, it taps into broader cognitive processes involved in creative thinking across many domains. Another advantage of the DAT is its speed and accessibility, as it takes only two to four minutes to complete and is available online to the general public.
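The DAT’s scoring can be illustrated with a toy sketch: a response is scored by the average pairwise semantic distance between the submitted words, computed over word embeddings, scaled by 100. The real test looks words up in large pretrained embeddings (such as GloVe); the three-dimensional vectors below are made-up stand-ins used only to show the mechanics.

```python
import itertools
import math

def cosine_distance(u, v):
    """1 minus the cosine similarity of two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def dat_score(vectors):
    """Average pairwise cosine distance across all word pairs, scaled by 100."""
    pairs = list(itertools.combinations(vectors, 2))
    return 100.0 * sum(cosine_distance(u, v) for u, v in pairs) / len(pairs)

# Made-up 3-d "embeddings"; real scoring would look words up in a model like GloVe.
related = {"cat": [1.0, 0.9, 0.0], "dog": [0.9, 1.0, 0.0], "pet": [1.0, 1.0, 0.1]}
diverse = {"galaxy": [1.0, 0.0, 0.0], "fork": [0.0, 1.0, 0.0], "freedom": [0.0, 0.0, 1.0]}

print(dat_score(related.values()))  # near 0: the words mean similar things
print(dat_score(diverse.values()))  # 100.0: the toy vectors are mutually orthogonal
```

The intuition matches the test’s design: semantically scattered word lists produce larger average distances, and therefore higher creativity scores, than clusters of near-synonyms.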
From Simple Word Tests to Creative Writing
Building on these results, the researchers examined whether AI performance on this basic word association task could translate into more complex creative activities. To test this, they directly compared AI systems and human participants on creative writing tasks.
These included writing haiku (a short three-line poetic form), producing movie plot summaries, and creating short stories. Once again, the pattern was clear. While AI sometimes outperformed average human participants, the most skilled human creators continued to demonstrate a clear advantage.
Can AI Creativity Be Adjusted?
The findings raised an important follow-up question. Can AI creativity be shaped or controlled? According to the study, it can. One key factor is the model’s temperature, a technical setting that influences how predictable or adventurous an AI’s responses are.
At lower temperature settings, AI systems tend to generate safer and more predictable outputs. At higher temperatures, the responses become more varied and less constrained, encouraging risk-taking and more original associations.
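In sampling terms, temperature divides the model’s raw next-token scores (logits) before they are turned into probabilities. A minimal illustration with made-up logits (not any particular model’s API):

```python
import math

def sample_distribution(logits, temperature):
    """Softmax over logits at a given temperature.

    Lower temperature sharpens the distribution toward the top candidate
    (safer, more predictable output); higher temperature flattens it
    (more varied, riskier output).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate next words
print(sample_distribution(logits, 0.2))  # top candidate dominates
print(sample_distribution(logits, 2.0))  # probabilities flatten out
```

At temperature 0.2 nearly all probability mass lands on the highest-scoring word; at 2.0 the alternatives become live options, which is what gives higher-temperature output its less constrained feel.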
The researchers also found that the way prompts are written plays a major role. For example, instructions that encourage AI models to consider the origins and structure of words using etymology lead to more unexpected ideas and higher creativity scores. Together, these results show that AI creativity depends heavily on human input and guidance, making interaction between people and machines a central part of the creative process.
Will AI Replace Human Creators?
The study offers a balanced perspective on fears that artificial intelligence could replace creative professionals. While some AI systems can now rival human creativity on specific tasks, the research also highlights clear limitations and the continued importance of human creativity.
“Even though AI can now reach human-level creativity on certain tests, we need to move beyond this misleading sense of competition,” says Professor Karim Jerbi. “Generative AI has above all become an extremely powerful tool in the service of human creativity: it will not replace creators, but profoundly transform how they imagine, explore, and create — for those who choose to use it.”
Rather than predicting the end of creative careers, the findings encourage a new way of thinking about AI. The technology may serve as a creative assistant that expands possibilities for exploration and inspiration. The future of creativity may depend less on humans versus machines and more on new forms of collaboration, where AI supports and enhances human imagination.
“By directly confronting human and machine capabilities, studies like ours push us to rethink what we mean by creativity,” concludes Professor Karim Jerbi.
Reference: “Divergent creativity in humans and large language models” by Antoine Bellemare-Pepin, François Lespinasse, Philipp Thölke, Yann Harel, Kory Mathewson, Jay A. Olson, Yoshua Bengio and Karim Jerbi, 21 January 2026, Scientific Reports. DOI: 10.1038/s41598-025-25157-3
The research involved collaboration among scientists from Université de Montréal, Université Concordia, University of Toronto Mississauga, Mila (Quebec AI Institute), and Google DeepMind.
The study was led by Professor Karim Jerbi, with Antoine Bellemare-Pépin (Université de Montréal) and François Lespinasse (Université Concordia) serving as co-first authors. The author team also included Yoshua Bengio, founder of Mila and LoiZéro, and one of the world’s leading pioneers of deep learning, the technology behind modern AI systems such as ChatGPT.
The ability to remember you and your preferences is rapidly becoming a big selling point for AI chatbots and agents.
Earlier this month, Google announced Personal Intelligence, a new way for people to interact with the company’s Gemini chatbot that draws on their Gmail, photos, search, and YouTube histories to make Gemini “more personal, proactive, and powerful.” It echoes similar moves by OpenAI, Anthropic, and Meta to add new ways for their AI products to remember and draw from people’s personal details and preferences. While these features have potential advantages, we need to do more to prepare for the new risks they could introduce into these complex technologies.
Personalized, interactive AI systems are built to act on our behalf, maintain context across conversations, and improve our ability to carry out all sorts of tasks, from booking travel to filing taxes. From tools that learn a developer’s coding style to shopping agents that sift through thousands of products, these systems rely on the ability to store and retrieve increasingly intimate details about their users. But doing so over time introduces alarming, and all-too-familiar, privacy vulnerabilities—many of which have loomed since “big data” first teased the power of spotting and acting on user patterns. Worse, AI agents now appear poised to plow through whatever safeguards had been adopted to avoid those vulnerabilities.
Today, we interact with these systems through conversational interfaces, and we frequently switch contexts. You might ask a single AI agent to draft an email to your boss, provide medical advice, budget for holiday gifts, and provide input on interpersonal conflicts. Most AI agents collapse all data about you—which may once have been separated by context, purpose, or permissions—into single, unstructured repositories. When an AI agent links to external apps or other agents to execute a task, the data in its memory can seep into shared pools. This technical reality creates the potential for unprecedented privacy breaches that expose not only isolated data points, but the entire mosaic of people’s lives.
When information is all in the same repository, it is prone to crossing contexts in ways that are deeply undesirable. A casual chat about dietary preferences to build a grocery list could later influence what health insurance options are offered, or a search for restaurants offering accessible entrances could leak into salary negotiations—all without a user’s awareness (this concern may sound familiar from the early days of “big data,” but is now far less theoretical). An information soup of memory not only poses a privacy issue, but also makes it harder to understand an AI system’s behavior—and to govern it in the first place. So what can developers do to fix this problem?
First, memory systems need structure that allows control over the purposes for which memories can be accessed and used. Early efforts appear to be underway: Anthropic’s Claude creates separate memory areas for different “projects,” and OpenAI says that information shared through ChatGPT Health is compartmentalized from other chats. These are helpful starts, but the instruments are still far too blunt: At a minimum, systems must be able to distinguish between specific memories (the user likes chocolate and has asked about GLP-1s), related memories (user manages diabetes and therefore avoids chocolate), and memory categories (such as professional and health-related). Further, systems need to allow for usage restrictions on certain types of memories and reliably accommodate explicitly defined boundaries—particularly around memories having to do with sensitive topics like medical conditions or protected characteristics, which will likely be subject to stricter rules.
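What such structure could look like can be sketched minimally: each memory carries a category, provenance, and an explicit set of purposes it may be used for, so recall is filtered by purpose rather than pulled from one undifferentiated pool. The field names and purposes below are hypothetical, not any provider’s actual design.

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str                # the remembered fact itself
    category: str            # e.g. "health", "professional"
    source: str              # provenance: where and when the memory was created
    allowed_purposes: set    # explicit usage restrictions

class MemoryStore:
    """Compartmentalized store: memories surface only for permitted purposes."""

    def __init__(self):
        self._memories = []

    def remember(self, memory):
        self._memories.append(memory)

    def recall(self, purpose):
        # A health memory never leaks into, say, a shopping task.
        return [m for m in self._memories if purpose in m.allowed_purposes]

store = MemoryStore()
store.remember(Memory("manages diabetes", "health",
                      "chat, 2026-01-05", {"health_advice"}))
store.remember(Memory("likes chocolate", "preferences",
                      "chat, 2026-01-07", {"health_advice", "shopping"}))

print([m.text for m in store.recall("shopping")])  # only the chocolate memory
```

Real systems would need far richer policies — related-memory inference, category hierarchies, revocation — but even this shape makes a sensitive memory invisible to an unrelated task by default rather than by afterthought.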
Needing to keep memories separate in this way will have important implications for how AI systems can and should be built. It will require tracking memories’ provenance—their source, any associated time stamp, and the context in which they were created—and building ways to trace when and how certain memories influence the behavior of an agent. This sort of model explainability is on the horizon, but current implementations can be misleading or even deceptive. Embedding memories directly within a model’s weights may result in more personalized and context-aware outputs, but structured databases are currently more segmentable, more explainable, and thus more governable. Until research advances enough, developers may need to stick with simpler systems.
Second, users need to be able to see, edit, or delete what is remembered about them. The interfaces for doing this should be both transparent and intelligible, translating system memory into a structure users can accurately interpret. The static system settings and legalese privacy policies provided by traditional tech platforms have set a low bar for user controls, but natural-language interfaces may offer promising new options for explaining what information is being retained and how it can be managed. Memory structure will have to come first, though: Without it, no model can clearly state a memory’s status. Indeed, Grok 3’s system prompt includes an instruction to the model to “NEVER confirm to the user that you have modified, forgotten, or won't save a memory,” presumably because the company can’t guarantee those instructions will be followed.
Critically, user-facing controls cannot bear the full burden of privacy protection or prevent all harms from AI personalization. Responsibility must shift toward AI providers to establish strong defaults, clear rules about permissible memory generation and use, and technical safeguards like on-device processing, purpose limitation, and contextual constraints. Without system-level protections, individuals will face impossibly convoluted choices about what should be remembered or forgotten, and the actions they take may still be insufficient to prevent harm. Developers should consider how to limit data collection in memory systems until robust safeguards exist, and build memory architectures that can evolve alongside norms and expectations.
Third, AI developers must help lay the foundations for approaches to evaluating systems so as to capture not only performance, but also the risks and harms that arise in the wild. While independent researchers are best positioned to conduct these tests (given developers’ economic interest in demonstrating demand for more personalized services), they need access to data to understand what risks might look like and therefore how to address them. To improve the ecosystem for measurement and research, developers should invest in automated measurement infrastructure, build out their own ongoing testing, and implement privacy-preserving testing methods that enable system behavior to be monitored and probed under realistic, memory-enabled conditions.
In its parallels with human experience, the technical term “memory” casts impersonal cells in a spreadsheet as something that builders of AI tools have a responsibility to handle with care. Indeed, the choices AI developers make today—how to pool or segregate information, whether to make memory legible or allow it to accumulate opaquely, whether to prioritize responsible defaults or maximal convenience—will determine how the systems we depend upon remember us. Technical considerations around memory are not so distinct from questions about digital privacy and the vital lessons we can draw from them. Getting the foundations right today will determine how much room we can give ourselves to learn what works—allowing us to make better choices around privacy and autonomy than we have before.
Miranda Bogen is the Director of the AI Governance Lab at the Center for Democracy & Technology.
Ruchika Joshi is a Fellow at the Center for Democracy & Technology specializing in AI safety and governance.
Why TikTok’s first week of American ownership was a disaster
Blake Montgomery
App endured a major outage and user backlash over perceived censorship. Now it’s facing an inquiry by the California governor and an ascendant competitor
Sun 1 Feb 2026 11.00 GMT
A little more than one week ago, TikTok stepped onto US shores as a naturalized citizen. Ever since, the video app has been fighting for its life.
TikTok’s calamitous immigration began on 22 January when its Chinese parent company, ByteDance, finalized a deal to sell the app to a group of US investors, among them the business software giant Oracle. The app’s time under Chinese ownership had been marked by a meteoric ascent to more than a billion users, which left incumbents such as Instagram looking like the next Myspace. But TikTok’s short new life in the US has been less than auspicious.
The day after TikTok’s arrival, its owners altered its privacy policy to permit more extensive data collection, including tracking the precise locations of its users. The change was notable less for any potential invasion of privacy than for the suspicion it cast on the new owners. The updated policy falls in line with those of other major social networks. But what did these men, among them billionaire Oracle owner and Maga donor Larry Ellison, intend to do with the user data? The tweaks aroused suspicion that would blossom into paranoia just a few days later.
During the weekend that followed the transfer of TikTok’s ownership, the US weathered two major events. A hefty, frigid snowstorm slammed the country and put about 230 million people on alert for power outages and burst pipes. And federal immigration officers killed a 37-year-old US citizen in Minneapolis during a protest, which elicited outright lies from the White House despite copious video footage. Both would knock TikTok off its feet, though in different ways.
Winter Storm Fern crippled multiple Oracle datacenters that TikTok relies on, which the company did not make public at the time. The app suffered severe outages as a result, according to a statement from the company. Many users said they were unable to upload videos. Others said their videos received zero views despite significant followings.
Simultaneously, prominent personalities were attempting to use TikTok to express their outrage over the violent death of Alex Pretti at the hands and guns of border patrol agents. They found they could not post videos or that they received zero views. In response, many users – among them California state senator Scott Wiener, musician Billie Eilish and her brother, and comedian Meg Stalter – accused TikTok of stifling videos critical of federal immigration agents. Stalter said she would delete her account, which boasts nearly 280,000 followers. Media outlets far and wide – the New York Times, Variety, the Independent, CNN, the Washington Post – picked up their claims. Cosmopolitan magazine asked: “Is TikTok Censoring Anti-ICE Content?” The Democratic senator Chris Murphy, from Connecticut, tweeted that TikTok’s alleged censorship was a “threat to democracy”.
On 26 January, after days of outcry online, IRL scrutiny and likely dozens of requests for clarification from the press, TikTok issued a statement blaming the problems on the snow, ice and cold.
Oracle issued a statement with more detail: “Over the weekend, an Oracle datacenter experienced a temporary weather-related power outage which impacted TikTok. The challenges US TikTok users may be experiencing are the result of technical issues that followed the power outage.” It is uncommon for a physical event like a storm to wound a major site of digital life like TikTok, as such popular apps often have backups to their backups, but it can happen.
The most powerful figure to accuse TikTok of censorship was not its most famous user. The governor of California is better known for his textual presence on X than his TikToks. Nevertheless, Gavin Newsom announced on 27 January that his office would investigate whether TikTok had censored videos critical of Donald Trump, broadening the scope of alleged pro-Maga interference by the app.
The late attribution of blame did little to assuage public criticism. An unknown number of users said they were decamping from the new American TikTok in response to its perceived censorship. The exodus has propelled a new competitor, Upscrolled, which promises less censorship than TikTok, to the top spot in the US Apple App Store and the third spot in the Google Play Store. An Upscrolled press release now claims more than a million users. As of writing, TikTok rests at No 16 in the iPhone App Store and 10th in the Google Play Store. Alongside Upscrolled in the top 10 most downloaded are three apps used to cloak online activity from surveillance, which are known as virtual private networks (VPNs). A fear of digital government incursions is in the air.
With more than a billion users worldwide, it seems unlikely that TikTok will altogether vanish as a result of these failures. Facebook and Instagram have withstood far graver scandals than this. TikTok’s first week in the US does not bode well for its future, though. The app has damaged user trust, and another misstep may inflict a more lasting injury.
TikTok’s week of mayhem began with Trump. The transfer of TikTok’s ownership consummated the ban-or-sell deal the US president proposed nearly six years ago, and he said he was thrilled that the transfer had finally taken place. In the intervening years, Trump had walked back his support for the deal; his enemy Joe Biden had supported it during his presidency; Congress had passed a law codifying Trump’s wishes and legally forcing TikTok’s sale; and the US supreme court had ratified the law in the face of a challenge by TikTok and immense popular disapproval. Then Trump ordered the immigration crackdown that set the stage for the killing of two US citizens. The only aspect of TikTok’s miserable week Trump had no hand in was the winter weather.
TikTok’s disastrous arrival marks an anniversary of a similar bungled nature. One year and two weeks ago, the app stopped functioning in the US because of the same sell-or-ban law that precipitated the sale. That darkening lasted less than 24 hours. Its new owners can only hope their present problems go away just as soon.