Sunday, 8 March 2026

Anthropic–DOD fight ends with deal for OpenAI

 

the US Pentagon building

Douglas Rissing/Getty Images

Yesterday ended with the US Department of Defense agreeing to let an AI company access its classified network with some guardrails in place, but that company was OpenAI, not Anthropic.

How we got here

Even before the Pentagon’s 5:01pm ET deadline passed, at which point Anthropic would lose its $200 million defense contract unless it dropped its requirement that its AI model Claude not be used for mass domestic surveillance or fully autonomous weapons, President Trump posted on Truth Social that the government would “IMMEDIATELY CEASE” using Anthropic’s technology and “not do business with them again!”

After the deadline passed, Defense Secretary Pete Hegseth said the DOD would label Anthropic a “supply chain risk”—a label typically stuck on businesses from adversarial countries that bars companies with US government contracts from doing business with them.

And then…around 10pm ET last night, Sam Altman posted on X that OpenAI had “reached an agreement with the Department of War to deploy our models in their classified network.”

Earlier in the day, Altman said OpenAI shared Anthropic’s “red lines,” and his post suggested he’d somehow gotten the contract while maintaining them. It identified “prohibitions on domestic mass surveillance” and “human responsibility for the use of force” as two of the company’s bedrock principles and went on to say that the Defense Department agreed and “we put them into our agreement.”

The fight was about who gets to make the rules

The US military has developed plenty of advanced technology, like GPS, which gave it control over how that tech was used and disseminated. But it didn’t lead AI development. Private companies were better positioned to raise and spend billions of dollars to move quickly and amass specialized talent—leaving the government reliant on public–private partnerships, which bring complications:

  • To turn a profit, tech companies must focus on commercial applications that bring in cash.
  • Government contracts are an important money-maker, too. But if a company gets a bad rap for letting its tech be used in dicey situations, it risks losing its commercial customers.

Bottom line: The Defense Department has said it wants to be an “AI-first” fighting force, and all the major AI companies are competing for lucrative contracts as the technology evolves much quicker than any regulations on its use.

Friday, 6 March 2026

Meta sold 7 million AI glasses in 2025: now the privacy problem has nowhere to hide


Posted by Alex Morgan, February 28, 2026

Mark Zuckerberg sat front row at Prada’s Fall/Winter 2026 show in Milan on February 26. Everyone assumed luxury AI glasses were coming. They missed the real story: Meta sold 7 million AI glasses in 2025 — more than triple the prior year — and now faces a problem no fashion partnership can solve.

The company accidentally created the first mainstream wearable AI product. And the faster these spread, the harder it gets to pretend they’re just glasses.

Meta’s AI glasses are selling faster than anyone expected — and that’s the problem

The 7 million figure — combining Ray-Ban Meta and Oakley Meta units — validates the smart glasses momentum on display at CES 2026, where every major brand showed AI eyewear prototypes. That’s up from 2 million in 2024, according to EssilorLuxottica, the parent company that manufactures both lines. Not a niche experiment anymore.

But success creates visibility. Visibility creates backlash.

Meta reportedly paused overseas expansion in early 2026 because U.S. demand was outstripping supply. The Ray-Ban success follows Meta’s AI infrastructure bets, which are reshaping how the company approaches consumer hardware. The more people wear these, the more non-wearers feel surveilled. And unlike a phone camera — which you point deliberately — glasses are always on, always facing forward, always recording potential.

The math is simple: 7 million wearers means hundreds of millions of unwitting subjects.

The Prada play: Meta’s betting luxury branding can outrun privacy fears

Zuckerberg’s Milan appearance wasn’t tourism. He sat beside Lorenzo Bertelli, Prada’s chief merchandising officer, at a show where the brand showcased its renewed 10-year licensing deal with EssilorLuxottica — running through December 31, 2030, with an option to extend through 2035. That infrastructure doesn’t get built for a one-off collab.

Meta’s current lineup establishes a pricing ladder: Ray-Ban Meta Gen 2 at $459, the new Display model at $799. The Display version — Meta’s first glasses with a heads-up interface — shipped in late 2025 via U.S. reservation-only sales. It includes the Meta Neural Band for gesture control. Premium positioning, premium features.

The implicit bet: people who pay $1,200 for Prada sunglasses won’t get called “creepy tech spies.”

Meta’s luxury push comes as Apple’s rumored smart glasses threaten to redefine the category with privacy-first design. Meta held dominant market share in 2025 — one analysis pegs the broader AI smart glasses market at $2.9 billion, with Meta leading sales by wide margins over Huawei, ByteDance, Google, and others launching 2026 models. Dominance makes you a bigger target.

What Meta won’t say: the privacy problem has no technical solution

Here’s what we don’t know: specific 2026 privacy incidents with dates and locations. No reported bans. No viral confrontations. No regulatory crackdowns yet.

That doesn’t mean acceptance. It means the installed base is still small enough to avoid critical mass backlash.

Meta can add brighter LED indicators, louder shutter sounds, facial recognition opt-outs. None of it solves the core issue. You can’t un-film someone who didn’t consent. The fear isn’t hypothetical — AI glasses tracking private moments already sparked backlash in early tests. Consumers are ripping out Ring doorbells over surveillance anxiety. Prada branding won’t change that math.

The Prada collaboration could accelerate the tipping point. Luxury buyers expect social acceptance, not sidewalk confrontations. But fashion credibility can’t neutralize “surveillance gadget” stigma when the person being filmed didn’t sign up.

Meta sold 7 million AI glasses in 2025 by making them look normal. Now it’s betting Prada can make them look aspirational. But the faster these spread, the harder it gets to pretend they’re just glasses.

Wednesday, 4 March 2026

The winner for best WBD takeover

 

Netflix and Paramount logos encroaching on a Warner Bros logo

Niv Bavarsky

Yesterday, Warner Bros. Discovery had a busier day than Barbie when she entered the human world—and by the end of it, the famed studio had gone from having a deal to be acquired by Netflix to having new buyer Paramount Skydance lined up.

It happened faster than a Twister: The ending of the monthslong corporate saga came together in just a few hours. In the afternoon, WBD’s board announced that Paramount Skydance’s most recent hostile takeover offer was superior to the deal it had previously struck with Netflix, starting a 4-day clock for the streaming giant to come up with a counteroffer. But by early in the evening, Netflix walked away instead, refusing to up its bid and making Paramount’s offer the winner.

Here’s what it took:

  • Paramount’s offer came in at $31 per share, compared to Netflix’s $27.75/share for the WBD streaming and studio assets.
  • The overall value of Paramount’s offer was ~$111 billion, compared with Netflix’s ~$83 billion.

Here’s looking at you, Ted

Explaining the decision to tell WBD to get on the plane with Paramount, Netflix’s Ted Sarandos and his co-CEO Greg Peters released a statement saying Kenough was Kenough: “This transaction was always a ‘nice to have’ at the right price, not a ‘must have’ at any price.”

Plus, Netflix can expect to receive a $2.8 billion breakup fee (covered by Paramount, per its offer to WBD).

Netflix investors don’t seem bummed out. The stock jumped 10% in after-hours trading following the announcement.

The credits aren’t rolling yet

Though Paramount is now the winning bidder, the deal will need to be cleared by antitrust regulators in the US and abroad, a process that will likely take at least several months.

Still, Paramount asserted last month that its offer provided “a more certain, expedited path to completion” than Netflix’s. Both Larry Ellison, who is bankrolling the Paramount deal, and his son, David Ellison, the Paramount CEO, have personal relationships with President Trump.

Big picture: If the deal goes through, the combined Paramount Skydance Warner Bros. Discovery will control two streaming platforms, two major news networks, and two Hollywood studios.

Tuesday, 3 March 2026

In banks you trust?

 

Hand holding phone with call from bank

Illustration: Anna Kim, Photo: Adobe Stock

Nearly 20 years after the financial crisis, a lot of people trust banks again—and not just because the one in Industry has made its real-life counterparts seem ethical by comparison. According to Gallup:

  • Last year, 63% of people across the 25 countries most affected by the crisis said they had confidence in their financial institutions.
  • That’s up from 40% in 2009. It’s also higher than the 57% of respondents who were confident in banks before the crisis.

In fact, financial institutions now have more trust than national governments, judicial systems, and elections. Gallup did not explain what’s eroding trust in those institutions, but we assume it’s a result of *gestures wildly at everything*.

Monday, 2 March 2026

Burger King tests AI that tracks workers’ manners

 

Burger King employee working drive-thru

Burger King

Flipping patties? More like flipping off Patty: Burger King is piloting an OpenAI-powered chatbot called Patty that will live in employees’ headsets and tattle if it thinks workers aren’t being friendly enough, the chain announced yesterday.

“This is all meant to be a coaching tool,” Burger King’s chief digital officer told The Verge:

  • The AI is trained to recognize words and phrases like “welcome to Burger King,” “please,” and “thank you.”
  • Patty will then grade a location’s friendliness levels upon a manager’s request.

Beyond pleasantry patrol, Patty is the voice of an AI system that Burger King plans to launch widely by year’s end, because two patties on the grill are worth one in the ear, or something. The virtual assistant can answer employees’ questions (e.g., how to clean the shake-maker) and flag out-of-stock items or out-of-order machines.

If only Burger King listened to public outcry over Patty’s launch like it listened to public outcry over its burger quality: The chain said yesterday that it would improve its Whopper for the first time in almost a decade after customers complained for years about it falling apart.

Zoom out: Roughly 70% to 80% of large US employers use some type of employee monitoring as of last year, following a pandemic-era boom in demand for worker surveillance software.

Sunday, 1 March 2026

I hacked ChatGPT and Google's AI – and it only took 20 minutes

Thomas Germain

(Credit: Serenity Strull/ Madeline Jett)


It's official. I can eat more hot dogs than any tech journalist on Earth. At least, that's what ChatGPT and Google have been telling anyone who asks. I found a way to make AI tell you lies – and I'm not the only one.

Perhaps you've heard that AI chatbots make things up sometimes. That's a problem. But there's a new issue few people know about, one that could have serious consequences for your ability to find accurate information and even your safety. A growing number of people have figured out a trick to make AI tools tell you almost whatever they want. It's so easy a child could do it.

As you read this, this ploy is manipulating what the world's leading AIs say about topics as serious as health and personal finances. The biased information could mean people make bad decisions on just about anything – voting, which plumber you should hire, medical questions, you name it.

To demonstrate it, I pulled the dumbest stunt of my career to prove (I hope) a much more serious point: I made ChatGPT, Google's AI search tools and Gemini tell users I'm really, really good at eating hot dogs. Below, I'll explain how I did it, and with any luck, the tech giants will address this problem before someone gets hurt.

It turns out changing the answers AI tools give other people can be as easy as writing a single, well-crafted blog post almost anywhere online. The trick exploits weaknesses in the systems built into chatbots, and it's harder to pull off in some cases, depending on the subject matter. But with a little effort, you can make the hack even more effective. I reviewed dozens of examples where AI tools are being coerced into promoting businesses and spreading misinformation. Data suggests it's happening on a massive scale.

"It's easy to trick AI chatbots, much easier than it was to trick Google two or three years ago," says Lily Ray, vice president of search engine optimisation (SEO) strategy and research at Amsive, a marketing agency. "AI companies are moving faster than their ability to regulate the accuracy of the answers. I think it's dangerous."

A Google spokesperson says the AI built into the top of Google Search uses ranking systems that "keep results 99% spam-free". Google says it is aware that people are trying to game its systems and it's actively trying to address it. OpenAI also says it takes steps to disrupt and expose efforts to covertly influence its tools. Both companies also say they let users know that their tools "can make mistakes".

But for now, the problem isn't close to being solved. "They're going full steam ahead to figure out how to wring a profit out of this stuff," says Cooper Quintin, a senior staff technologist at the Electronic Frontier Foundation, a digital rights advocacy group. "There are countless ways to abuse this, scamming people, destroying somebody's reputation, you could even trick people into physical harm."

A 'Renaissance' for spam

When you talk to chatbots, you often get information that's built into large language models, the underlying technology behind the AI. This is based on the data used to train the model. But some AI tools will search the internet when you ask for details they don't have, though it isn't always clear when they're doing it. In those cases, experts say the AIs are more susceptible. That's how I targeted my attack.

Keeping Tabs


Thomas Germain is a senior technology journalist at the BBC. He writes the column Keeping Tabs and co-hosts the podcast The Interface. His work uncovers the hidden systems that run your digital life, and how you can live better inside them.

I spent 20 minutes writing an article on my personal website titled "The best tech journalists at eating hot dogs". Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn't exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission, including Drew Harwell at the Washington Post and Nicky Woolf, who co-hosts my podcast. (Want to hear more about this story? Check out episode 2 of The Interface, the BBC's new tech podcast.)

Less than 24 hours later, the world's leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn't fooled.

Sometimes, the chatbots noted this might be a joke. I updated my article to say "this is not satire". For a while after, the AIs seemed to take it more seriously. I did another test with a made-up list of the greatest hula-hooping traffic cops. Last time I checked, chatbots were still singing the praises of Officer Maria "The Spinner" Rodriguez.
I made Google tell the world I'm a champion hot-dog-eater, but people use this trick to manipulate AI responses on much more serious questions. (Credit: Thomas Germain/Google/BBC)

I asked multiple times to see how responses changed and had other people do the same. Gemini didn't bother to say where it got the information. All the other AIs linked to my article, though they rarely mentioned I was the only source for this subject on the whole internet. (OpenAI says ChatGPT always includes links when it searches the web so you can investigate the source.)

"Anybody can do this. It's stupid, it feels like there are no guardrails there," says Harpreet Chatha, who runs the SEO consultancy Harps Digital. "You can make an article on your own website, 'the best waterproof shoes for 2026'. You just put your own brand in number one and other brands two through six, and your page is likely to be cited within Google and within ChatGPT."

People have used hacks and loopholes to abuse search engines for decades. Google has sophisticated protections in place, and the company says the accuracy of AI Overviews is on par with other search features it introduced years ago. But experts say AI tools have undone a lot of the tech industry's work to keep people safe. These AI tricks are so basic they're reminiscent of the early 2000s, before Google had even introduced a web spam team, Ray says. "We're in a bit of a Renaissance for spammers."

Not only is AI easier to fool, but experts worry that users are more likely to fall for it. With traditional search results you had to go to a website to get the information. "When you have to actually visit a link, people engage in a little more critical thought," says Quintin. "If I go to your website and it says you're the best journalist ever, I might think, 'well yeah, he's biased'." But with AI, the information usually looks like it's coming straight from the tech company.

Even when AI tools provide sources, people are far less likely to check them out than they were with old-school search results. For example, a recent study found people are 58% less likely to click on a link when an AI Overview shows up at the top of Google Search.

"In the race to get ahead, the race for profits and the race for revenue, our safety, and the safety of people in general, is being compromised," Chatha says. OpenAI and Google say they take safety seriously and are working to address these problems.

Your money or your life

This issue isn't limited to hot dogs. Chatha has been researching how companies are manipulating chatbot results on much more serious questions. He showed me the AI results when you ask for reviews of a specific brand of cannabis gummies. Google's AI Overviews pulled information written by the company full of false claims, such as the product "is free from side effects and therefore safe in every respect". (In reality, these products have known side effects and can be risky if you take certain medications, and experts warn about contamination in unregulated markets.)

If you want something more effective than a blog post, you can pay to get your material on more reputable websites. Chatha sent me Google's AI results for "best hair transplant clinics in Turkey" and "the best gold IRA companies", which help you invest in gold for retirement accounts. The information came from press releases published online by paid-for distribution services and sponsored advertising content on news sites.

You can use the same hacks to spread lies and misinformation. To prove it, Ray published a blog post about a fake update to the Google Search algorithm that was finalised "between slices of leftover pizza". Soon, ChatGPT and Google were spitting out her story, complete with the pizza. Ray says she subsequently took down the post and "deindexed" it to stop the misinformation from spreading.
All over the world, people are using simple methods to make Google and OpenAI spread biased information. The consequences could be dire. (Credit: Serenity Strull/BBC)

Google's own analytics tool says a lot of people search for "the best hair transplant clinics in Turkey" and "the best gold IRA companies". But a Google spokesperson pointed out that most of the examples I shared "are extremely uncommon searches that don't reflect the normal user experience".

But Ray says that's the whole point. Google itself says 15% of the searches it sees every day are completely new. And according to Google, AI is encouraging people to ask more specific questions. Spammers are taking advantage of this.

Google says there may not be a lot of good information for uncommon or nonsensical searches, and these "data voids" can lead to low quality results. A spokesperson says Google is working to stop AI Overviews showing up in these cases.

Searching for solutions

Experts say there are solutions to these issues. The easiest step is more prominent disclaimers.

AI tools could also be more explicit about where they're getting their information. If, for example, the facts are coming from a press release, or if there is only one source that says I'm a hot dog champion, the AI should probably let you know, Ray says.

Google and OpenAI say they're working on the problem, but right now you need to protect yourself.


The first step is to think about what questions you're asking. Chatbots are good for common knowledge questions, like "what were Sigmund Freud's most famous theories" or "who won World War II". But there's a danger zone with subjects that feel like established facts but could actually be contested or time sensitive. AI probably isn't a great tool for things like medical guidelines, legal rules or details about local businesses, for example.

If you want things like product recommendations or details about something with real consequences, understand that AI tools can be tricked or can simply get things wrong. Look for follow-up information. Is the AI citing sources? How many? Who wrote them?

Most importantly, consider the confidence problem. AI tools deliver lies with the same authoritative tone as facts. In the past, search engines forced you to evaluate information yourself. Now, AI wants to do it for you. Don't let your critical thinking slip away.

"It feels really easy with AI to just take things at face value," Ray says. "You have to still be a good citizen of the internet and verify things."


Saturday, 28 February 2026

Good Luck Banning Smart Glasses

Smart glasses bans are reasonable, important, and damn near impossible.
By James Pero, published February 18, 2026


© Raymond Wong / Gizmodo

If there’s one thing that has people concerned about the growing wave of smart glasses, it’s privacy. Sure, we’ve had cameras at our sides for ages now, but never on our faces in a discreet form factor that makes it hard (sometimes impossible) to recognize when someone is recording. Because of that potential shift, people are reacting accordingly to protect spaces that should remain at least relatively private. By that, I mean they’re restricting smart glasses or just banning them outright.


The latest ban comes courtesy of cruise line Royal Caribbean, which now prohibits the use of any glasses that can record video and take pictures in various parts of its ships. Altogether, the partial ban sounds pretty reasonable, disallowing smart glasses from being used in “casinos, spa service areas, restrooms, locker rooms, medical facilities, security screening locations, youth facilities, during back-of-house tours, in crew areas, or anywhere there is a reasonable expectation of guest and crew privacy.” Basically, just don’t be an a**hole when you use smart glasses, and you’re good.


It’s reasonable, for sure, and also completely unenforceable.

The thing about smart glasses nowadays is that they’re hard to identify. As someone who’s been wearing Ray-Ban Meta AI glasses consistently for a couple of years now, I’m fairly certain that almost no one recognizes that I have them on. They’re about the size of regular glasses, the cameras blend in pretty seamlessly, and even with Meta’s safety measures, recording is easy to miss.

To let people know you’re taking a picture or video, Meta’s AI glasses have an LED indicator (a green light) on the outside that turns on the minute you begin recording. I suppose if you know what to look for on a pair of smart glasses, it’s a semi-apparent sign that someone is recording, but if you’re unaware of its existence (like many are), it’s easy to overlook. That’s not even counting the fact that it can be obscured with a little work and $60.
© Raymond Wong / Gizmodo

Then there’s the matter of enforcement. If smart glasses are difficult to spot (and they are), who is going to be responsible for actually sniffing them out and making sure they’re being used appropriately? If you’re banking on an underpaid worker on a cruise liner going out of their way to stop new wave glassholes from recording discreetly in inappropriate locations, I would adjust your expectations ASAP. Royal Caribbean’s threat is that they’ll confiscate smart glasses used improperly, but that sounds like a whole other can of worms to me, especially if anyone caught recording isn’t keen on handing their expensive Ray-Bans over. And what if they have prescription lenses? Would you deprive a poor astigmatist of his reading spectacles?


Cruise liners aren’t the only entities trying to ban smart glasses, either. Recently, the College Board banned wearing smart glasses while taking the SATs, which is another no-brainer. Smart glasses, especially those with AI and internet access, would be an adept cheating tool and could be used to get answers to all sorts of stuff quietly and quickly. That ban feels even more hopeless, though, if I’m being honest. As I pointed out recently, smart glasses that could be useful for cheating, like those made by Even Realities, are even harder to spot since they don’t have cameras or speakers and pass for normal glasses.

To put it mildly, the whole thing is a bit of a mess. Google Glass may have been partially impeded by bans way back in 2013 when some bars, restaurants, and casinos basically outlawed them, but that was a different world and a different product. The fact of the matter is that banning today’s smart glasses is going to take effort and consistency. And those traits, my friend, aren’t always easy to come by.

Friday, 27 February 2026

Grandson of Reese’s inventor accuses Hershey of hurting the brand | AP News


Grandson of the inventor of Reese’s Peanut Butter Cups accuses Hershey of cutting corners

By DEE-ANN DURBIN
Updated 5:06 PM GMT-8, February 18, 2026




The grandson of the inventor of Reese’s Peanut Butter Cups has lashed out at The Hershey Co., accusing the candy company of hurting the Reese’s brand by shifting to cheaper ingredients in many products.

Hershey acknowledges some recipe changes but said Wednesday that it was trying to meet consumer demand for innovation. High cocoa prices also have led Hershey and other manufacturers to experiment with using less chocolate in recent years.

Brad Reese, 70, said in a Feb. 14 letter to Hershey’s corporate brand manager that for multiple Reese’s products, the company replaced milk chocolate with compound coatings and peanut butter with peanut crème.

“How does The Hershey Co. continue to position Reese’s as its flagship brand, a symbol of trust, quality and leadership, while quietly replacing the very ingredients (Milk Chocolate + Peanut Butter) that built Reese’s trust in the first place?” Reese wrote in the letter, which he posted on his LinkedIn profile.


He is the grandson of H.B. Reese, who spent two years at Hershey before forming his own candy company in 1919. H.B. Reese invented Reese’s Peanut Butter Cups in 1928; his six sons eventually sold his company to Hershey in 1963.

Thursday, 26 February 2026

Discord will restrict accounts until you confirm age with ID, scans


Discord will restrict your account next month unless you scan ID or face

In a controversial move, Discord today announced that it will restrict all user accounts globally unless users verify their age with either a face scan or an ID scan.

Discord’s updated privacy approach is “teen-by-default,” the company said in an announcement today. This means that, starting in March, all users on Discord will have their accounts partially restricted to the experience you’d get if you were under the age of 13.

Discord breaks down the restrictions as follows:

  • Content Filters: Discord users will need to be age-assured as adults in order to unblur sensitive content or turn off the setting.
  • Age-gated Spaces: Only users who are age-assured as adults will be able to access age-restricted channels, servers, and app commands.
  • Message Request Inbox: Direct messages from people a user may not know are routed to a separate inbox by default, and access to modify this setting is limited to age-assured adult users.
  • Friend Request Alerts: People will receive warning prompts for friend requests from users they may not know.
  • Stage Restrictions: Only age-assured adults may speak on stage in servers.

The only way to get around this would be to verify your age, which Discord says can be accomplished in one of two ways. The first is to “submit a form of identification” to Discord vendors (i.e. scan your physical ID), or to use “facial age estimation.” Discord says that the latter process happens fully on-device, as “video selfies for facial age estimation never leave a user’s device.” For ID scans, Discord says that documents “are deleted quickly.”

Wednesday, 25 February 2026

Drinking at the office

 

Illustration of a blue beer with Chase's logo in the foam

Nick Iluzada

If you want to get into NYC’s most exclusive hotspot, you’ll have to work—not to get past an intimidating bouncer, but to crunch numbers for Jamie Dimon. Morgan’s, the employee-only English pub on the 13th floor of JPMorgan’s new Park Avenue headquarters, has become so popular that younger staffers are making reservations weeks out to score one of its coveted tables, the Wall Street Journal reports.

You don’t need an analyst to spot the problem with the pub’s fundamentals: ~10,000 people work in the HQ and Morgan’s has 55 seats, per the WSJ. And even though it doesn’t permit day drinking, those seats are in demand. To alleviate the crush, the bank changed its policy so reservations aren’t required—but you’ll probably still need one (and an employee friend) to cross “splitting the G” at a table surrounded by finance titans off your bucket list.

Tuesday, 24 February 2026

Nearly a thousand Google workers sign letter urging company to divest from ICE, CBP

Published Sat, Feb 7, 2026, 10:43 AM EST | Updated Sat, Feb 7, 2026, 4:45 PM EST

Laya Neelakandan
Key Points
Hundreds of Google workers signed an open letter urging the company to cut its ties with ICE and CBP after rising violence.
The letter also calls on the company to institute protections for its workers.
It adds to mounting pressure on tech companies to speak out against federal immigration policies.


The logo for Google LLC is seen at the Google Store Chelsea in Manhattan, New York, Nov. 17, 2021.
Andrew Kelly | Reuters


More than 900 Google workers have signed an open letter condemning recent actions by U.S. Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP), urging the tech giant to disclose its dealings with the agencies and divest from them.

The letter, citing recent ICE killings of Keith Porter, Renee Good, and Alex Pretti, said that the employees are “appalled by the violence” and “horrified” by Google’s part in it.


“Google is powering this campaign of surveillance, violence, and repression,” the letter reads.

It goes on to say that Google Cloud is aiding CBP surveillance and powering Palantir’s ImmigrationOS system, which is used by ICE. The letter states that Google’s generative artificial intelligence is used by CBP and that the Google Play Store has blocked ICE-tracking apps.

The letter also quotes a social media post by Google Chief Scientist Jeff Dean from early January, who wrote, “We all bear a collective responsibility to speak up and not be silent when we see things like the events of the last week.”

“We are vehemently opposed to Google’s partnerships with DHS, CBP, and ICE,” the employees wrote. “We consider it our leadership’s ethical and policy-bound responsibility to disclose all contracts and collaboration with CBP and ICE, and to divest from these partnerships.”

The letter calls on Google to acknowledge the danger that workers face from ICE, host an emergency internal Q&A on the company’s DHS and military contracts, implement safety measures to protect workers — such as flexible work-from-home policies and immigration support — and reveal its ties with the government agencies to help all involved determine where the company will draw a line.


“As workers of conscience, we demand that our leadership end our backslide into contracting for governments enacting violence against civilians,” the letter reads. “Google is now a prominent node in a shameful lineage of private companies profiting from violent state repression. We must use this moment to come together as a Googler community and demand an end to this disgraceful use of our labor.”

Google did not immediately respond to a CNBC request for comment.

The letter comes as employees place mounting pressure on tech CEOs to speak out against ICE. Just two weeks prior, employees from Amazon, Spotify, Meta and more wrote a similar letter demanding ICE get “out of our cities.”

Monday, 23 February 2026

ByteDance will add AI safeguards after Disney threatened to sue

The Chinese tech giant said it plans to “strengthen safeguards” for its Seedance 2.0 video-making tool after several Hollywood studios complained that it was using their copyrighted characters without permission. Disney sent the company a cease-and-desist letter last week, accusing it of “hijacking” its characters. ByteDance said it “heard the concerns” but did not specify how it would protect companies’ intellectual property. Last week, Seedance 2.0 generated a hyperrealistic video depicting the likenesses of Tom Cruise and Brad Pitt fighting on a rooftop, leading one Hollywood screenwriter to say, “It’s likely over for us.”

Sunday, 22 February 2026

And the latest AI winner is...glass

 

fiberoptic cable at a Corning plant

The Washington Post/Getty Images

Like everyone in the elevator at the end of Willy Wonka, US-based Corning is riding glass to new heights. The 175-year-old company’s stock hit an all-time high on Friday and is up more than 130% over the past year because it’s a surprising AI winner.

Window to the future: Corning has long been an innovator, producing everything from Edison’s first light bulbs to Pyrex bakeware. Then, in 1970, the company’s researchers developed the first low-loss optical fiber. But over time, that glass fiber product started to look like it needed Windex.

In 2018, Corning focused on making thinner, tougher glass cables that performed especially well in data centers. When the AI boom hit, the company was perfectly positioned to help build out the infrastructure:

  • Late last month, Corning signed a $6 billion fiber-optic cable contract with Meta.
  • The glassmaker expects other AI “hyperscalers” to follow suit.

Glass bubble? Corning was on a similar trajectory from 1997 to 2000, but when the dot-com bubble popped, the company lost more than 90% of its value. The company says it’s more diversified now. In August, Corning signed a $2.5 billion deal to manufacture all of the cover glass for iPhones and Apple Watches.

Saturday, 21 February 2026

YouTube TV introduces cheaper bundles, including a $65/month sports package | TechCrunch


YouTube on Monday introduced lower-priced YouTube TV plans that will allow subscribers to better tailor their plans to their own interests in areas like sports, news, and entertainment. The company said that it will offer more than 10 different plans to choose from, all priced below the $82.99 per month main YouTube TV plan that has access to more than 100 networks. The new plans will start rolling out this week.

While that main plan will not go away, the new plans will allow customers to pick what matters most and what they could do without in return for cost savings.

Image Credits: YouTube

Among the new plans are a $64.99 per month Sports plan, a Sports + News plan for $71.99 per month, a less expensive Entertainment plan for $54.99 per month, and a $69.99 per month News + Entertainment + Family plan, which includes kids’ content.

The Sports plans include all major broadcasters, plus networks like FS1, NBC Sports Network, all of the ESPN networks, and ESPN Unlimited. This plan is $18 cheaper per month than the main plan.

YouTube TV’s news channels include CNBC, Fox News, CNN, MS NOW, and Bloomberg, along with other national news channels. Combined with Sports, the package is priced $11 lower per month than the main YouTube TV plan.

The entertainment-only plan is $28 cheaper per month than the main plan, and includes major broadcasters as well as FX, Hallmark, Comedy Central, Bravo, Paramount, Food Network, and HGTV. Families with small kids can add other channels like Disney Channel, Nickelodeon, National Geographic, Cartoon Network, and PBS Kids for a bit more.

The company is also offering discounts for new subscribers, which could lower the price of certain plans further for either the first few months or the first year. Subscribers will continue to have access to YouTube TV’s unlimited DVR, support for up to six family members on one account, multiview, and more.

Other add-ons like NFL Sunday Ticket + RedZone, HBO Max, and 4K Plus can also be purchased to customize plans further.

The company says all the new plans will roll out over the next several weeks.

Customized packages are not a new idea in streaming: à la carte options were a key part of early streaming pioneer Sling TV’s initial offering, for instance. This element of personalization was also one of the factors that was meant to make streaming a better alternative to traditional pay TV, where consumers often ended up paying for channels they didn’t want.

But as streamers added more content, networks, and, in particular, sports programming, the cost of streaming inched back up to compete with cable and linear television. Live TV streamers like YouTube TV may have offered convenience and some savings over still-pricier cable, but they weren’t exactly affordable anymore.

These new packages hit the market at a time when consumer confidence is at its lowest in more than 11 years, due to fears about the labor market and higher prices, which have made consumers more cautious about their spending.

Thursday, 19 February 2026

AI Is Now More Creative Than the Average Human

 

A massive study shows that AI can now beat the average human on certain creativity tests. Yet the most creative people remain well ahead, highlighting AI’s role as a creative assistant rather than a replacement. Credit: Shutterstock

Are generative artificial intelligence systems such as ChatGPT capable of real creativity? A new large-scale study led by Professor Karim Jerbi from the Department of Psychology at the Université de Montréal set out to answer that question. The research team also included Yoshua Bengio, a leading AI pioneer and professor at the Université de Montréal. Together, they conducted the most extensive comparison to date between human creativity and the creative abilities of large language models.

The findings, published in Scientific Reports, point to a major shift. Generative AI systems have now reached a level where they can outperform the average human on certain creativity measures. At the same time, the study makes it clear that the most creative people still exceed the performance of even the strongest AI models.

AI Reaches Average Human Creativity Levels

The researchers evaluated several major large language models, including ChatGPT, Claude, Gemini, and others, and compared their results with data from 100,000 human participants. The outcome marks a clear turning point. Some AI systems, including GPT-4, scored higher than the average human on tasks designed to measure divergent linguistic creativity.

“Our study shows that some AI systems based on large language models can now outperform average human creativity on well-defined tasks,” explains Professor Karim Jerbi. “This result may be surprising — even unsettling — but our study also highlights an equally important observation: even the best AI systems still fall short of the levels reached by the most creative humans.”

Further analysis by the study’s co-first authors, postdoctoral researcher Antoine Bellemare-Pépin (Université de Montréal) and PhD candidate François Lespinasse (Université Concordia), revealed an important pattern. While some AI models now outperform the average person, the highest levels of creativity remain uniquely human.

When the researchers looked more closely, they found that the most creative half of human participants achieved higher average scores than all AI systems tested. The difference was even more pronounced among the top 10 percent of the most creative individuals.

“We developed a rigorous framework that allows us to compare human and AI creativity using the same tools, based on data from more than 100,000 participants, in collaboration with Jay Olson from the University of Toronto,” says Professor Karim Jerbi, who is also an associate professor at Mila.

How Creativity Was Measured in Humans and AI

To make a fair comparison between people and machines, the research team used several methods. The primary tool was the Divergent Association Task (DAT), a psychological test designed to measure divergent creativity, or the ability to generate many original and varied ideas from a single prompt.

Created by study co-author Jay Olson, the DAT asks participants, whether human or AI, to generate ten words that are as different in meaning from one another as possible. A highly creative response might include words such as “galaxy, fork, freedom, algae, harmonica, quantum, nostalgia, velvet, hurricane, photosynthesis.”

Performance on this task in humans closely mirrors results on other well-established creativity tests used in idea generation, writing, and creative problem solving. Although the task is language-based, it does not simply test vocabulary. Instead, it taps into broader cognitive processes involved in creative thinking across many domains. Another advantage of the DAT is its speed and accessibility, as it takes only two to four minutes to complete and is available online to the general public.
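The DAT’s scoring idea can be sketched in a few lines: the score is essentially the average pairwise semantic distance among the submitted words, computed from word embeddings and scaled to the 0–100 range of published DAT scores. The tiny three-dimensional vectors below are invented purely for illustration (actual scoring uses high-dimensional pretrained embeddings such as GloVe), but the mechanics are the same:

```python
import math

# Toy sketch of DAT scoring: average pairwise cosine distance
# between the submitted words, scaled to 0-100.

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

def dat_score(words, embeddings):
    """Mean pairwise cosine distance between word vectors, scaled to 0-100."""
    vecs = [embeddings[w] for w in words]
    dists = [
        cosine_distance(vecs[i], vecs[j])
        for i in range(len(vecs))
        for j in range(i + 1, len(vecs))
    ]
    return 100 * sum(dists) / len(dists)

# Hypothetical 3-d embeddings; real ones have hundreds of dimensions.
toy_embeddings = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.1],     # close in meaning to "cat" -> small distance
    "galaxy": [0.0, 0.2, 0.9],  # far from both -> large distance
    "fork": [0.1, 0.9, 0.2],
}

similar = dat_score(["cat", "dog"], toy_embeddings)
diverse = dat_score(["cat", "galaxy", "fork"], toy_embeddings)
print(similar < diverse)  # semantically varied word lists score higher
```

Under this scheme, a list like “cat, dog” scores low because the words are semantically close, while “galaxy, fork, freedom, …” scores high, matching the paper’s description of divergent creativity.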

From Simple Word Tests to Creative Writing

Building on these results, the researchers examined whether AI performance on this basic word association task could translate into more complex creative activities. To test this, they directly compared AI systems and human participants on creative writing tasks.

These included writing haiku (a short three-line poetic form), producing movie plot summaries, and creating short stories. Once again, the pattern was clear. While AI sometimes outperformed average human participants, the most skilled human creators continued to demonstrate a clear advantage.

Can AI Creativity Be Adjusted?

The findings raised an important follow-up question. Can AI creativity be shaped or controlled? According to the study, it can. One key factor is the model’s temperature, a technical setting that influences how predictable or adventurous an AI’s responses are.

At lower temperature settings, AI systems tend to generate safer and more predictable outputs. At higher temperatures, the responses become more varied and less constrained, encouraging risk-taking and more original associations.
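The effect of the temperature knob can be sketched directly: an LLM’s raw next-token scores (logits) are divided by the temperature before being turned into probabilities, so low temperatures sharpen the distribution around the safest token and high temperatures flatten it toward rarer choices. The four-token vocabulary and logits below are invented for illustration; real models work over vocabularies of tens of thousands of tokens.

```python
import math

# Minimal sketch of temperature-scaled softmax sampling.

def softmax_with_temperature(logits, temperature):
    """Convert raw scores to probabilities; low T sharpens, high T flattens."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["the", "a", "galaxy", "harmonica"]
logits = [3.0, 2.0, 0.5, 0.1]  # "the" is the model's safest guess

cold = softmax_with_temperature(logits, 0.2)  # nearly deterministic
hot = softmax_with_temperature(logits, 2.0)   # more adventurous

# As temperature rises, probability mass shifts from the top token
# onto rarer, more surprising tokens.
print(round(cold[0], 3), round(hot[0], 3))
```

At T = 0.2 the model picks “the” almost every time; at T = 2.0 a meaningful share of the probability spills onto “galaxy” and “harmonica,” which is the mechanical sense in which higher temperatures encourage risk-taking.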

The researchers also found that the way prompts are written plays a major role. For example, instructions that encourage AI models to consider the origins and structure of words using etymology lead to more unexpected ideas and higher creativity scores. Together, these results show that AI creativity depends heavily on human input and guidance, making interaction between people and machines a central part of the creative process.

Will AI Replace Human Creators?

The study offers a balanced perspective on fears that artificial intelligence could replace creative professionals. While some AI systems can now rival human creativity on specific tasks, the research also highlights clear limitations and the continued importance of human creativity.

“Even though AI can now reach human-level creativity on certain tests, we need to move beyond this misleading sense of competition,” says Professor Karim Jerbi. “Generative AI has above all become an extremely powerful tool in the service of human creativity: it will not replace creators, but profoundly transform how they imagine, explore, and create — for those who choose to use it.”

Rather than predicting the end of creative careers, the findings encourage a new way of thinking about AI. The technology may serve as a creative assistant that expands possibilities for exploration and inspiration. The future of creativity may depend less on humans versus machines and more on new forms of collaboration, where AI supports and enhances human imagination.

“By directly confronting human and machine capabilities, studies like ours push us to rethink what we mean by creativity,” concludes Professor Karim Jerbi.

The article “Divergent creativity in humans and large language models” was published in Scientific Reports on January 21, 2026.

Reference: “Divergent creativity in humans and large language models” by Antoine Bellemare-Pepin, François Lespinasse, Philipp Thölke, Yann Harel, Kory Mathewson, Jay A. Olson, Yoshua Bengio and Karim Jerbi, 21 January 2026, Scientific Reports.
DOI: 10.1038/s41598-025-25157-3

The research involved collaboration among scientists from Université de Montréal, Université Concordia, University of Toronto Mississauga, Mila (Quebec AI Institute), and Google DeepMind.

The study was led by Professor Karim Jerbi, with Antoine Bellemare-Pépin (Université de Montréal) and François Lespinasse (Université Concordia) serving as co-first authors. The author team also included Yoshua Bengio, founder of Mila and LoiZéro, and one of the world’s leading pioneers of deep learning, the technology behind modern AI systems such as ChatGPT.


Income taxes are highest in these states

  Tax Day is right around the corner...