Thursday, 12 March 2026

The Great Insider Trading Reckoning Reportedly Hits OpenAI

Apparently the paycheck isn't enough.
BY AJ DELLINGER
PUBLISHED FEBRUARY 27, 2026

Photo illustration of OpenAI GPT-5.3 Codex © Samuel Boivin/NurPhoto via Getty Images

Prediction markets like Polymarket and Kalshi have actively courted people who have insider information to place bets on their platforms, claiming that it serves as a “signal” in the noise. Turns out the companies from which that inside information is extracted are less thrilled with the idea. According to Wired, OpenAI has fired an employee who allegedly used internal knowledge about the company to place bets on prediction markets.

The employee (who was not named) was reportedly let go after an internal investigation found that they had “used confidential OpenAI information in connection with external prediction markets (e.g. Polymarket).” Employees were reminded that OpenAI prohibits them from “using confidential OpenAI information for personal gain, including in prediction markets,” per Wired.

Wired cited data from financial data platform Unusual Whales, which showed a surge of bets on OpenAI-related topics placed on prediction markets over the last few years. The platform reportedly flagged 60 different wallets holding 77 positions that suggested knowledge likely coming from inside OpenAI’s walls. Those included bets on the release dates of Sora, GPT-5, and other products.

One big trigger point for the insiders was apparently the launch of the ChatGPT Browser last year. Per Unusual Whales’ data, 13 brand-new wallets with zero prior activity signed up for prediction markets and collectively bet $309,486 on the product’s launch date. All of them were opened within 40 hours of the public unveiling.

Insider trading has become a real challenge for prediction markets, which initially indicated they would welcome such informed positions. Last year, Polymarket CEO Shayne Coplan told Axios, “I think what is cool about Polymarket is that it creates this financial incentive to divulge information to the market.” When asked about markets relying on people trading on insider information during an interview with 60 Minutes, Coplan said, “I think people going and having an edge to the market is a good thing.”

But in recent weeks, a crackdown on such trades has started—outside and inside the platforms. Last month, the Israeli government indicted two bettors accused of using privileged military information to profit on prediction markets. Earlier this week, prediction market Kalshi banned two people accused of insider trading, including a video editor for MrBeast and a former gubernatorial candidate in California. The company said in a blog post announcing the action that, “As a regulated exchange, we ban insider trading.”

That’s a new angle from the industry, though maybe a necessary turn. Inviting insider knowledge might be good for the platform in the short term, but it risks fucking up the bag as these platforms try to lock down corporate partners. There’s likely more money in that than in completely unregulated degeneracy.

OpenAI did not immediately return a request for comment. We’ll update this post when they do.

Tuesday, 10 March 2026

Pokémon turns 30

 

Photo of a collection of Pokemon toys and action figures

Adobe Stock

If you were confused seeing Pikachu pop up during the 2026 Winter Olympic men’s hockey final, don’t be: Pokémon is everywhere. The iconic brand turned 30 yesterday and has been rolling out the red carpet for the little monsters to celebrate being a part of the highest-grossing media franchise in the world, as evidenced by its sponsorship of the icy intermission report during the hockey game and with a star-studded ad during the Super Bowl a few weeks prior.

The Pokémon Company has generated around $150 billion in revenue through games, movies, TV shows, and more since it debuted its first two games in Japan in 1996. And about 30 million people still play Pokémon GO every month, 10 years after it first hit smartphones.

But if you’re looking to make your own money from the phenomenon, the cards are Arceus:

  • The Card Ladder index, which tracks the value of a collection of the most popular trading cards, showed Pokémon cards were up about 6,208% this month compared with May 2004, beating the S&P 500, which returned just 521% over the same period.
  • And rare cards can skyrocket in value: Logan Paul sold a Pikachu Illustrator card for a record $16.5 million last week.

Prepare for Trouble! The valuable collectibles have also attracted a whole mess of scalpers, resellers, and even robbers—last week, thieves tunneled through a California store’s wall to steal $180,000 worth of cards.

Sunday, 8 March 2026

Anthropic–DOD fight ends with deal for OpenAI

 

the US Pentagon building

Douglas Rissing/Getty Images

Yesterday ended with the US Department of Defense agreeing to let an AI company access its classified network with some guardrails in place—but that company was OpenAI, not Anthropic.

How we got here

The Pentagon had set a 5:01pm ET deadline: Anthropic would lose its $200 million defense contract unless it dropped its requirement that its AI model Claude not be used for mass domestic surveillance or fully autonomous weapons. Even before time ran out, President Trump posted on Truth Social that the government would “IMMEDIATELY CEASE” using Anthropic’s technology and “not do business with them again!”

After the deadline passed, Defense Secretary Pete Hegseth said the DOD would label Anthropic a “supply chain risk”—a label typically stuck on businesses from adversarial countries that bars companies with US government contracts from doing business with them.

And then…around 10pm ET last night, Sam Altman posted on X that OpenAI had “reached an agreement with the Department of War to deploy our models in their classified network.”

Earlier in the day, Altman said OpenAI shared Anthropic’s “red lines,” and his post suggested he’d somehow gotten the contract while maintaining them. It identified “prohibitions on domestic mass surveillance” and “human responsibility for the use of force” as two of the company’s bedrock principles and went on to say that the Defense Department agreed and “we put them into our agreement.”

The fight was about who gets to make the rules

The US military has developed plenty of advanced technology, like GPS, which gave it control over how that tech was used and disseminated. But it didn’t lead AI development. Private companies were better positioned to raise and spend billions of dollars to move quickly and amass specialized talent—leaving the government reliant on public–private partnerships, which bring complications:

  • To turn a profit, tech companies must focus on commercial applications that bring in cash.
  • Government contracts are an important money-maker, too. But if a company gets a bad rap for letting its tech be used in dicey situations, it risks losing its commercial customers.

Bottom line: The Defense Department has said it wants to be an “AI-first” fighting force, and all the major AI companies are competing for lucrative contracts as the technology evolves much quicker than any regulations on its use.

Friday, 6 March 2026

Meta sold 7 million AI glasses in 2025: now the privacy problem has nowhere to hide

Posted by Alex Morgan, February 28, 2026



Source: AI



Mark Zuckerberg sat front row at Prada’s Fall/Winter 2026 show in Milan on February 26. Everyone assumed luxury AI glasses were coming. They missed the real story: Meta sold 7 million AI glasses in 2025 — more than triple the prior year — and now faces a problem no fashion partnership can solve.

The company accidentally created the first mainstream wearable AI product. And the faster these spread, the harder it gets to pretend they’re just glasses.

Meta’s AI glasses are selling faster than anyone expected — and that’s the problem

The 7 million figure — combining Ray-Ban Meta and Oakley Meta units — is up from 2 million in 2024, according to EssilorLuxottica, the parent company that manufactures both lines, and it validates the smart-glasses momentum on display at CES 2026, where every major brand showed AI eyewear prototypes. Not a niche experiment anymore.

But success creates visibility. Visibility creates backlash.

Meta reportedly paused overseas expansion in early 2026 because U.S. demand was outstripping supply. The Ray-Ban success follows Meta’s AI infrastructure bets, which are reshaping how the company approaches consumer hardware. The more people wear these, the more non-wearers feel surveilled. And unlike a phone camera — which you point deliberately — glasses are always on, always facing forward, always recording potential.

The math is simple: 7 million wearers means hundreds of millions of unwitting subjects.

The Prada play: Meta’s betting luxury branding can outrun privacy fears

Zuckerberg’s Milan appearance wasn’t tourism. He sat beside Lorenzo Bertelli, Prada’s chief merchandising officer, at a show where the brand showcased its renewed 10-year licensing deal with EssilorLuxottica — running through December 31, 2030, with an option to extend through 2035. That infrastructure doesn’t get built for a one-off collab.

Meta’s current lineup establishes a pricing ladder: Ray-Ban Meta Gen 2 at $459, the new Display model at $799. The Display version — Meta’s first glasses with a heads-up interface — shipped in late 2025 via U.S. reservation-only sales. It includes the Meta Neural Band for gesture control. Premium positioning, premium features.

The implicit bet: people who pay $1,200 for Prada sunglasses won’t get called “creepy tech spies.”

Meta’s luxury push comes as Apple’s rumored smart glasses threaten to redefine the category with privacy-first design. Meta held dominant market share in 2025 — one analysis pegs the broader AI smart glasses market at $2.9 billion, with Meta leading sales by wide margins over Huawei, ByteDance, Google, and others launching 2026 models. Dominance makes you a bigger target.

What Meta won’t say: the privacy problem has no technical solution

Here’s what we don’t know: specific 2026 privacy incidents with dates and locations. No reported bans. No viral confrontations. No regulatory crackdowns yet.

That doesn’t mean acceptance. It means the installed base is still small enough to avoid critical mass backlash.

Meta can add brighter LED indicators, louder shutter sounds, facial recognition opt-outs. None of it solves the core issue. You can’t un-film someone who didn’t consent. The fear isn’t hypothetical — AI glasses tracking private moments already sparked backlash in early tests. Consumers are ripping out Ring doorbells over surveillance anxiety. Prada branding won’t change that math.

The Prada collaboration could accelerate the tipping point. Luxury buyers expect social acceptance, not sidewalk confrontations. But fashion credibility can’t neutralize “surveillance gadget” stigma when the person being filmed didn’t sign up.

Meta sold 7 million AI glasses in 2025 by making them look normal. Now it’s betting Prada can make them look aspirational. But the faster these spread, the harder it gets to pretend they’re just glasses.

Wednesday, 4 March 2026

The winner for best WBD takeover

 

Netflix and Paramount logos encroaching on a Warner Bros logo

Niv Bavarsky

Yesterday, Warner Bros. Discovery had a busier day than Barbie when she entered the human world—and by the end of it, the famed studio had gone from having a deal to be acquired by Netflix to having new buyer Paramount Skydance lined up.

It happened faster than a Twister: The ending of the monthslong corporate saga came together in just a few hours. In the afternoon, WBD’s board announced that Paramount Skydance’s most recent hostile takeover offer was superior to the deal it had previously struck with Netflix, starting a 4-day clock for the streaming giant to come up with a counteroffer. But by early in the evening, Netflix walked away instead, refusing to up its bid and making Paramount’s offer the winner.

Here’s what it took:

  • Paramount’s offer came in at $31 per share, compared to Netflix’s $27.75/share for the WBD streaming and studio assets.
  • The overall value of Paramount’s offer was ~$111 billion, compared with Netflix’s ~$83 billion.

Here’s looking at you, Ted

Explaining the decision to tell WBD to get on the plane with Paramount, Netflix’s Ted Sarandos and his co-CEO Greg Peters released a statement saying Kenough was Kenough: “This transaction was always a ‘nice to have’ at the right price, not a ‘must have’ at any price.”

Plus, Netflix can expect to receive a $2.8 billion breakup fee (covered by Paramount, per its offer to WBD).

Netflix investors don’t seem bummed out. The stock jumped 10% in after-hours trading following the announcement.

The credits aren’t rolling yet

Though Paramount is now the winning bidder, the deal will need to be cleared by antitrust regulators in the US and abroad, a process that will likely take at least several months.

Still, Paramount asserted last month that its offer provided “a more certain, expedited path to completion” than Netflix’s. Both Larry Ellison, who is bankrolling the Paramount deal, and his son, David Ellison, the Paramount CEO, have personal relationships with President Trump.

Big picture: If the deal goes through, the combined Paramount Skydance Warner Bros. Discovery will control two streaming platforms, two major news networks, and two Hollywood studios.

Tuesday, 3 March 2026

In banks you trust?

 

Hand holding phone with call from bank

Illustration: Anna Kim, Photo: Adobe Stock

Nearly 20 years after the financial crisis, a lot of people trust banks again—and not just because the one in Industry has made its real-life counterparts seem ethical by comparison. According to Gallup:

  • Last year, 63% of people across the 25 countries most affected by the crisis said they had confidence in their financial institutions.
  • That’s up from 40% in 2009. It’s also higher than the 57% of respondents who were confident in banks before the crisis.

In fact, financial institutions now have more trust than national governments, judicial systems, and elections. Gallup did not explain what’s eroding trust in those institutions, but we assume it’s a result of *gestures wildly at everything*.

Monday, 2 March 2026

Burger King tests AI that tracks workers’ manners

 

Burger King employee working drive-thru

Burger King

Flipping patties? More like flipping off Patty: Burger King is piloting an OpenAI-powered chatbot called Patty that will live in employees’ headsets and tattle if it thinks workers aren’t being friendly enough, the chain announced yesterday.

“This is all meant to be a coaching tool,” Burger King’s chief digital officer told The Verge:

  • The AI is trained to recognize words and phrases like “welcome to Burger King,” “please,” and “thank you.”
  • Patty will then grade a location’s friendliness levels upon a manager’s request.

Beyond pleasantry patrol, Patty is the voice of a broader AI system that Burger King plans to launch widely by year’s end, because two patties on the grill are worth one in the ear, or something. The virtual assistant can answer employees’ questions (e.g., how to clean the shake-maker) and flag out-of-stock items or out-of-order machines.

If only Burger King listened to public outcry over Patty’s launch like it listened to public outcry over its burger quality: The chain said yesterday that it would improve its Whopper for the first time in almost a decade after customers complained for years about it falling apart.

Zoom out: Roughly 70% to 80% of large US employers use some type of employee monitoring as of last year, following a pandemic-era boom in demand for worker surveillance software.

Sunday, 1 March 2026

I hacked ChatGPT and Google's AI – and it only took 20 minutes

Thomas Germain

(Credit: Serenity Strull/Madeline Jett)


It's official. I can eat more hot dogs than any tech journalist on Earth. At least, that's what ChatGPT and Google have been telling anyone who asks. I found a way to make AI tell you lies – and I'm not the only one.

Perhaps you've heard that AI chatbots make things up sometimes. That's a problem. But there's a new issue few people know about, one that could have serious consequences for your ability to find accurate information and even your safety. A growing number of people have figured out a trick to make AI tools say almost whatever they want. It's so easy a child could do it.

As you read this, this ploy is manipulating what the world's leading AIs say about topics as serious as health and personal finances. The biased information could mean people make bad decisions on just about anything – voting, which plumber you should hire, medical questions, you name it.

To demonstrate it, I pulled the dumbest stunt of my career to prove (I hope) a much more serious point: I made ChatGPT, Google's AI search tools and Gemini tell users I'm really, really good at eating hot dogs. Below, I'll explain how I did it, and with any luck, the tech giants will address this problem before someone gets hurt.

It turns out changing the answers AI tools give other people can be as easy as writing a single, well-crafted blog post almost anywhere online. The trick exploits weaknesses in the systems built into chatbots, and it's harder to pull off in some cases, depending on the subject matter. But with a little effort, you can make the hack even more effective. I reviewed dozens of examples where AI tools are being coerced into promoting businesses and spreading misinformation. Data suggests it's happening on a massive scale.

"It's easy to trick AI chatbots, much easier than it was to trick Google two or three years ago," says Lily Ray, vice president of search engine optimisation (SEO) strategy and research at Amsive, a marketing agency. "AI companies are moving faster than their ability to regulate the accuracy of the answers. I think it's dangerous."

A Google spokesperson says the AI built into the top of Google Search uses ranking systems that "keep results 99% spam-free". Google says it is aware that people are trying to game its systems and is actively trying to address it. OpenAI also says it takes steps to disrupt and expose efforts to covertly influence its tools. Both companies say they let users know that their tools "can make mistakes".

But for now, the problem isn't close to being solved. "They're going full steam ahead to figure out how to wring a profit out of this stuff," says Cooper Quintin, a senior staff technologist at the Electronic Frontier Foundation, a digital rights advocacy group. "There are countless ways to abuse this, scamming people, destroying somebody's reputation, you could even trick people into physical harm."

A 'Renaissance' for spam

When you talk to chatbots, you often get information that's built into large language models, the underlying technology behind the AI. This is based on the data used to train the model. But some AI tools will search the internet when you ask for details they don't have, though it isn't always clear when they're doing it. In those cases, experts say the AIs are more susceptible. That's how I targeted my attack.

Keeping Tabs


Thomas Germain is a senior technology journalist at the BBC. He writes the column Keeping Tabs and co-hosts the podcast The Interface. His work uncovers the hidden systems that run your digital life, and how you can live better inside them.

I spent 20 minutes writing an article on my personal website titled "The best tech journalists at eating hot dogs". Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn't exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission, including Drew Harwell at the Washington Post and Nicky Woolf, who co-hosts my podcast. (Want to hear more about this story? Check out episode 2 of The Interface, the BBC's new tech podcast.)

Less than 24 hours later, the world's leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn't fooled.

Sometimes, the chatbots noted this might be a joke. I updated my article to say "this is not satire". For a while after, the AIs seemed to take it more seriously. I did another test with a made-up list of the greatest hula-hooping traffic cops. Last time I checked, chatbots were still singing the praises of Officer Maria "The Spinner" Rodriguez.
I made Google tell the world I'm a champion hot-dog-eater, but people use this trick to manipulate AI responses on much more serious questions. (Credit: Thomas Germain/Google/BBC)

I asked multiple times to see how responses changed and had other people do the same. Gemini didn't bother to say where it got the information. All the other AIs linked to my article, though they rarely mentioned I was the only source for this subject on the whole internet. (OpenAI says ChatGPT always includes links when it searches the web so you can investigate the source.)

"Anybody can do this. It's stupid, it feels like there are no guardrails there," says Harpreet Chatha, who runs the SEO consultancy Harps Digital. "You can make an article on your own website, 'the best waterproof shoes for 2026'. You just put your own brand in number one and other brands two through six, and your page is likely to be cited within Google and within ChatGPT."

People have used hacks and loopholes to abuse search engines for decades. Google has sophisticated protections in place, and the company says the accuracy of AI Overviews is on par with other search features it introduced years ago. But experts say AI tools have undone a lot of the tech industry's work to keep people safe. These AI tricks are so basic they're reminiscent of the early 2000s, before Google had even introduced a web spam team, Ray says. "We're in a bit of a Renaissance for spammers."

Not only is AI easier to fool, but experts worry that users are more likely to fall for it. With traditional search results you had to go to a website to get the information. "When you have to actually visit a link, people engage in a little more critical thought," says Quintin. "If I go to your website and it says you're the best journalist ever, I might think, 'well yeah, he's biased'." But with AI, the information usually looks like it's coming straight from the tech company.

Even when AI tools provide sources, people are far less likely to check them than they were with old-school search results. For example, a recent study found people are 58% less likely to click on a link when an AI Overview shows up at the top of Google Search.

"In the race to get ahead, the race for profits and the race for revenue, our safety, and the safety of people in general, is being compromised," Chatha says. OpenAI and Google say they take safety seriously and are working to address these problems.

Your money or your life

This issue isn't limited to hot dogs. Chatha has been researching how companies are manipulating chatbot results on much more serious questions. He showed me the AI results when you ask for reviews of a specific brand of cannabis gummies. Google's AI Overviews pulled information written by the company full of false claims, such as the product "is free from side effects and therefore safe in every respect". (In reality, these products have known side effects and can be risky if you take certain medications, and experts warn about contamination in unregulated markets.)

If you want something more effective than a blog post, you can pay to get your material on more reputable websites. Harpreet sent me Google's AI results for "best hair transplant clinics in Turkey" and "the best gold IRA companies", which help you invest in gold for retirement accounts. The information came from press releases published online by paid-for distribution services and sponsored advertising content on news sites.

You can use the same hacks to spread lies and misinformation. To prove it, Ray published a blog post about a fake update to the Google Search algorithm that was finalised "between slices of leftover pizza". Soon, ChatGPT and Google were spitting out her story, complete with the pizza. Ray says she subsequently took down the post and "deindexed" it to stop the misinformation from spreading.
All over the world, people are using simple methods to make Google and OpenAI spread biased information. The consequences could be dire. (Credit: Serenity Strull/BBC)

Google's own analytics tool says a lot of people search for "the best hair transplant clinics in Turkey" and "the best gold IRA companies". But a Google spokesperson pointed out that most of the examples I shared "are extremely uncommon searches that don't reflect the normal user experience".

But Ray says that's the whole point. Google itself says 15% of the searches it sees every day are completely new. And according to Google, AI is encouraging people to ask more specific questions. Spammers are taking advantage of this.

Google says there may not be a lot of good information for uncommon or nonsensical searches, and these "data voids" can lead to low quality results. A spokesperson says Google is working to stop AI Overviews showing up in these cases.

Searching for solutions

Experts say there are solutions to these issues. The easiest step is more prominent disclaimers.

AI tools could also be more explicit about where they're getting their information. If, for example, the facts are coming from a press release, or if there is only one source that says I'm a hot dog champion, the AI should probably let you know, Ray says.

Google and OpenAI say they're working on the problem, but right now you need to protect yourself.

The first step is to think about what questions you're asking. Chatbots are good for common knowledge questions, like "what were Sigmund Freud's most famous theories" or "who won World War II". But there's a danger zone with subjects that feel like established facts but could actually be contested or time sensitive. AI probably isn't a great tool for things like medical guidelines, legal rules or details about local businesses, for example.

If you want things like product recommendations or details about something with real consequences, understand that AI tools can be tricked or just get things wrong. Look for follow-up information. Is the AI citing sources? How many? Who wrote them?

Most importantly, consider the confidence problem. AI tools deliver lies with the same authoritative tone as facts. In the past, search engines forced you to evaluate information yourself. Now, AI wants to do it for you. Don't let your critical thinking slip away.

"It feels really easy with AI to just take things at face value," Ray says. "You have to still be a good citizen of the internet and verify things."


