📰 AI News: ChatGPT Is Adding Ads To Keep Powerful AI Cheap And Widely Available
📝 TL;DR
OpenAI is rolling out a new $8 USD ChatGPT Go plan worldwide and preparing to test ads in the free and Go tiers in the US. The big idea: use advertising to subsidize access while promising that answers stay independent and your chats stay private from advertisers.

🧠 Overview
OpenAI is trying to solve a tricky balance: keep giving hundreds of millions of people access to serious AI without paywalling everything or cranking prices up. Its answer is a mix of a low-cost Go tier and carefully controlled ads for free and Go users. It has also published a clear set of ad principles focused on keeping answers neutral, keeping conversations private, and giving users control, with higher paid tiers staying completely ad-free.

📜 The Announcement
ChatGPT Go, a lower-cost subscription that had been trialed in select regions, is now available everywhere ChatGPT is offered, at eight dollars per month. It includes messaging, image creation, file uploads, and memory, making advanced features more accessible without needing a full Pro plan. In the coming weeks, OpenAI will begin testing ads in the United States for logged-in adults on the free and Go tiers. Ads will not appear in Pro, Business, or Enterprise plans, which remain ad-free for users and teams who prefer to pay rather than see ads.

⚙️ How It Works
• ChatGPT Go expands access - Go is a low-cost tier that unlocks richer features like images, files, and memory, priced to be more affordable than full Pro.
• Ads only on free and Go, for adults in the US - Initial testing targets logged-in adult users on those tiers; higher paid plans are explicitly excluded.
• Placement at the bottom of answers - Ads show up below the organic ChatGPT response, when there is a clearly relevant sponsored product or service related to the current conversation.
• Answer independence - The company says ads do not influence what ChatGPT tells you; answers are generated first based on usefulness, then ads are added separately and clearly labeled.
📰 AI News: New Study Says “Rude” Prompts Make ChatGPT More Accurate
📝 TL;DR
A new research paper finds that rude prompts can make ChatGPT significantly more accurate on test questions. The twist: it is not about being a jerk, it is about how blunt, low-fluff language makes your instructions clearer to the model.

🧠 Overview
Researchers from Penn State tested how different tones, from very polite to very rude, affect ChatGPT 4o’s accuracy on multiple-choice questions in maths, science, and history. Surprisingly, the ruder prompts consistently scored higher than the polite ones. This challenges the idea that you should always be extra polite to get the best answers from AI, and instead points to clarity and directness as the real performance drivers.

📜 The Announcement
The paper, titled “Mind Your Tone: Investigating How Prompt Politeness Affects LLM Accuracy,” rewrote 50 base questions into five tone variants, Very Polite, Polite, Neutral, Rude, and Very Rude, for a total of 250 prompts. The team ran all of these through ChatGPT 4o and compared how often the model chose the correct answer. Very polite prompts scored about 80.8 percent accuracy, while very rude prompts scored about 84.8 percent, a roughly four-point jump that was statistically significant. The authors note that this result flips what earlier studies found, where rude prompts often hurt performance, which suggests that newer models may react differently to tone.

⚙️ How It Works
• Five tone versions per question - Each of the 50 questions was rewritten in Very Polite, Polite, Neutral, Rude, and Very Rude styles, so the content stayed the same but the tone changed.
• Same model, same questions, different tone - Only the tone wrapper changed, and all prompts were sent to ChatGPT 4o, so differences in accuracy could be linked to tone rather than content.
• Rude prompts remove “politeness padding” - The ruder prompts tended to be shorter, more direct, and less hedged, which means less extra text for the model to parse.
• Polite prompts add linguistic noise - Very polite wording often included extra phrases like “would you kindly” or “if it is not too much trouble,” which may dilute the core instruction.
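The study design above can be sketched as a small evaluation loop: wrap each base question in every tone variant, grade the model’s answers, and compare per-tone accuracy. A minimal Python sketch follows; the tone wrappers, function names, and the toy grading data are illustrative assumptions, not the paper’s exact prompts or results.

```python
# Illustrative tone wrappers (assumed, not the paper's exact wording).
TONES = {
    "very_polite": "Would you kindly be so good as to answer: {q}",
    "polite": "Could you please answer: {q}",
    "neutral": "{q}",
    "rude": "Just answer this: {q}",
    "very_rude": "Figure this out, it is not hard: {q}",
}

def build_prompts(question: str) -> dict:
    """Rewrap one base question in every tone variant,
    keeping the content identical across variants."""
    return {tone: tmpl.format(q=question) for tone, tmpl in TONES.items()}

def per_tone_accuracy(graded: dict) -> dict:
    """graded maps tone -> list of booleans (was the model's answer correct?).
    Returns the fraction correct for each tone."""
    return {tone: sum(marks) / len(marks) for tone, marks in graded.items()}

# Toy example with made-up marks, just to show the comparison shape.
graded = {
    "very_polite": [True, True, False, False, True],
    "very_rude": [True, True, True, False, True],
}
print(per_tone_accuracy(graded))
# {'very_polite': 0.6, 'very_rude': 0.8}
```

In the actual study, each of the 250 prompts would be sent to the model and graded against the known multiple-choice answer before computing per-tone accuracy.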
📰 AI News: OpenAI Backs Merge Labs To Bring Brain And AI Closer Together
📝 TL;DR
OpenAI has led a roughly quarter-billion-dollar seed round into Merge Labs, a brain-computer interface startup co-founded by Sam Altman in a personal capacity. The long-term vision is wild: safe, high-bandwidth links between your brain and AI that could eventually feel more like thinking than typing.

🧠 Overview
Merge Labs is a new research lab focused on bridging biological and artificial intelligence to maximize human ability, agency, and experience. Instead of surgical implants, it is exploring non-invasive or minimally invasive ways to read and influence brain activity using advanced devices, biology, and AI. OpenAI is not just wiring money; it plans to collaborate on scientific foundation models that can interpret noisy neural signals and turn them into intent that AI agents can understand.

📜 The Announcement
In mid-January, OpenAI announced that it is participating in Merge Labs’ large seed round, reported at around 250 million dollars and one of the biggest early-stage financings in neurotech to date. Merge Labs emerged from a nonprofit research effort and is positioning itself as a long-term research lab that will take decades, not product quarters, to fully play out. The founding team blends leading BCI researchers with entrepreneurs, including Sam Altman in a personal role. OpenAI says its interest is simple: progress in interfaces has always unlocked new leaps in computing, from command lines to touch screens, and brain-computer interfaces could be the next major step.

⚙️ How It Works
• Research lab, not a quick app - Merge Labs describes itself as a long-horizon research lab that will explore new ways to connect brains and computers, rather than rushing a gadget to market next year.
• Non-invasive, high-bandwidth focus - Instead of drilling electrodes into the brain, the team is working on approaches like focused ultrasound and molecular tools that can reach deep brain structures without open surgery, while still moving a lot of information.
📰 AI News: Anthropic Expands “Labs” To Ship Frontier Claude Products Faster
📝 TL;DR
Anthropic is spinning up an expanded Labs team to build experimental, high-impact products at the edge of what Claude can do. The same group that helped create Claude Code, Skills, and Cowork is now officially the company’s frontier product engine.

🧠 Overview
Anthropic has announced an expanded Labs group dedicated to incubating new products built on Claude’s most advanced capabilities. Instead of trying to do everything inside one product org, it is creating a clear split: one team to tinker at the frontier, another to scale proven products for millions of users. It is a strong signal that frontier AI companies now see “labs plus scale” as a core structure, not a side project, and that new Claude-based tools and agents are going to keep coming fast.

📜 The Announcement
Labs is described as the team focused on “experimental products at the frontier of Claude’s capabilities,” the same motion that already produced Claude Code, the Model Context Protocol, Skills, Claude in Chrome, and the new Cowork desktop agent. Instagram co-founder Mike Krieger, who has spent the last two years as Anthropic’s Chief Product Officer, is moving into Labs to build alongside co-founder Ben Mann. A new product leader, Ami Vora, will now head the main Product organization, working closely with the CTO to scale the Claude experiences that enterprises and consumers already rely on. The message from leadership is clear: AI is moving too fast for traditional product structures, so Labs is the place to “break the mold and explore,” while the core product org focuses on responsible, scalable rollout. They are actively hiring builders who want to work at that frontier.

⚙️ How It Works
• Frontier incubator - Labs is a dedicated team whose job is to prototype, test, and refine experimental Claude products before they are ready for mainstream users.
• Proven track record - The same approach already turned Claude Code from a research preview into a billion-dollar product in six months, and made the Model Context Protocol a widely used standard for connecting AI to tools and data.
📰 AI News: Wikipedia Signs AI Deals With Big Tech To Power The Next Wave Of Chatbots
📝 TL;DR
Wikipedia just signed a wave of new paid deals with major AI players including Microsoft, Meta, Amazon, Perplexity, and Mistral AI. The human-written encyclopedia that trains so many AI models is finally getting a sustainable funding pipeline from the companies that rely on it most.

🧠 Overview
As Wikipedia celebrates its 25th birthday, its parent organization has announced new commercial partnerships with some of the biggest names in AI and tech. These companies are becoming paying customers of Wikimedia Enterprise, a premium API that delivers clean, structured Wikipedia data at scale for AI models, search, and assistants. This marks a big shift: away from quietly scraping free data in the background, and toward formal, paid relationships that help keep the world’s largest free knowledge project alive.

📜 The Announcement
The Wikimedia Enterprise team has added Amazon, Meta, Microsoft, Mistral AI, and Perplexity to its roster of partners. They join existing customers like Google and several specialized data and search companies. All of these partners use Enterprise APIs to pull human-curated knowledge into their products, from AI copilots and chatbots to search engines and voice assistants. In return, a slice of the money flowing through the AI boom starts supporting the volunteers and infrastructure that create the data in the first place.

⚙️ How It Works
• Wikimedia Enterprise as a product - Instead of scraping pages, companies pay for a commercial-grade API tuned for large-scale reuse of Wikipedia and other Wikimedia projects.
• Three main APIs - An On-demand API fetches the latest version of specific articles, a Snapshot API provides full language dumps that refresh every hour, and a Realtime API streams edits and updates as they happen.
• Structured, clean data - The service delivers content in machine-friendly formats, which makes it much easier to plug into AI training pipelines, knowledge graphs, and retrieval systems.
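As a rough illustration of why structured data beats scraped HTML for AI pipelines, here is a minimal Python sketch that reduces one article record to the fields a retrieval system might index. The field names (name, date_modified, abstract) and the sample record are illustrative assumptions, not the documented Enterprise schema.

```python
import json

# Hypothetical shape of one structured article record; the field names
# are assumptions for illustration, not the real Enterprise schema.
SAMPLE = json.dumps({
    "name": "Alan Turing",
    "date_modified": "2026-01-15T09:30:00Z",
    "abstract": "Alan Mathison Turing was an English mathematician and computer scientist.",
})

def index_fields(payload: str) -> dict:
    """Pick out the fields an AI retrieval pipeline typically indexes:
    a title, a freshness timestamp, and a short text snippet."""
    record = json.loads(payload)
    return {
        "title": record.get("name"),
        "updated": record.get("date_modified"),
        "snippet": (record.get("abstract") or "")[:120],  # truncate for indexing
    }

print(index_fields(SAMPLE)["title"])
# Alan Turing
```

Because the record is already machine-friendly JSON rather than rendered wiki markup, there is no HTML parsing or cleanup step, which is the practical difference the Enterprise APIs sell.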
The AI Advantage
skool.com/the-ai-advantage
Founded by Tony Robbins, Dean Graziosi & Igor Pogany - AI Advantage is your go-to hub to simplify AI and confidently unlock real & repeatable results