Memberships

Ignite - Your LinkedIn Growth

76 members • $197/month

AI Automation Society

239.8k members • Free

HDS & AI ACADEMY

193 members • $1/month

The LinkedIn Boardroom

564 members • Free

The AI Advantage

70.6k members • Free

64 contributions to The AI Advantage
3 things I do every weekend to set up my week
I’ve learned this the hard way. If you wait until Monday to get focused, you’re already behind. Here’s how I set up my week before it starts:

1. I choose ONE win that matters. Not a to-do list. Not busy work. One outcome that actually moves my life or business forward. That goes on the calendar first.

2. I remove friction ahead of time. I look at my week and ask, “What’s going to trip me up?” Too many meetings, distractions, low-energy days. I fix it now so I’m not relying on willpower later.

3. I reset my environment. Desk clear. Calendar clean. Priorities visible. When Monday hits, I don’t want to think... I want to execute.

This isn’t about discipline. It’s about design. Winning weeks are built before they begin.

What about you? What’s the ONE thing you do to set yourself up to win the week ahead? Drop it below 👇
0 likes • 12h
My weekend move is a 15-minute weekly brief: one outcome, the 3 actions that create it, what could derail it, and the calendar blocks to protect it. Then I load those actions into Todoist as a short “This Week” list, pick the Monday top 3, and clear the noise so Monday is execution, not decisions.
📰 AI News: New Study Says “Rude” Prompts Make ChatGPT More Accurate
📝 TL;DR
A new research paper finds that rude prompts can make ChatGPT significantly more accurate on test questions. The twist: it is not about being a jerk, it is about how blunt, low-fluff language makes your instructions clearer to the model.

🧠 Overview
Researchers from Penn State tested how different tones, from very polite to very rude, affect ChatGPT 4o’s accuracy on multiple-choice questions in maths, science, and history. Surprisingly, the ruder prompts consistently scored higher than the polite ones. This challenges the idea that you should always be extra polite to get the best answers from AI and instead points to clarity and directness as the real performance drivers.

📜 The Announcement
The paper, titled “Mind Your Tone: Investigating How Prompt Politeness Affects LLM Accuracy,” rewrote 50 base questions into five tone variants (Very Polite, Polite, Neutral, Rude, and Very Rude) for a total of 250 prompts. The team ran all of these through ChatGPT 4o and compared how often the model chose the correct answer. Very polite prompts scored about 80.8 percent accuracy, while very rude prompts scored about 84.8 percent, a roughly four-point jump that was statistically significant. The authors note that this result flips what earlier studies found, where rude prompts often hurt performance, which suggests that newer models may react differently to tone.

⚙️ How It Works
• Five tone versions per question - Each of the 50 questions was rewritten in Very Polite, Polite, Neutral, Rude, and Very Rude styles so the content stayed the same but the tone changed.
• Same model, same questions, different tone - Only the tone wrapper changed, and all prompts were sent to ChatGPT 4o, so differences in accuracy could be linked to tone rather than content.
• Rude prompts remove “politeness padding” - The ruder prompts tended to be shorter, more direct, and less hedged, which means less extra text for the model to parse.
• Polite prompts add linguistic noise - Very polite wording often included extra phrases like “would you kindly” or “if it is not too much trouble,” which may dilute the core instruction.
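The study design above can be sketched in a few lines: expand each base question into one prompt per tone, then compare accuracy per tone. This is a minimal illustration only — the wrapper phrasings are my own placeholders, not the paper’s actual prompts, and the scoring helper assumes you have already recorded per-prompt correctness from the model.

```python
# Illustrative tone wrappers (NOT the paper's actual prompt text).
TONE_WRAPPERS = {
    "very_polite": "If it is not too much trouble, would you kindly answer: {q}",
    "polite": "Could you please answer the following question: {q}",
    "neutral": "Answer the following question: {q}",
    "rude": "Just answer this: {q}",
    "very_rude": "Answer this. No filler, no excuses: {q}",
}

def build_prompts(questions):
    """Expand each base question into one prompt per tone (50 -> 250)."""
    return [
        {"tone": tone, "prompt": template.format(q=q)}
        for q in questions
        for tone, template in TONE_WRAPPERS.items()
    ]

def accuracy_by_tone(results):
    """results: list of {'tone': str, 'correct': bool} -> {tone: accuracy}."""
    totals, hits = {}, {}
    for r in results:
        totals[r["tone"]] = totals.get(r["tone"], 0) + 1
        hits[r["tone"]] = hits.get(r["tone"], 0) + r["correct"]
    return {t: hits[t] / totals[t] for t in totals}
```

With 50 base questions this yields the 250 prompts the paper describes; the per-tone accuracy comparison is then a simple grouped average over the model’s graded answers.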
0 likes • 13h
This tracks with what I’ve seen: the gain is not rudeness, it’s clarity. If you want the benefit without the bad habit, write prompts in CRTFT-C:
- Context: what’s happening and why it matters.
- Role: who the model is acting as (CEO, copywriter, graphic designer).
- Task: the specific deliverable.
- Format: exact structure and length.
- Tone: what you expect in the output (professional, blunt, clear, polite).
- Constraints: sources, dos and don’ts, and edge cases.
That keeps outputs accurate and consistent, and keeps your mindset and communication clean.
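A framework like CRTFT-C is easy to make repeatable with a small template helper. This is a minimal sketch under my own assumptions — the function name and the example field contents are placeholders, not part of the framework itself.

```python
# Minimal CRTFT-C prompt builder; each argument maps to one section
# of the framework, and the output is the assembled prompt text.
def crtft_c(context, role, task, fmt, tone, constraints):
    sections = [
        ("Context", context),
        ("Role", role),
        ("Task", task),
        ("Format", fmt),
        ("Tone", tone),
        ("Constraints", constraints),
    ]
    return "\n".join(f"{label}: {value}" for label, value in sections)

# Hypothetical usage with placeholder content:
prompt = crtft_c(
    context="Quarterly report is due; leadership wants highlights only.",
    role="You are a financial copywriter.",
    task="Summarize the attached figures into an executive brief.",
    fmt="Three bullet points, under 100 words total.",
    tone="Blunt, clear, professional.",
    constraints="Use only the provided figures; flag missing data.",
)
```

The point is the structure, not the code: every prompt gets all six sections in the same order, so the directness comes from completeness rather than rudeness.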
Why Most Ads Fail Before the Targeting Even Matters
Most people don’t struggle with ads because they don’t understand targeting. They struggle because they can’t consistently produce ad creatives that stop the scroll.

The message might be right. The offer might be solid. The audience might be perfectly defined. But if the visual doesn’t catch attention in the first second, none of that matters. And that’s where most ads fail.

---------- THE REAL PAIN ----------

Creating ad creatives is deceptively hard. Every ad needs to do multiple jobs at once. It has to interrupt attention, communicate value instantly, look credible, and feel intentional. That’s a tall order for a single image or graphic that often gets judged in under a second.

For most people, this turns ad creation into a slow, frustrating process. You overthink layouts, second-guess visuals, tweak endlessly, or default to templates that feel generic. The result is usually “good enough,” but rarely compelling. And when ads don’t perform, it’s hard to know whether the problem is the copy, the targeting, or the creative itself.

---------- WHY AD CREATIVES ARE THE BOTTLENECK ----------

In practice, ad performance is often capped by visuals long before it’s capped by strategy. Platforms reward engagement. Engagement starts with attention. And attention is almost entirely visual at the first touchpoint. If the creative doesn’t earn that moment, the algorithm never gives the ad a chance.

This is why great offers with weak visuals fail, while average offers with strong creatives sometimes win. The creative is the gateway. Everything else sits behind it. When ad creatives are hard to produce, testing slows down. Fewer variations get launched. Learning cycles stretch. Performance stagnates.

---------- THE HIDDEN COST OF WEAK VISUALS ----------

Weak visuals don’t just lower click-through rates. They undermine trust. People associate visual quality with legitimacy. Ads that look rushed, inconsistent, or generic trigger skepticism, even if the product is good. Subconsciously, viewers ask, “If they didn’t care about this, what else didn’t they care about?”
0 likes • 13h
This nails it. Creative sets the ceiling because it controls the first second: attention and trust. The practical move is building a repeatable creative testing system (hooks, formats, and variants) so you can iterate weekly without burning out. What are your top 3 “stop the scroll” patterns right now?
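One way to make that testing system repeatable is a simple variant matrix: cross your hook angles, formats, and copy variants so every combination gets queued instead of improvised. A hypothetical sketch — the category names here are examples, not a recommended taxonomy:

```python
from itertools import product

# Example creative dimensions; swap in your own hooks/formats/variants.
hooks = ["question", "bold claim", "before/after"]
formats = ["static image", "carousel", "short video"]
variants = ["A", "B"]

# Every combination becomes one creative to produce and test this week.
test_queue = [
    {"hook": h, "format": f, "variant": v}
    for h, f, v in product(hooks, formats, variants)
]
# 3 hooks x 3 formats x 2 variants = 18 creatives in the queue
```

The design choice is that the matrix, not mood, decides what gets made, which keeps learning cycles short even when individual ads flop.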
📰 AI News: Notion Quietly Tests Custom MCPs, Workers, And A Computer Use Agent
📝 TL;DR
Notion is quietly testing a big upgrade to its AI Agents platform, including custom MCP tools, background Workers, and a Computer Use-style agent that can control other apps. In plain terms, Notion is trying to turn your workspace into an automation hub, not just a note app.

🧠 Overview
New leaks from Notion’s internal builds show the company expanding its agent platform well beyond simple chat inside a page. The experiments include support for custom MCPs, worker-style automations, new external connectors, and an agent that can operate your computer or browser for you. If these features ship, Notion shifts from “AI that writes text” to “AI that runs workflows,” which is a big deal for solo operators and teams already living in Notion every day.

📜 The Announcement
TestingCatalog spotted an unreleased Notion build labeled “Notion testing custom MCPs, Workers, and Computer Use agents,” along with copy that spells out the strategy: Notion is expanding its custom agent platform with new connectors, custom MCPs, and new AI tools, positioning itself as an automation hub. None of this is officially announced yet, so it is still experimental and may change, but it lines up with Notion’s recent push into AI-first workspaces and custom agents that can work across your docs, tasks, and databases.

⚙️ How It Works
• Custom MCPs - Notion appears to be adding support for custom Model Context Protocol tools so teams can expose internal APIs, databases, and services directly to their Notion agents.
• Workers for background jobs - A new Workers concept suggests longer-running or scheduled automations that can be triggered from Notion pages, databases, or events without you clicking a button each time.
• Computer Use agent - The Computer Use-style agent is designed to control apps that do not have clean APIs, for example by driving a browser or desktop UI so agents can complete tasks end to end.
0 likes • 13h
This is the real shift: Notion moving from notes to an automation control plane. If Workers and computer use ship, the win will be governance: permissions, logs, and clean databases.
📰 AI News: Google’s Stitch MCP Server Gives Coding Agents “X-Ray Vision” For Your UI
📝 TL;DR
Google’s new Stitch MCP Server plugs your design tool straight into your IDE so AI agents can see real screens, not hallucinate them. Your coding agent can now generate new UI, fetch production-ready code from existing designs, and keep everything visually consistent without leaving your editor.

🧠 Overview
Stitch started as Google’s AI design tool that turns prompts into responsive UIs and front-end code. Now it is stepping out of the browser and into your dev workflow through a fully managed MCP (Model Context Protocol) server. This bridge means coding agents inside tools like Gemini CLI or MCP-aware IDEs can pull live designs from Stitch, understand their structure, and generate new screens that actually match your app, not some generic template.

📜 The Announcement
Google quietly shipped a Stitch MCP Server and companion extensions that let AI agents talk directly to your Stitch projects. Early examples show developers chatting with an agent inside their IDE that can list Stitch projects, open specific screens, pull HTML and assets, and spin up new designs on demand. The headline from Google’s own promo is clear: generate new screens without leaving your IDE, fetch code from any design, and inject context so your agent has full visual awareness of your app’s UI. It turns Stitch from a separate design playground into a first-class part of your coding environment.

⚙️ How It Works
• Stitch MCP Server as the bridge - The server exposes your Stitch projects, screens, and assets over the Model Context Protocol, so any MCP-enabled agent can talk to it securely.
• Direct access from your IDE - Using clients like Gemini CLI or IDE integrations, you can ask your agent things like “list my Stitch projects” or “grab the HTML for the checkout screen” and it pulls the real artifacts, not guesses.
• Generate screens from inside chat - You can stay in your editor, describe a new page in plain language, and have the agent call Stitch to generate a matching design and front-end code.
0 likes • 13h
Interesting update. If agents can pull real UI context instead of guessing, that should reduce rework and keep screens consistent. Curious how easy setup is for the average team, and whether it plays nicely with existing design systems and workflows.
Joseph Terrell
Level 5
17 points to level up
@joseph-terrell-6862
Ops Manager at Dynamic Wealth Group. IT expert w/ 25+ yrs enhancing efficiency, productivity, & ops. LinkedIn: https://www.linkedin.com/in/joeterrell1/

Active 9h ago
Joined Nov 6, 2025
Northern Illinois