Meet Jules’ sharpest critic and most valuable ally
AI coding agents like Jules can build, refactor, and scaffold code while you focus on other things. That convenience and ease of use are thrilling, but they can sometimes mean subtle bugs, missed edge cases, and untested assumptions.
That’s why we’re introducing a new capability in Jules that reviews and critiques code before you see it.
Here’s how it works:
1: You prompt Jules to start a task.
2: The critic feature reviews the candidate patch and its description in a single pass, making an overall judgement.
3: Jules addresses the feedback before finishing; the critic then reviews the updated patch again and keeps flagging anything it deems necessary until no further issues remain (a minimal sketch of this loop follows the steps).
4: You receive code that’s already been internally reviewed.
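To make the workflow concrete, here is a minimal sketch of that propose, critique, revise loop. Jules’ internals aren’t public, so the function names and signatures below are illustrative stand-ins passed in as plain callables; only the control flow mirrors the steps above.

```python
from typing import Callable, List, Tuple

# A minimal sketch of the propose / critique / revise loop described above.
# The callables are placeholders for LLM calls; this only claims the control flow.

def review_loop(
    prompt: str,
    generate: Callable[[str], Tuple[str, str]],            # returns (patch, description)
    critique: Callable[[str, str], List[str]],              # returns a list of issues, empty if clean
    revise: Callable[[str, str, List[str]], Tuple[str, str]],
    max_rounds: int = 3,                                    # assumption: bound the loop so it terminates
) -> str:
    patch, description = generate(prompt)                   # step 1: the actor proposes a candidate patch
    for _ in range(max_rounds):
        issues = critique(patch, description)               # step 2: the critic judges patch + description in one pass
        if not issues:                                       # step 4: nothing left to flag
            break
        patch, description = revise(prompt, patch, issues)  # step 3: the actor addresses the feedback
    return patch
```

The round limit is an assumption of the sketch; the post itself only says the critic keeps flagging issues until none remain.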
This draws from actor-critic reinforcement learning (RL), where an “actor” generates actions and a “critic” evaluates them. In RL, this loop updates both the actor and the critic based on a learning signal. In our LLM-as-critic setup, the pattern is similar: propose, then evaluate. However, instead of updating model parameters, the critic’s feedback shapes the current state and the next steps. The same principle underpins research on LLM-as-a-judge, where the critic’s evaluation guides quality without retraining.
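The distinction can be made concrete: in actor-critic RL the critic’s signal becomes a gradient update to parameters, while in the LLM-as-critic setup nothing is retrained and the critique simply becomes part of the context for the next attempt. The sketch below is illustrative only; the prompt wording and names are assumptions, not Jules’ actual implementation.

```python
# Schematic contrast. In classic actor-critic RL, the temporal-difference error
# drives parameter updates (shown as comments, not runnable training code):
#
#   td_error       = reward + gamma * value(next_state) - value(state)
#   actor_params  += actor_lr  * td_error * grad_log_policy(state, action)
#   critic_params += critic_lr * td_error * grad_value(state)
#
# In the LLM-as-critic setup, no parameters change. The critique is folded
# into the context the actor sees on its next attempt.

def next_context(task: str, patch: str, critique: str) -> str:
    """Build the actor's next prompt from the critic's written feedback."""
    return (
        f"{task}\n\n"
        f"Candidate patch:\n{patch}\n\n"
        f"Reviewer feedback:\n{critique}\n\n"
        "Revise the patch to address the feedback."
    )
```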