Biased Neuron in Neural Nets
Bias in neural networks is usually an always-on additive offset: learned, but applied unconditionally.
This paper explores a minimal alternative: make bias a learnable, bounded contribution instead of an always-on offset. I introduce a Regulated Bias Neuron (RBN), where the bias term is scaled by a trainable gate:
y = \phi\left(\sum w_ix_i + \beta \cdot b\right), \quad \beta \in (0,1)
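The equation above can be sketched as a single neuron in plain numpy. This is an illustrative implementation, not the author's: the gate is kept in (0, 1) by passing an unconstrained parameter through a sigmoid, and tanh stands in for the activation \phi; all names (`RegulatedBiasNeuron`, `beta_raw`) are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RegulatedBiasNeuron:
    """Single neuron whose bias term is scaled by a trainable gate beta in (0, 1).

    beta_raw is unconstrained; beta = sigmoid(beta_raw) stays inside (0, 1),
    so the bias contribution beta * b is bounded rather than always-on.
    """

    def __init__(self, n_inputs, rng=None):
        rng = rng or np.random.default_rng(0)
        self.w = rng.normal(scale=0.1, size=n_inputs)  # input weights
        self.b = 1.0        # bias magnitude (could also be learned)
        self.beta_raw = 0.0  # unconstrained gate parameter; sigmoid(0) = 0.5

    @property
    def beta(self):
        # Bounded gate: observable "bias reliance" signal in (0, 1)
        return sigmoid(self.beta_raw)

    def forward(self, x):
        # y = phi(sum_i w_i x_i + beta * b), with phi = tanh here
        return np.tanh(self.w @ x + self.beta * self.b)
```

Reading off `neuron.beta` during training is what makes bias reliance observable: a gate drifting toward 0 means the neuron has learned to suppress its bias, while a gate near 1 recovers the standard always-on offset.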
The goal isn’t to redefine intelligence or add training complexity; it’s to expose bias reliance as an observable internal signal and give the model structural control over when bias helps versus when it dominates. Full analytical thesis attached as a PDF. I’m interested in feedback from ML engineers and researchers who think about stability, interpretability, and minimal architectural changes.