In standard neural networks, the bias term is learned but always-on: it is added unconditionally to every pre-activation.
This paper explores a minimal alternative: make the bias a gated, bounded contribution instead of an unconditional offset. I introduce the Regulated Bias Neuron (RBN), where the bias term is scaled by a trainable gate:
y = \phi\left(\sum w_ix_i + \beta \cdot b\right), \quad \beta \in (0,1)
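For concreteness, here is a minimal sketch of a single RBN in plain Python. The sigmoid parameterization of the gate (so that \(\beta \in (0,1)\)) and the tanh activation are my assumptions for illustration, not necessarily the paper's choices:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def rbn_forward(x, w, b, g, phi=math.tanh):
    """y = phi(sum_i w_i * x_i + beta * b), with beta = sigmoid(g) in (0, 1).

    g is the raw (unconstrained) gate parameter that would be trained
    alongside w and b; beta itself is the observable bias-reliance signal.
    """
    beta = sigmoid(g)
    pre = sum(wi * xi for wi, xi in zip(w, x)) + beta * b
    return phi(pre), beta

# A strongly positive gate parameter leaves the bias essentially on;
# a strongly negative one switches its contribution off.
y_on, beta_on = rbn_forward([1.0, 2.0], [0.5, -0.25], b=1.0, g=4.0)
y_off, beta_off = rbn_forward([1.0, 2.0], [0.5, -0.25], b=1.0, g=-4.0)
```

Because \(\beta\) is bounded and trained end-to-end, reading it out per neuron gives a direct measure of how much each unit leans on its bias.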
The goal isn’t to redefine intelligence or add training complexity; it’s to expose bias reliance as an observable internal signal and to give the model structural control over when bias helps versus when it dominates. The full analytical thesis is attached as a PDF. I’d welcome feedback from ML engineers and researchers who think about stability, interpretability, and minimal architectural changes.