Inductive bias is a buzzword in machine learning, and an important concept in general (the concept applies in many fields, they just don't necessarily call it by this name). It refers to the bias or assumptions a given learning method has and uses in making its inductions. So for example, when given a lot of data points in categories, you might use the "nearest neighbor algorithm" to categorize an incoming unlabeled data point, which is to find the closest known example (by some measure of similarity) and say it has the same label as that one. That method has a bias: it presumes that similar elements will have similar outcomes. That presumption may not be correct, but often it works out (the trick is to find a way of measuring similarity such that it's true).
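The nearest neighbor idea is simple enough to sketch in a few lines. Here's a minimal 1-nearest-neighbor classifier in Python (the point coordinates and labels are made up for illustration); note that the choice of distance function is exactly where the bias lives:

```python
import math

def nearest_neighbor(labeled_points, query):
    """Label `query` with the label of the closest labeled point.

    `labeled_points` is a list of (point, label) pairs. Euclidean
    distance is used here, but the choice of distance metric IS the
    inductive bias: it defines which points count as "similar".
    """
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    closest_point, label = min(labeled_points, key=lambda pl: dist(pl[0], query))
    return label

# Toy data: two clusters, labeled "red" and "blue".
data = [((0, 0), "red"), ((1, 0), "red"), ((5, 5), "blue"), ((6, 5), "blue")]
print(nearest_neighbor(data, (0.5, 0.2)))  # near the red cluster -> "red"
```

Swap in a different distance function (say, one that weights one coordinate heavily) and the same data can yield different labels, which is the "find a way of measuring similarity such that it's true" part of the trick.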
People often shudder at hearing about "bias" in learning. They figure bias is somehow a bad thing, something to be avoided. I've heard that some biologists proudly declare that their parsimony model is "assumptionless." After all, if you're biased in your learning, you won't be able to induce some concepts, right? Well, yes, but that's not a bad thing. In fact, inductive bias is essential to learning. You cannot learn anything unless you have some inductive bias. Without some way of preferring one potential concept over another, all the infinitely many ones (literally) that fit the data at hand are equally viable, and you cannot induce that any one should be followed. The famous Ugly Duckling Theorem shows that under a completely unbiased measure, everything in the universe is exactly as similar to and as different from everything else in the universe, so there's nothing you can infer. So we have the notion that some concepts or rules are more "natural" than others, in some circumstance or another. Like the idea that similar elements have similar outcomes, or parsimony, which is an example of Occam's Razor, which presumes that "simple" explanations (for some definition of "simple") are to be preferred.
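The underdetermination point can be made concrete with two hypotheses I've cooked up that agree perfectly on the training data yet disagree everywhere else. The data alone cannot pick between them; only a bias (say, Occam's Razor preferring the simpler line) can:

```python
# Three training points that lie on the line y = x.
train = [(0, 0), (1, 1), (2, 2)]

def simple(x):
    # The "line" hypothesis: y = x.
    return x

def baroque(x):
    # Also fits all three training points exactly (the extra term
    # vanishes at x = 0, 1, 2), then diverges from the line.
    return x + x * (x - 1) * (x - 2)

# Both hypotheses fit the data perfectly...
assert all(simple(x) == y and baroque(x) == y for x, y in train)

# ...but disagree on a new point; the data can't arbitrate.
print(simple(3), baroque(3))  # prints: 3 9
```

And this is just one of the infinitely many alternatives: add any term that vanishes at the training inputs and you get another perfectly-fitting hypothesis.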
So be proud of bias, at least here. Recognize that every conclusion you draw, consciously or unconsciously, is based on some pre-existing bias or preference you have as to what conclusions should be like. If you know it's there, you can better understand its effects on your thinking.