First of all, my question is related to ML.NET, but I'm also curious whether techniques for this exist in other frameworks if ML.NET doesn't support it.
My training data consists of an input vector of 220 floats plus a float label in the range -1..1. I have hundreds of thousands of rows that I run through the training pipeline. So far I have only tried the FastTree algorithm. One thing I'd like to ask: which algorithm do you think would work best for this type of data?
Then I create a prediction engine, pass it vectors of 220 float values, and obtain predicted values. In most cases the predicted value is within the range -1..1 as I expect, but quite often it falls outside that range, e.g. 1.23 or -1.12 (I haven't seen anything beyond ±1.5 yet). Is there a way to specify, either in the prediction engine or in the model I train, that the predicted value must lie in the range -1..1?
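For reference, here is roughly what my setup looks like. The class names, column names, and the random training data are just placeholders for illustration; the last line shows the post-hoc clamping workaround I'm currently using, which is what I'd like to avoid by constraining the model itself:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.ML;
using Microsoft.ML.Data;

public class InputRow
{
    [VectorType(220)]
    public float[] Features { get; set; }
    public float Label { get; set; }
}

public class Prediction
{
    [ColumnName("Score")]
    public float Score { get; set; }
}

class Program
{
    static void Main()
    {
        var mlContext = new MLContext(seed: 0);

        // Placeholder data: in reality I load hundreds of thousands of rows.
        var rng = new Random(0);
        var rows = Enumerable.Range(0, 1000).Select(_ => new InputRow
        {
            Features = Enumerable.Range(0, 220).Select(__ => (float)rng.NextDouble()).ToArray(),
            Label = (float)(rng.NextDouble() * 2 - 1) // label in [-1, 1]
        }).ToList();
        IDataView data = mlContext.Data.LoadFromEnumerable(rows);

        var pipeline = mlContext.Regression.Trainers.FastTree(
            labelColumnName: "Label", featureColumnName: "Features");
        var model = pipeline.Fit(data);

        var engine = mlContext.Model.CreatePredictionEngine<InputRow, Prediction>(model);
        var prediction = engine.Predict(rows[0]);

        // prediction.Score can land outside [-1, 1]; currently I just clamp it afterwards.
        float clamped = Math.Clamp(prediction.Score, -1f, 1f);
    }
}
```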