Move feature_scaling to the model layer
Created by: scarlehoff
I am creating the interpolator just once, including the call to the log function (so the same one is always used across the code), and then compiling the `pdf_model` with it, so that any input given to the fit/predict/etc. functions will be "scaled".
The same is not true, however, for the tensors: I didn't think it made sense to scale them automatically, because if you are creating a model with a scaler you must have access to the scaler anyway, and doing it automatically would introduce a few extra (unnecessary) complications.
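For contrast, a hedged sketch of the manual step this implies for constant tensors (the grid and all names are made up for illustration):

```python
import numpy as np
import tensorflow as tf

# Any constant tensor fed to the model, e.g. an integration grid, has to
# be passed through the same scaler object by hand before being used
def scaler(x):  # stand-in for the scaler created at model-building time
    return 2.0 * (tf.math.log(x) - np.log(1e-7)) / (0.0 - np.log(1e-7)) - 1.0

integration_grid = np.logspace(-6, 0, 50).reshape(-1, 1).astype("float32")
scaled_grid = scaler(tf.constant(integration_grid))
```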
I've also "disconnected" the interpolation from the fact that the beta exponent is set to 1 because. I think it is an option interesting enough that it should be usable by itself (right now it gets activated automatically when the interpolation is used)
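A purely illustrative sketch of the decoupling (the function and argument names are hypothetical, not the actual n3fit API):

```python
# Before, setting the beta exponent to 1 was implied by passing a scaler;
# now the two options are independent and either can be used alone
def generate_pdf_model(scaler=None, fix_beta_to_one=False):
    beta = 1.0 if fix_beta_to_one else None  # None -> beta is fitted as usual
    return {"scaler": scaler, "fixed_beta": beta}

# the beta=1 option without any interpolation/scaling of the input
model_cfg = generate_pdf_model(fix_beta_to_one=True)
```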
With respect to the name I've used for the feature scaling function (`scaler`), feel free to suggest a different one and I'll change it throughout.
@RoyStegeman, is there a fit that was run with the latest version of the other PR? I'd like to use it to make sure I haven't broken anything with these changes.
In summary:
- when creating the model you need to know whether or not you are scaling the input, and provide the correct type of input.
- after model creation, the PDF is what you would expect from a PDF (i.e., `PDF(x)` produces the value of the PDF for `x`, with `x \in (0,1)`), but internally the input gets scaled (see the sketch after this list).
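A self-contained sketch of that contract (the internal scaling below is a stand-in, not the real interpolator):

```python
import numpy as np
import tensorflow as tf

x_in = tf.keras.Input(shape=(1,))
scaled = tf.keras.layers.Lambda(lambda x: tf.math.log(x))(x_in)  # internal scaling
pdf = tf.keras.Model(x_in, tf.keras.layers.Dense(8)(scaled))

# the caller works with plain x in (0, 1); the scaling never leaks out
x = np.array([[1e-5], [1e-3], [0.5]], dtype="float32")
values = pdf(x)
```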
In doing this I noticed a few problems in `MetaModel`. The first is that it made no sense to keep `apply_as_layer` and the parser for the tensors separate, because they are always used together.
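A hypothetical sketch of the merge (the method name follows the PR; the body is illustrative, not the real `MetaModel`):

```python
import tensorflow as tf

class MetaModel:
    """Toy wrapper standing in for the real MetaModel."""

    def __init__(self, model):
        self._model = model

    def apply_as_layer(self, x):
        # parsing the tensors and applying the model now happen in one call,
        # since the two steps were always used together anyway
        parsed = {key: tf.convert_to_tensor(value) for key, value in x.items()}
        return parsed, self._model(parsed)
```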
There was also a `parse_input(self.x_in)` in a few fitting functions that should actually be `parse_input(x)`, to be more consistent with TensorFlow (even though we never pass the input at fit time).
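A sketch of that fix (`parse_input` and the class below are stand-ins, and the fallback to `self.x_in` is my reading of why the behaviour is unchanged in practice):

```python
def parse_input(x):
    """Stand-in for MetaModel's tensor parser."""
    return x

class ToyMetaModel:
    def __init__(self, x_in):
        self.x_in = x_in

    def perform_fit(self, x=None):
        if x is None:
            x = self.x_in  # in practice the input is never given at fit time
        # before: parse_input(self.x_in), ignoring the argument;
        # after: parse the x actually received, as TensorFlow would
        return parse_input(x)
```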