What is the StandardScaler() function of sklearn?

I found the code below in the link Classifying the Iris Data Set with Keras, and I would like to understand what StandardScaler() is for. The comment says it is important for convergence?

import numpy as np

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder, StandardScaler

iris = load_iris()
X = iris['data']
y = iris['target']
names = iris['target_names']
feature_names = iris['feature_names']

# One hot encoding
enc = OneHotEncoder()
Y = enc.fit_transform(y[:, np.newaxis]).toarray()

# Scale data to have mean 0 and variance 1,
# which is important for convergence of the neural network
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Split the data set into training and testing
X_train, X_test, Y_train, Y_test = train_test_split(
    X_scaled, Y, test_size=0.5, random_state=2)

n_features = X.shape[1]
n_classes = Y.shape[1]
Author: Ronaldo Souza, 2019-12-08

1 answer

StandardScaler implements the Transformer API: it computes the mean and standard deviation of each feature on a training set, so that the same transformation can later be reapplied to the test set. This class is therefore suitable for the early steps of a sklearn.pipeline.Pipeline. You can read the full documentation for this library at: https://scikit-learn.org/stable/modules/preprocessing.html
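To make that fit-on-train / reapply-on-test behaviour concrete, here is a minimal sketch reusing the Iris data from the question. The LogisticRegression at the end is just a placeholder estimator to show the scaler inside a Pipeline; it is not part of the original code.

import numpy as np

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=2)

# Fit the scaler on the training split only: it stores the
# per-feature mean (scaler.mean_) and standard deviation (scaler.scale_).
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)

# Reapply the *same* training-set statistics to the test split,
# so no information from the test data leaks into the scaling.
X_test_scaled = scaler.transform(X_test)

print(X_train_scaled.mean(axis=0))  # approximately 0 for each feature
print(X_train_scaled.std(axis=0))   # approximately 1 for each feature

# The same idea inside a Pipeline: the scaler is the first step, and
# fit/score apply the fit-on-train / transform-on-test logic automatically.
pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression())])
pipe.fit(X_train, y_train)
print(pipe.score(X_test, y_test))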

Author: Pedro Geek, 2019-12-08 20:31:30