Keras preprocessing layers make it easier to build end-to-end machine learning pipelines that handle raw text, numbers, categories, and images directly inside your model. This article walks through the available preprocessing layers, the adapt() method, and different strategies for placing preprocessing either inside the model or in the tf.data pipeline. You’ll also learn how to combine preprocessing with multi-worker training and export portable inference models, ensuring consistency, scalability, and better performance across environments.

Beginner’s Guide to Keras Preprocessing Layers


Content Overview

  • Keras preprocessing
  • Available preprocessing
  • Text preprocessing
  • Numerical features preprocessing
  • Categorical features preprocessing
  • Image preprocessing
  • Image data augmentation
  • The adapt() method
  • Preprocessing data before the model or inside the model
  • Benefits of doing preprocessing inside the model at inference time
  • Preprocessing during multi-worker training


Keras preprocessing

The Keras preprocessing layers API allows developers to build Keras-native input processing pipelines. These input processing pipelines can be used as independent preprocessing code in non-Keras workflows, combined directly with Keras models, and exported as part of a Keras SavedModel.

With Keras preprocessing layers, you can build and export models that are truly end-to-end: models that accept raw images or raw structured data as input; models that handle feature normalization or feature value indexing on their own.

Available preprocessing

Text preprocessing

  • tf.keras.layers.TextVectorization: turns raw strings into an encoded representation that can be read by an Embedding layer or Dense layer.

Numerical features preprocessing

  • tf.keras.layers.Normalization: performs feature-wise normalization of input features.
  • tf.keras.layers.Discretization: turns continuous numerical features into integer categorical features.
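
As a quick illustration, here is a minimal sketch of Discretization using explicitly supplied bucket boundaries (the ages and boundary values are invented for this example; adapt() can compute boundaries from data instead):

import numpy as np
import tensorflow as tf
from keras import layers

# Hypothetical ages to bucket; the boundaries are chosen arbitrarily.
ages = np.array([[12.0], [35.0], [61.0], [88.0]])
discretizer = layers.Discretization(bin_boundaries=[18.0, 40.0, 65.0])
print(discretizer(ages))  # [[0], [1], [2], [3]] as integer bucket indices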

Categorical features preprocessing

  • tf.keras.layers.CategoryEncoding: turns integer categorical features into one-hot, multi-hot, or count dense representations.
  • tf.keras.layers.Hashing: performs categorical feature hashing, also known as the "hashing trick".
  • tf.keras.layers.StringLookup: turns string categorical values into an encoded representation that can be read by an Embedding layer or Dense layer.
  • tf.keras.layers.IntegerLookup: turns integer categorical values into an encoded representation that can be read by an Embedding layer or Dense layer.
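
For instance, a minimal sketch of CategoryEncoding and Hashing (the data, token count, and bin count are arbitrary illustrative choices; a StringLookup example appears later in this article):

import tensorflow as tf
from keras import layers

# Integer categorical features -> dense multi-hot vectors.
data = tf.constant([[0, 1], [2, 2]])
encoder = layers.CategoryEncoding(num_tokens=4, output_mode="multi_hot")
print(encoder(data))  # [[1, 1, 0, 0], [0, 0, 1, 0]]

# The "hashing trick": map arbitrary strings into a fixed number of bins.
hasher = layers.Hashing(num_bins=3)
print(hasher(tf.constant([["cat"], ["dog"], ["fish"]])))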

Image preprocessing

These layers are for standardizing the inputs of an image model.

  • tf.keras.layers.Resizing: resizes a batch of images to a target size.
  • tf.keras.layers.Rescaling: rescales and offsets the values of a batch of images (e.g. going from inputs in the [0, 255] range to inputs in the [0, 1] range).
  • tf.keras.layers.CenterCrop: returns a center crop of a batch of images.
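
Here is a minimal sketch chaining the three layers (the batch and image sizes are arbitrary):

import tensorflow as tf
from keras import layers

# A stand-in batch of 32 RGB images with values in the [0, 255] range.
images = tf.random.uniform((32, 180, 180, 3), maxval=255.0)

x = layers.Resizing(150, 150)(images)      # resize to 150x150
x = layers.CenterCrop(128, 128)(x)         # keep the central 128x128 region
x = layers.Rescaling(scale=1.0 / 255)(x)   # map [0, 255] -> [0, 1]
print(x.shape)  # (32, 128, 128, 3)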

Image data augmentation

These layers apply random augmentation transforms to a batch of images. They are only active during training.

  • tf.keras.layers.RandomCrop
  • tf.keras.layers.RandomFlip
  • tf.keras.layers.RandomTranslation
  • tf.keras.layers.RandomRotation
  • tf.keras.layers.RandomZoom
  • tf.keras.layers.RandomContrast
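
A common pattern is to group several of these into a Sequential model; the sketch below (with arbitrary parameter values) shows that the random transforms fire only when the layers are called with training=True:

import tensorflow as tf
from tensorflow import keras
from keras import layers

# Group a few augmentation layers; parameter values here are arbitrary.
data_augmentation = keras.Sequential(
    [
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),
        layers.RandomZoom(0.1),
    ]
)

images = tf.random.uniform((8, 64, 64, 3))
augmented = data_augmentation(images, training=True)     # transforms applied
passthrough = data_augmentation(images, training=False)  # images unchanged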

The adapt() method

Some preprocessing layers have an internal state that can be computed based on a sample of the training data. The list of stateful preprocessing layers is:

  • TextVectorization: holds a mapping between string tokens and integer indices
  • StringLookup and IntegerLookup: hold a mapping between input values and integer indices.
  • Normalization: holds the mean and standard deviation of the features.
  • Discretization: holds information about value bucket boundaries.

Crucially, these layers are non-trainable. Their state is not set during training; it must be set before training, either by initializing them from a precomputed constant, or by "adapting" them on data.

You set the state of a preprocessing layer by exposing it to training data, via the adapt() method:

import numpy as np
import tensorflow as tf
from tensorflow import keras
from keras import layers

data = np.array(
    [
        [0.1, 0.2, 0.3],
        [0.8, 0.9, 1.0],
        [1.5, 1.6, 1.7],
    ]
)
layer = layers.Normalization()
layer.adapt(data)
normalized_data = layer(data)

print("Features mean: %.2f" % (normalized_data.numpy().mean()))
print("Features std: %.2f" % (normalized_data.numpy().std()))

Features mean: -0.00
Features std: 1.00

The adapt() method takes either a Numpy array or a tf.data.Dataset object. In the case of StringLookup and TextVectorization, you can also pass a list of strings:

data = [
    "ξεῖν᾽, ἦ τοι μὲν ὄνειροι ἀμήχανοι ἀκριτόμυθοι",
    "γίγνοντ᾽, οὐδέ τι πάντα τελείεται ἀνθρώποισι.",
    "δοιαὶ γάρ τε πύλαι ἀμενηνῶν εἰσὶν ὀνείρων:",
    "αἱ μὲν γὰρ κεράεσσι τετεύχαται, αἱ δ᾽ ἐλέφαντι:",
    "τῶν οἳ μέν κ᾽ ἔλθωσι διὰ πριστοῦ ἐλέφαντος,",
    "οἵ ῥ᾽ ἐλεφαίρονται, ἔπε᾽ ἀκράαντα φέροντες:",
    "οἱ δὲ διὰ ξεστῶν κεράων ἔλθωσι θύραζε,",
    "οἵ ῥ᾽ ἔτυμα κραίνουσι, βροτῶν ὅτε κέν τις ἴδηται.",
]
layer = layers.TextVectorization()
layer.adapt(data)
vectorized_text = layer(data)
print(vectorized_text)

tf.Tensor(
[[37 12 25  5  9 20 21  0  0]
 [51 34 27 33 29 18  0  0  0]
 [49 52 30 31 19 46 10  0  0]
 [ 7  5 50 43 28  7 47 17  0]
 [24 35 39 40  3  6 32 16  0]
 [ 4  2 15 14 22 23  0  0  0]
 [36 48  6 38 42  3 45  0  0]
 [ 4  2 13 41 53  8 44 26 11]], shape=(8, 9), dtype=int64)
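
adapt() works the same way on a tf.data.Dataset; here is a minimal sketch using randomly generated stand-in features:

import numpy as np
import tensorflow as tf
from keras import layers

# Random stand-in data; adapt() iterates the dataset to compute statistics.
dataset = tf.data.Dataset.from_tensor_slices(
    np.random.rand(64, 3).astype("float32")
).batch(16)

norm = layers.Normalization()
norm.adapt(dataset)  # computes per-feature mean and variance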

In addition, adaptable layers always expose an option to directly set state via constructor arguments or weight assignment. If the intended state values are known at layer construction time, or are calculated outside of the adapt() call, they can be set without relying on the layer's internal computation. For instance, if external vocabulary files for the TextVectorization, StringLookup, or IntegerLookup layers already exist, those can be loaded directly into the lookup tables by passing a path to the vocabulary file in the layer's constructor arguments.

Here's an example where you instantiate a StringLookup layer with precomputed vocabulary:

vocab = ["a", "b", "c", "d"]
data = tf.constant([["a", "c", "d"], ["d", "z", "b"]])
layer = layers.StringLookup(vocabulary=vocab)
vectorized_data = layer(data)
print(vectorized_data)

tf.Tensor(
[[1 3 4]
 [4 0 2]], shape=(2, 3), dtype=int64)

Preprocessing data before the model or inside the model

There are two ways you could be using preprocessing layers:

Option 1: Make them part of the model, like this:

inputs = keras.Input(shape=input_shape)
x = preprocessing_layer(inputs)
outputs = rest_of_the_model(x)
model = keras.Model(inputs, outputs)

With this option, preprocessing will happen on device, synchronously with the rest of the model execution, meaning that it will benefit from GPU acceleration. If you're training on a GPU, this is the best option for the Normalization layer, and for all image preprocessing and data augmentation layers.

Option 2: Apply them to your tf.data.Dataset, so as to obtain a dataset that yields batches of preprocessed data, like this:

dataset = dataset.map(lambda x, y: (preprocessing_layer(x), y))

With this option, your preprocessing will happen on a CPU, asynchronously, and will be buffered before going into the model. In addition, if you call dataset.prefetch(tf.data.AUTOTUNE) on your dataset, the preprocessing will happen efficiently in parallel with training:

dataset = dataset.map(lambda x, y: (preprocessing_layer(x), y))
dataset = dataset.prefetch(tf.data.AUTOTUNE)
model.fit(dataset, ...)

This is the best option for TextVectorization, and all structured data preprocessing layers. It can also be a good option if you're training on a CPU and you use image preprocessing layers.

Note that the TextVectorization layer can only be executed on a CPU, as it is mostly a dictionary lookup operation. Therefore, if you are training your model on a GPU or a TPU, you should put the TextVectorization layer in the tf.data pipeline to get the best performance.
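
For example, here is a minimal sketch of that placement; text_dataset, assumed to yield (string, label) pairs, is hypothetical:

# Hypothetical: text_dataset yields (string, label) pairs.
vectorizer = layers.TextVectorization()
vectorizer.adapt(text_dataset.map(lambda text, label: text))

train_ds = text_dataset.map(
    lambda text, label: (vectorizer(text), label),
    num_parallel_calls=tf.data.AUTOTUNE,
).prefetch(tf.data.AUTOTUNE)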

When running on a TPU, you should always place preprocessing layers in the tf.data pipeline (with the exception of Normalization and Rescaling, which run fine on a TPU and are commonly used as the first layer in an image model).

Benefits of doing preprocessing inside the model at inference time

Even if you go with option 2, you may later want to export an inference-only end-to-end model that includes the preprocessing layers. The key benefit of doing this is that it makes your model portable and helps reduce training/serving skew.

When all data preprocessing is part of the model, other people can load and use your model without having to be aware of how each feature is expected to be encoded & normalized. Your inference model will be able to process raw images or raw structured data, and will not require users of the model to be aware of the details of e.g. the tokenization scheme used for text, the indexing scheme used for categorical features, whether image pixel values are normalized to [-1, +1] or to [0, 1], etc. This is especially powerful if you're exporting your model to another runtime, such as TensorFlow.js: you won't have to reimplement your preprocessing pipeline in JavaScript.

If you initially put your preprocessing layers in your tf.data pipeline, you can export an inference model that packages the preprocessing. Simply instantiate a new model that chains your preprocessing layers and your training model:

inputs = keras.Input(shape=input_shape)
x = preprocessing_layer(inputs)
outputs = training_model(x)
inference_model = keras.Model(inputs, outputs)

Preprocessing during multi-worker training

Preprocessing layers are compatible with the tf.distribute API for running training across multiple machines.

In general, preprocessing layers should be placed inside a tf.distribute.Strategy.scope() and called either inside or before the model as discussed above.

with strategy.scope():
    inputs = keras.Input(shape=input_shape)
    preprocessing_layer = tf.keras.layers.Hashing(10)
    dense_layer = tf.keras.layers.Dense(16)
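
To complete the picture, here is a hedged sketch of actually calling such layers inside the scope; the string input shape and the one-hot encoding step are illustrative assumptions, not part of the original snippet:

with strategy.scope():
    # Assumed: one string feature per example.
    inputs = keras.Input(shape=(1,), dtype=tf.string)
    hashed = tf.keras.layers.Hashing(10)(inputs)
    one_hot = tf.keras.layers.CategoryEncoding(
        num_tokens=10, output_mode="one_hot"
    )(hashed)
    outputs = tf.keras.layers.Dense(16)(one_hot)
    model = keras.Model(inputs, outputs)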

For more details, refer to the Data preprocessing section of the Distributed input tutorial.


Note: Originally published on the TensorFlow website, this article appears here under a new headline and is licensed under CC BY 4.0. Code samples are shared under the Apache 2.0 License.

