Personalization refers to the ability to tailor the user experience to each individual user. Personalization in modern applications is essential to customer satisfaction, engagement, sales, marketing, and branding success. It is applied across a wide range of fields:
- Tailoring the ads each user sees on a website based on the individual’s past engagements
- Ranking accommodation options when booking travel customized to each individual’s past booking behavior
- Detecting and defending against fraud by identifying anomalies in each individual’s spending habits
Personalization goes well beyond acting on static attributes such as gender and age. When done well, personalization takes into account a massive number of data points to customize an experience based on everything known about a particular user: demographics, preferences, geo-location, interests, and past habits, all the way to their most recent behavior a few seconds earlier.
In fact, personalization has become so ubiquitous in today’s data-driven world that consumers simply expect fine-tuned personalization to be there. Users flock to more engaging experiences when consuming news content, where they can browse personalized “timelines” ordered by relevance to their recent interests and engagements. Consumers switch to payment methods with smart payment processing that catches fraudulent activity in real time, before the perpetrator walks away with an approved payment.
Developing great personalized experiences isn’t easy, but it is necessary. Finding creative, new ways to anticipate user wants and needs is more than just a fun diversion; it’s a competitive necessity for retaining customers and staying relevant. It’s also an opportunity to increase loyalty, engagement, and, ultimately, revenue.
Essential Capabilities for Personalization Applications
To build an effective modern personalization experience, there are a few capabilities that are critical:
- Capturing Real-Time Behavior and Trends: Understanding the present is a critical component of making the personalization relevant. Capturing the high volume of incoming data to inform personalization requires a database platform that is capable of ingesting these bursts of events as they flow in.
- Learning from Historical and Real-Time Data for Recommendations: Historical data is critical to modeling an effective personalization experience. This requires systems that can store extensive historical data efficiently and inexpensively.
- Rendering a Responsive Personalized Experience: It’s not enough for a personalization model to capture and analyze the past and present. It must, ultimately, be able to translate and render this data into a responsive personalized experience.
Capturing Real-Time Behavior and Trends
To improve the precision of personalization, it’s important to capture the behavior of users in real time. These events are likely the most relevant indicators of what the user will do next. However, capturing high-velocity data requires a database that can keep up with it. In some cases, this translates to hundreds of thousands of writes per second. An in-memory database architecture is essential to keep up with these rates.
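As a toy illustration of why ingest rates like these call for batched, in-memory writes, the sketch below buffers events client-side and applies them in a single flush, the way a Redis pipeline batches commands to cut round trips. The class and method names are illustrative, not a real client API.

```python
from collections import deque

class EventStore:
    """Toy in-memory store illustrating pipelined (batched) event writes.
    A stand-in for a Redis pipeline; not a real client library."""

    def __init__(self):
        self.events = {}          # user_id -> list of captured events
        self._pending = deque()   # buffered writes awaiting a single flush

    def queue(self, user_id, event):
        # Buffer the write instead of applying it immediately, the way a
        # pipeline buffers commands client-side before one round trip.
        self._pending.append((user_id, event))

    def flush(self):
        # Apply all buffered writes in one pass and report how many landed.
        count = len(self._pending)
        while self._pending:
            user_id, event = self._pending.popleft()
            self.events.setdefault(user_id, []).append(event)
        return count

store = EventStore()
for i in range(5):
    store.queue("user:42", {"type": "click", "item": i})
applied = store.flush()   # all five buffered events land in one flush
```

With a real client, the same pattern amortizes network latency across many writes, which is what makes burst ingest at this scale practical.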
Capturing behavior also requires a flexible data model that can adapt to agile development and accommodate emerging “behavior attributes.” Many database platforms rely on common-denominator data models: tables with rows and columns, or JSON documents. Storing widely varying data in these formats is inefficient. A flexible data model is crucial both for efficiency and for the ability to experiment with new attributes to capture.
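To make the schema-flexibility point concrete, here is a minimal sketch of hash-like per-user records, in the spirit of a Redis hash (HSET/HGETALL): each user carries only the attributes observed for them, and new attributes need no migration. The field names are invented for illustration.

```python
# In-memory stand-in for hash-like user records; each user record
# accepts arbitrary behavior attributes, unlike fixed columns.
users = {}

def track(user_id, **attrs):
    # Merge new behavior attributes into the user's record on the fly.
    users.setdefault(user_id, {}).update(attrs)

track("user:7", last_page="/shoes", device="mobile")
track("user:7", cart_value=59.90)           # new attribute, no schema change
track("user:9", referrer="newsletter")      # different attributes per user
```

A fixed-column table would instead force every row to reserve space (or NULLs) for every attribute any user might ever have, which is the inefficiency described above.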
Learning from Real-Time and Historical Data
Correlating immediate user behavior data with “past” data such as user preferences, demographics, and historical engagements is critical to training accurate models for the personalization experience. Historical data can be stored in systems like HDFS in Hadoop and processed for machine learning with platforms like Spark. Database platforms that support personalization must be designed to operate within these big data frameworks; it’s even better if they incorporate a native understanding of machine learning, so the production implementation can incorporate real-time changes and updates to stay accurate.
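The offline/online split described above can be pictured with a deliberately tiny sketch: a “training job” (standing in for Spark over HDFS) derives model parameters from historical data, and those parameters are then pushed into an in-memory store for real-time scoring. The model and all names here are illustrative assumptions, not any platform’s API.

```python
import math

def train_offline(history):
    # Stand-in for an offline Spark job: derive one weight per feature
    # from historical click/no-click examples.
    weights = {}
    for features, clicked in history:
        for f in features:
            weights[f] = weights.get(f, 0.0) + (1.0 if clicked else -1.0)
    return weights

model_store = {}  # stand-in for an in-memory database keyed by model name

def deploy(name, weights):
    # Push the trained parameters to the online store.
    model_store[name] = weights

def score(name, features):
    # Real-time scoring path: sum feature weights, squash to (0, 1).
    w = model_store[name]
    z = sum(w.get(f, 0.0) for f in features)
    return 1.0 / (1.0 + math.exp(-z))

history = [({"sports", "mobile"}, True),
           ({"finance"}, False),
           ({"sports"}, True)]
deploy("ctr-model", train_offline(history))
p = score("ctr-model", {"sports", "mobile"})   # high score for this profile
```

The key property is that the online path touches only the in-memory parameters, while the heavy lifting over historical data stays offline.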
Rendering a Responsive Personalized Experience
Not only do application users demand personalization that delights and inspires, they also demand that the experience be incredibly responsive. Fraud detection has to act before the money changes hands. Retail catalogs have to render the most recent trends with precision tailored to each individual’s behavior, even as user traffic scales.
Generating precise recommendations requires larger machine learning models, and as the models grow, a powerful central recommendation server is needed to generate results without sacrificing precision.
This level of responsiveness and scale requires a low-latency, in-memory architecture that can house the machine learning models to generate and serve recommendations without missing a beat. This database platform also needs to be able to deliver a low-latency, personalized experience at massive scale to accommodate any number of concurrent users.
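As a small sketch of the serving side, recommendations scored in memory can be returned top-N without sorting the full candidate set, analogous to reading the head of a Redis sorted set with ZREVRANGE. The scores below are made-up example data.

```python
import heapq

# Illustrative in-memory recommendation scores for one user.
scores = {"bike": 0.91, "helmet": 0.87, "lock": 0.42, "bell": 0.15}

def top_n(scores, n):
    # heapq.nlargest avoids a full sort when n is much smaller than
    # the candidate set, which matters at serving time.
    return heapq.nlargest(n, scores, key=scores.get)

recs = top_n(scores, 2)   # the two highest-scored items
```

Keeping the scored candidates in memory is what makes this lookup cheap enough to run on every page render for every concurrent user.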
Platform for Mastering the Data Processing Challenges of Personalization
The data processing challenges of capturing, learning, and personalizing are vast. Cobbling together a personalization platform that consists of multiple database systems—each with its own strength—can divert focus away from the business problems.
How Redis Overcomes the Challenges of Real-Time Personalization
Redis is an in-memory database valued for its high performance, extreme data structure versatility, and modular extensibility. Redis is designed to address all data processing challenges for personalization systems. Here’s a closer look at each of these attributes.
Redis In-Memory Architecture
Redis is an in-memory database that delivers extremely low latencies. The in-memory architecture enables faster data access than disk-based systems. Speed matters in all three stages of personalization (capture, learn, and personalize) and is critical when:
- Capturing real-time events at scale
- Scanning data for real-time trends and generating real-time models
- Rendering a responsive personalization experience
Redis Data Structures
Redis is not a traditional key-value store capable only of storing keys and their associated string values. It is a “data structures server” that can also hold more complex data structures such as sets, sorted sets, hashes, lists, bitmaps, and HyperLogLogs, and it provides a rich API of “verbs” on each data type, along with simple messaging, publish/subscribe, and analytic functions. These data types simplify tracking user attributes and behavior for personalization with great flexibility and efficiency. Algorithms like collaborative filtering can be implemented simply and with high performance using Redis data structures such as Sorted Sets and Hashes; a sample recommendation engine implemented in Go can be found here.
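To show the shape of the collaborative-filtering approach mentioned above, here is a minimal item-based sketch in pure Python: per-user item sets stand in for Redis Sets, and candidate scores are accumulated the way ZINCRBY builds up a Sorted Set. The users and items are invented example data.

```python
from collections import defaultdict

# Items each user has engaged with (stand-ins for Redis Sets).
likes = {
    "alice": {"bike", "helmet", "lock"},
    "bob":   {"bike", "helmet", "bell"},
    "carol": {"bike", "lock"},
}

def recommend(user):
    # candidates acts like a Sorted Set: item -> accumulated score.
    candidates = defaultdict(int)
    for other, items in likes.items():
        if other == user:
            continue
        overlap = len(likes[user] & items)   # crude user similarity
        for item in items - likes[user]:     # only items the user lacks
            candidates[item] += overlap      # ZINCRBY-style accumulation
    # Highest-scored unseen items first, like ZREVRANGE on the result.
    return sorted(candidates, key=candidates.get, reverse=True)

recs = recommend("carol")   # items carol's nearest neighbors liked
```

In Redis itself, the same flow maps naturally onto SINTER for overlaps and ZINCRBY/ZREVRANGE for the candidate ranking, which is why the text calls these structures a natural fit.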
Redis modules extend the data structures and the API, or “verbs,” within Redis. Modules simplify many personalization requirements.
- A series of incoming events can be easily captured using the time-series module, which you can find here.
- The Neural Redis module can be used to train and generate recommendations on real-time data with fast training times. You can find more details on the Neural Redis module here.
- The Redis-ML module (available here) allows ingesting machine learning models into Redis directly from well-known machine learning platforms, like Spark. It can store a random forest made up of multiple trees under a single key.
Once this machine learning model is in Redis, Redis-ML can generate real-time recommendations over it for interactive applications by running classifications against the forest key (e.g., a myforest key).
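As a rough pure-Python sketch of what serving such a model looks like (this mimics the shape of forest storage and classification, not the Redis-ML API itself), several trees live under one key and classification is a majority vote across them:

```python
forests = {}

def forest_add(key, tree):
    # Each "tree" here is just a function from a feature dict to a label;
    # trees accumulate under a single key, as in the myforest example.
    forests.setdefault(key, []).append(tree)

def forest_run(key, features):
    # Classify by majority vote across all trees under the key.
    votes = [tree(features) for tree in forests[key]]
    return max(set(votes), key=votes.count)

# Three trivial decision stumps stored under a single "myforest" key.
forest_add("myforest", lambda f: 1 if f["age"] > 30 else 0)
forest_add("myforest", lambda f: 1 if f["spend"] > 100 else 0)
forest_add("myforest", lambda f: 0)

label = forest_run("myforest", {"age": 35, "spend": 150})   # votes: 1, 1, 0
```

Because the whole forest is resident in memory under one key, a classification is a single fast lookup-and-vote, which is what makes per-request recommendation serving feasible.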
The RedisTF (Redis-TensorFlow) module can also host “tensors” for a similar goal; it can be found here.
These are only some of the examples that demonstrate how users can use modules today in order to simplify difficult personalization problems using Redis’ real-time, extensible architecture.
Using Redis with the Big Data Ecosystem Is Simple
Due to its popularity among developers, Redis has one of the largest and most engaged open source communities out there. At any given moment, a diverse set of users is actively solving different problems and contributing to Redis’ robust ecosystem of tools and products. Redis users don’t have to look far to find connectors, bridges, and add-on modules for Hadoop, Spark, and other popular big data tools.