Entropy. 2023 Jul 14;25(7):1066. doi: 10.3390/e25071066

Fair Max–Min Diversity Maximization in Streaming and Sliding-Window Models

Yanhao Wang 1, Francesco Fabbri 2, Michael Mathioudakis 3,*, Jia Li 1
Editor: Jaesung Lee
PMCID: PMC10378839  PMID: 37510013

Abstract

Diversity maximization is a fundamental problem with broad applications in data summarization, web search, and recommender systems. Given a set X of n elements, the problem asks for a subset S of k ≪ n elements with maximum diversity, as quantified by the dissimilarities among the elements in S. In this paper, we study diversity maximization with fairness constraints in streaming and sliding-window models. Specifically, we focus on the max–min diversity maximization problem, which selects a subset S that maximizes the minimum distance (dissimilarity) between any pair of distinct elements within it. Assuming that the set X is partitioned into m disjoint groups by a specific sensitive attribute, e.g., sex or race, ensuring fairness requires that the selected subset S contains k_i elements from each group i ∈ [m]. Although diversity maximization has been extensively studied, existing algorithms for fair max–min diversity maximization are inefficient for data streams. To address the problem, we first design efficient approximation algorithms for this problem in the (insert-only) streaming model, where data arrive one element at a time, and a solution should be computed based on the elements observed in one pass. Furthermore, we propose approximation algorithms for this problem in the sliding-window model, where only the latest w elements in the stream are considered for computation to capture the recency of the data. Experimental results on real-world and synthetic datasets show that our algorithms provide solutions of comparable quality to the state-of-the-art offline algorithms while running several orders of magnitude faster in the streaming and sliding-window settings.

Keywords: diversity maximization, group fairness, streaming algorithm, sliding-window algorithm

1. Introduction

Data summarization is a common approach to tackling the challenges of a large volume of data in data-intensive applications. That is because, rather than performing high-complexity analyses on the whole dataset, it is often beneficial to perform them on a representative and significantly smaller summary of the dataset, thus reducing the processing costs in terms of both running time and space usage. Typical techniques for data summarization [1] include sampling, sketching, coresets, and diverse data selection.

In this paper, we focus on diversity-aware data summarization, which finds application in a wide range of real-world problems. For example, in database query processing [2,3], web search [4,5], and recommender systems [6], the output might be too large to be presented to the user in its entirety, even after filtering the results by relevance. One feasible solution, then, is to present the user with a small but diverse subset that is easy to process and representative of the complete results. As another example, when training machine learning models on massive data, feature and subset selection is a standard method to improve efficiency. As indicated by [7,8], selecting diverse features or subsets can lead to a better balance between efficiency and accuracy. A key technical problem in such cases is diversity maximization [9,10,11,12,13,14,15,16,17,18,19,20].

In more detail, for a given set X of elements in some metric space and a size constraint k, diversity maximization asks for a subset of k elements with maximum diversity. Formally, diversity is quantified by a function that captures how well a subset spans the range of elements in X, and is typically defined in terms of distances or dissimilarities among elements in the subset. Prior studies [3,4,6,12] have suggested many different objectives of this kind. Two of the most popular ones are max–sum dispersion, which aims to maximize the sum of the distances between all pairs of elements in the selected subset S, and max–min dispersion, which aims to maximize the minimum distance between any pair of distinct elements in S. Figure 1 illustrates a selection of the 10 most diverse points from a two-dimensional point set with each of the two objectives for diversity maximization. As shown in Figure 1, max–sum dispersion tends to select “outliers” and may include highly similar elements in the solution, making it unsuitable for applications requiring more uniform coverage of the span of data. Therefore, we focus on diversity maximization with the objective of max–min dispersion, referred to as max–min diversity maximization, in this paper.

Figure 1. Comparison of (a) max–sum dispersion (MSD) and (b) max–min dispersion (MMD) for diversity maximization on a dataset of one hundred points. We use circles to denote all the points in the dataset and crossmarks to denote the points selected by MSD and MMD.
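To make the two dispersion objectives concrete, the following minimal Python sketch (our own illustration; the function and variable names are not from the paper) evaluates both of them for a candidate subset of a random point set:

```python
import numpy as np
from itertools import combinations

def max_sum_dispersion(S):
    """Sum of pairwise Euclidean distances within S (MSD objective)."""
    return sum(np.linalg.norm(x - y) for x, y in combinations(S, 2))

def max_min_dispersion(S):
    """Minimum pairwise Euclidean distance within S, i.e., div(S) (MMD)."""
    return min(np.linalg.norm(x - y) for x, y in combinations(S, 2))

rng = np.random.default_rng(0)
points = rng.uniform(size=(100, 2))   # 100 random 2-D points
subset = points[:10]                  # any size-10 candidate subset
print(max_sum_dispersion(subset), max_min_dispersion(subset))
```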

In addition to diversity, fairness in data summarization is also attracting increasing attention [8,21,22,23,24,25,26,27]. Several studies reveal that the biases with respect to (w.r.t.) sensitive attributes, such as sex, race, or age, in underlying datasets can be retained in the summaries and could lead to unfairness in data-driven social computational systems such as education, recruitment, and banking [8,23,26]. One of the most common notions of fairness in data summarization is group fairness [8,21,22,23,27], which partitions the dataset into m disjoint groups based on a specific sensitive attribute and introduces a fairness constraint that limits the number of elements from group i in the data summary to k_i for every group i ∈ [m] (see Figure 2 for an illustrative example). However, most existing methods for diversity maximization cannot easily be adapted to satisfy such fairness constraints. Moreover, the few methods that can deal with fairness constraints are specific to max–sum diversity maximization [9,11,13]. To the best of our knowledge, the methods in [17,20] are the only means of max–min diversity maximization with fairness constraints.

Figure 2. Comparison of (a) unconstrained max–min diversity maximization and (b) fair max–min diversity maximization. We have a set of individuals, each described by two attributes, partitioned into two disjoint groups, red and blue. Fair diversity maximization returns a subset of size 10 that maximizes diversity in terms of the attributes and contains an equal number (i.e., k_i = 5) of elements from both groups.

Furthermore, since many applications of diversity maximization are in the realm of massive data analysis, it is essential to design efficient algorithms for processing large-scale datasets. The (insert-only) streaming and sliding-window models are well-recognized frameworks for big data processing. In the streaming model, an algorithm is only permitted to process each element in the dataset sequentially in one pass, is allowed to take time and space that are sublinear in or even independent of the dataset size, and is required to provide solutions of comparable quality to those returned by offline algorithms. In the sliding-window model, the computation is further restricted to the latest w elements in the stream, and an algorithm is required to find good solutions in time and space sublinear w.r.t. the window size. However, the only known algorithms [17,20] for fair max–min diversity maximization are designed for the offline setting and are very inefficient in the streaming and sliding-window models.

  • Our Contributions: In this paper, we propose novel streaming and sliding-window algorithms for the max–min diversity maximization problem with fairness constraints. Our main contributions are summarized as follows:

  • We formally define the problem of fair max–min diversity maximization (FDM) in metric spaces. Then, we describe the existing streaming and sliding-window algorithms for (unconstrained) max–min diversity maximization [14]. In particular, we improve the approximation ratio of the existing streaming algorithm from (1−ε)/5 to (1−ε)/2 for any parameter ε ∈ (0,1) by refining the analysis of [14].

  • We propose two novel streaming algorithms for FDM. Our first algorithm, called SFDM1, is (1−ε)/4-approximate for FDM when there are two groups in the dataset. It takes O((k log Δ)/ε) time per element during stream processing, where Δ is the ratio of the maximum and minimum distances between any pair of elements, spends O((k² log Δ)/ε) time for post-processing, and stores O((k log Δ)/ε) elements in memory. Our second algorithm, called SFDM2, is (1−ε)/(3m+2)-approximate for FDM with an arbitrary number m of groups. SFDM2 also takes O((k log Δ)/ε) time per element during stream processing but requires a longer O((k²m log Δ)/ε · (m + log² k)) time for post-processing and stores O((km log Δ)/ε) elements in memory.

  • We further extend our two streaming algorithms to the sliding-window model. The extended SWFDM1 and SWFDM2 algorithms achieve approximation factors of Θ(1) and Θ(1/m) for FDM with m = 2 and an arbitrary m, respectively, when any Θ(1)-approximation algorithm for unconstrained max–min diversity maximization is used for post-processing. Additionally, their time and space complexities increase by a factor of O((log Δ)/ε) compared with SFDM1 and SFDM2, respectively.

  • Finally, we evaluate the performance of our proposed algorithms against the state-of-the-art algorithms on several real-world and synthetic datasets. The results demonstrate that our algorithms provide solutions of comparable quality for FDM to those returned by the state-of-the-art algorithms while running several orders of magnitude faster in the streaming and sliding-window settings.

A preliminary version of this paper was published in [28]. In this extended version, we make the following novel contributions with respect to [28]: (1) We propose two novel algorithms for FDM in the sliding-window model along with the implementation of an existing algorithm for unconstrained max–min diversity maximization in the sliding-window model [14]. Moreover, we analyze the approximation factors and complexities of the two algorithms for fair sliding-window diversity maximization; (2) We conduct more comprehensive examinations of our streaming algorithms by implementing and comparing them with a new offline baseline called FairGreedyFlow [20], which achieves a better approximation factor than previous offline algorithms. The additional results further confirm the superior performance of our streaming algorithms; (3) We conduct new experiments for FDM in the sliding-window setting to evaluate the performance of our sliding-window algorithms compared with the existing offline algorithms. The new experimental results validate their efficiency, effectiveness, and scalability.

  • Paper Organization: The rest of this paper is organized as follows. The related work is reviewed in Section 2. In Section 3, we introduce the basic concepts and formally define the FDM problem. In Section 4, we first propose our streaming algorithms for FDM. In Section 5, we further design our sliding-window algorithms for FDM. Our experimental setup and results are described in Section 6. Finally, we conclude the paper in Section 7.

2. Related Work

Diversity maximization has been extensively studied over the last two decades. Existing studies mainly focus on two popular objectives—i.e., max–sum dispersion [11,12,13,14,15,16,29,30,31] and max–min dispersion [12,14,16,17,18,20,31], and their variants [12,32].

An early study [33] proved that both max–sum and max–min diversity maximization problems are NP-hard even in metric spaces. The classic approaches to both problems are the greedy algorithms [34,35], which achieve the best possible approximation ratio of 1/2 unless P = NP. Indyk et al. [12] proposed composable coreset-based approximation algorithms for diversity maximization. Aghamolaei et al. [31] improved the approximation ratios of [12]. Ceccarello et al. [16] proposed coreset-based approximation algorithms for diversity maximization in MapReduce and streaming settings where the metric space has a bounded doubling dimension. Borassi et al. [14] proposed sliding-window algorithms for diversity maximization. Epasto et al. [36] further proposed improved sliding-window algorithms for diversity maximization specific to Euclidean spaces. Drosou and Pitoura [18] studied max–min diversity maximization on dynamic data. They proposed a (b−1)/(2b²)-approximation algorithm using a cover tree of base b. Bauckhage et al. [15] proposed an adiabatic quantum computing solution for max–sum diversification. Zhang and Gionis [19] extended diversity maximization to clustered data. Nevertheless, all the above methods only consider diversity maximization problems without fairness constraints.

There have been several studies on diversity maximization under matroid constraints, of which the fairness constraints are special cases. Abbassi et al. [11] proposed a (1/2 − ε)-approximation local search algorithm for max–sum diversification under matroid constraints. Borodin et al. [9] proposed a (1/2 − ε)-approximation algorithm for maximizing the sum of a submodular function and a max–sum dispersion function. Cevallos et al. [30] extended the local search algorithm to distances of negative type. They also proposed a PTAS for this problem via convex programming [29]. Bhaskara et al. [37] proposed a 1/8-approximation algorithm for sum–min diversity maximization under matroid constraints using linear relaxations. Ceccarello et al. [13] proposed a coreset-based approach to matroid-constrained max–sum diversification in metric spaces of bounded doubling dimension. Nevertheless, the above methods are still not applicable to the max–min dispersion problem. The only known algorithms for fair max–min diversity maximization [17,20,38] are offline algorithms and are inefficient for data streams. We will compare our proposed algorithms with them, both theoretically and empirically. To the best of our knowledge, there has not been any previous streaming or sliding-window algorithm for fair max–min diversity maximization.

In addition to diversity maximization, fairness has also been considered in many other data summarization problems, such as k-center [21,22,23], determinantal point processes [8], coresets for k-means clustering [24,25], and submodular maximization [26,27]. However, since their optimization objectives differ from diversity maximization, the proposed algorithms for their fair variants cannot be directly used for our problem.

3. Preliminaries

In this section, we introduce the basic concepts and formally define the fair max–min diversity maximization problem.

Let X be a set of n elements from a metric space with a distance function d(·,·) capturing the dissimilarities among elements. Recall that d(·,·) is nonnegative, symmetric, and satisfies the triangle inequality, i.e., d(x,y) + d(y,z) ≥ d(x,z) for any x, y, z ∈ X. Note that all the algorithms and analyses in this paper are general for any distance metric. We further generalize the notion of distance to that between an element x and a set S as the distance between x and its nearest neighbor in S, i.e., d(x,S) = min_{y∈S} d(x,y).

Our focus in this paper is to find a small subset of the most diverse elements from X. Given a subset S ⊆ X, its diversity div(S) is defined as the minimum of the pairwise distances between any two distinct elements in S, i.e., div(S) = min_{x,y∈S, x≠y} d(x,y). The unconstrained version of diversity maximization (DM) asks for a subset S ⊆ X of k elements maximizing div(S), i.e., S* = argmax_{S⊆X: |S|=k} div(S). We use OPT = div(S*) to denote the diversity of the optimal solution S* for DM. This problem has been proven to be NP-complete [33], and no polynomial-time algorithm can achieve an approximation factor better than 1/2 unless P = NP. One approach to DM in the offline setting is the 1/2-approximation greedy algorithm [34,39], known as GMM.
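For concreteness, the following is a minimal Python sketch of GMM; the paper does not prescribe an implementation, so the names and data layout are our own:

```python
def gmm(X, k, dist):
    """Greedy 1/2-approximation for max-min diversity maximization:
    repeatedly pick the element farthest from the current solution."""
    S = [X[0]]                          # start from an arbitrary element
    d = [dist(x, S[0]) for x in X]      # d[j] = d(X[j], S)
    for _ in range(k - 1):
        j = max(range(len(X)), key=lambda i: d[i])   # farthest element
        S.append(X[j])
        d = [min(d[i], dist(X[i], X[j])) for i in range(len(X))]
    return S
```

GMM will also reappear later as the offline subroutine ALG used by the sliding-window algorithms.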

We introduce fairness into diversity maximization when X is composed of several demographic groups defined by a certain sensitive attribute, e.g., sex or race. Formally, suppose that X is divided into m disjoint groups indexed by {1, …, m} ([m] for short) and a function c: X → [m] maps each element x ∈ X to its group. Let X_i = {x ∈ X : c(x) = i} be the subset of elements from group i in X. Obviously, we have ∪_{i=1}^m X_i = X and X_i ∩ X_j = ∅ for any i ≠ j. The fairness constraint assigns a positive integer k_i to each of the m groups and restricts the number of elements from group i in the solution to k_i. We assume that Σ_{i=1}^m k_i = k. The fair max–min diversity maximization problem is defined as:

Definition 1

(FDM). Given a set X of n elements with X = ∪_{i=1}^m X_i and m size constraints k_1, …, k_m ∈ Z⁺, find a subset S that contains k_i elements from each X_i and maximizes div(S), i.e., S_f* = argmax_{S⊆X: |S∩X_i|=k_i, ∀i∈[m]} div(S).

We use OPT_f = div(S_f*) to denote the diversity of the optimal solution S_f* for FDM. Since DM is the special case of FDM with m = 1, FDM is also NP-hard, and no polynomial-time algorithm can approximate it within a factor better than 1/2. In addition, our FDM problem is closely related to the concept of a matroid [40] in combinatorics. Given a ground set V, a matroid is a pair M = (V, I), where I is a family of subsets of V (called independent sets) with the following properties: (i) ∅ ∈ I; (ii) for each A ⊆ B ⊆ V, if B ∈ I then A ∈ I (hereditary); and (iii) if A ∈ I, B ∈ I, and |A| > |B|, then there exists x ∈ A ∖ B such that B ∪ {x} ∈ I (augmentation). An independent set is maximal if it is not a proper subset of any other independent set. A basic property of M is that all its maximal independent sets have the same size, called the matroid's rank. As is easy to verify, our fairness constraint is a case of rank-k partition matroids, where the ground set is partitioned into disjoint groups and the independent sets are exactly the sets in which, for each group, the number of elements from this group is, at most, the group capacity. Our algorithms for general m in Section 4 and Section 5 will be built on matroids.
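As a small illustration of this connection, the following sketch (our own naming) tests independence in the rank-k partition matroid induced by the fairness constraint:

```python
from collections import Counter

def is_independent(S, c, caps):
    """True iff |S ∩ X_i| <= k_i for every group i, where c(x) gives
    the group of element x and caps[i] = k_i is the group capacity."""
    counts = Counter(c(x) for x in S)
    return all(counts[i] <= cap for i, cap in caps.items())
```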

In this paper, we first consider FDM in the streaming setting, where the elements in X arrive one at a time. Here, we use t(x) to denote the time when an element x is observed and X(T) = {x ∈ X : t(x) ≤ T} to denote the subset of elements observed from X until time T. A streaming algorithm should process each element sequentially in one pass using limited space (typically independent of n) and return a valid approximate solution S (if it exists) for FDM on X(T) at any time T. We further study FDM in the sliding-window setting, where the window W(T) always contains the last w elements observed from X until time T, i.e., W(T) = {x ∈ X : T − w + 1 ≤ t(x) ≤ T}. A sliding-window algorithm should provide a valid approximate solution S (if it exists) for FDM on W(T) at any time T.

4. Streaming Algorithms

As shown in Section 3, FDM is NP-hard. Thus, we focus on efficient approximation algorithms for FDM. In this section, we first describe the existing algorithm for unconstrained diversity maximization in the streaming model on which our streaming algorithms are built. We then propose a (1−ε)/4-approximation streaming algorithm for FDM in the special case where there are only two groups in the dataset. Finally, we propose a (1−ε)/(3m+2)-approximation streaming algorithm for FDM on a dataset with an arbitrary number m of groups.

4.1. (Unconstrained) Streaming Algorithm

We first present the streaming algorithm of [14] for (unconstrained) diversity maximization in Algorithm 1. Let d_min = min_{x,y∈X, x≠y} d(x,y), d_max = max_{x,y∈X, x≠y} d(x,y), and Δ = d_max/d_min. Obviously, it always holds that OPT ∈ [d_min, d_max]. First, the algorithm maintains a sequence U of geometrically spaced values for guessing OPT within a factor of 1−ε and initializes an empty solution S_μ for each μ ∈ U before processing the stream (Lines 1 and 2). Then, for each x ∈ X and each μ ∈ U, if S_μ contains fewer than k elements and the distance between x and S_μ is at least μ, it adds x to S_μ (Lines 3–6). After processing all elements in X, the candidate solution that contains k elements and maximizes the diversity is returned as the solution S for DM (Line 7). Algorithm 1 was proven to be a (1−ε)/5-approximation algorithm for max–min diversity maximization in [14]. In Theorem 1, its approximation ratio is improved to (1−ε)/2 by refining the analysis of [14].

Algorithm 1 SDM

Input: Stream X, distance metric d(·,·), parameter ε ∈ (0,1), solution size k ∈ Z⁺
Output: A set S ⊆ X with |S| = k
1: U ← {d_min·(1−ε)^(−j) : j ∈ Z_{≥0} ∧ d_min·(1−ε)^(−j) ≤ d_max}
2: Initialize S_μ ← ∅ for each μ ∈ U
3: for all x ∈ X do
4:     for all μ ∈ U do
5:         if |S_μ| < k and d(x, S_μ) ≥ μ then
6:             S_μ ← S_μ ∪ {x}
7: return S ← argmax_{μ ∈ U: |S_μ| = k} div(S_μ)
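A minimal Python sketch of Algorithm 1, under our own naming and assuming that the metric d and the bounds d_min and d_max are given, could look as follows:

```python
def sdm(stream, d, k, eps, d_min, d_max):
    """One-pass sketch of Algorithm 1 (SDM)."""
    U, mu = [], d_min                  # Line 1: geometric guesses of OPT
    while mu <= d_max:
        U.append(mu)
        mu /= 1.0 - eps
    cand = {mu: [] for mu in U}        # Line 2: one candidate per guess
    for x in stream:                   # Lines 3-6: threshold insertion
        for mu, S in cand.items():
            if len(S) < k and all(d(x, y) >= mu for y in S):
                S.append(x)
    # Line 7: return the most diverse candidate among the full ones.
    div = lambda S: min(d(a, b) for i, a in enumerate(S) for b in S[i + 1:])
    full = [S for S in cand.values() if len(S) == k]
    return max(full, key=div, default=None)
```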

Theorem 1.

Algorithm 1 is a (1−ε)/2-approximation algorithm for max–min diversity maximization.

Proof. 

For each μ ∈ U, there are two cases for S_μ after processing all elements in X: (1) if |S_μ| = k, the condition of Line 5 guarantees that div(S_μ) ≥ μ; (2) if |S_μ| < k, it holds that d(x, S_μ) < μ for every x ∈ X ∖ S_μ, since the only reason x was not added to S_μ while |S_μ| < k is that d(x, S_μ) < μ. Let us consider a candidate solution S_μ with |S_μ| < k. Suppose that S* = {s_1*, …, s_k*} is the optimal solution for DM on X. We define a function f: S* → S_μ that maps each element in S* to its nearest neighbor in S_μ. As shown above, d(s*, f(s*)) < μ for each s* ∈ S*. Because |S_μ| < k and |S*| = k, two distinct elements s_a*, s_b* ∈ S* with f(s_a*) = f(s_b*) must exist. For such s_a*, s_b*, we have

d(s_a*, s_b*) ≤ d(s_a*, f(s_a*)) + d(s_b*, f(s_b*)) < 2μ

according to the triangle inequality. Thus, OPT = div(S*) ≤ d(s_a*, s_b*) < 2μ if |S_μ| < k. Let μ′ be the smallest μ ∈ U with |S_μ| < k. From the above results, we obtain div(S*) < 2μ′. Additionally, for μ = (1−ε)μ′, we must have |S_μ| = k and div(S_μ) ≥ μ. Therefore, we have div(S) ≥ μ = (1−ε)μ′ > (1−ε)/2 · div(S*).    □

In terms of complexity, Algorithm 1 stores O((k log Δ)/ε) elements and takes O((k log Δ)/ε) time per element, since it makes O((log Δ)/ε) guesses for OPT, keeps, at most, k elements in each candidate, and requires, at most, k distance computations to decide whether to add an element to a candidate.

4.2. Fair Streaming Algorithm for m=2

The procedure of our streaming algorithm for the case of m = 2, called SFDM1, is described in Algorithm 2 and illustrated in Figure 3. In general, the algorithm runs in two phases: stream processing and post-processing. In the stream processing (Lines 1–6), for each guess μ ∈ U of OPT_f, it utilizes Algorithm 1 to keep a group-blind candidate S_μ with size constraint k and two group-specific candidates S_{μ,1} and S_{μ,2} with size constraints k_1 and k_2 for X_1 and X_2, respectively. The only difference from Algorithm 1 is that the elements are filtered by group to maintain S_{μ,1} and S_{μ,2}. After processing all elements of X in one pass, it post-processes the group-blind candidates to make them satisfy the fairness constraint (Lines 7–15). The post-processing is only performed on a subset U′ of U, where S_μ contains k elements and S_{μ,i} contains k_i elements for each group i ∈ {1,2}. For each μ ∈ U′, S_μ either already satisfies the fairness constraint or has one over-filled group i_o and another under-filled group i_u. If S_μ is not yet a fair solution, it is balanced for fairness by first adding k_{i_u} − k_{i_u}′ elements, where k_{i_u}′ = |S_μ ∩ X_{i_u}|, from S_{μ,i_u} to S_μ, and then removing the same number of elements from S_μ ∩ X_{i_o}. The elements to be added and removed are selected greedily, as in GMM [39], to minimize the loss in diversity: the element in S_{μ,i_u} that is furthest from S_μ ∩ X_{i_u} is picked for each insertion, and the element in S_μ ∩ X_{i_o} that is closest to S_μ ∩ X_{i_u} is picked for each deletion. Finally, the fair candidate with the maximum diversity after post-processing is returned as the final solution for FDM (Line 16). Next, we theoretically analyze the approximation ratio and complexity of SFDM1.

Algorithm 2 SFDM1

Input: Stream X = X_1 ∪ X_2, distance metric d(·,·), parameter ε ∈ (0,1), size constraints k_1, k_2 ∈ Z⁺ (k = k_1 + k_2)
Output: A set S ⊆ X s.t. |S ∩ X_i| = k_i for i ∈ {1,2}
Stream processing
1: U ← {d_min·(1−ε)^(−j) : j ∈ Z_{≥0} ∧ d_min·(1−ε)^(−j) ≤ d_max}
2: Initialize S_μ, S_{μ,i} ← ∅ for every μ ∈ U and i ∈ {1,2}
3: for all x ∈ X do
4:     Run Lines 3–6 of Algorithm 1 to update S_μ w.r.t. x
5:     if c(x) = i then
6:         Run Lines 3–6 of Algorithm 1 to update S_{μ,i} w.r.t. x with size constraint k_i
Post-processing
7: U′ ← {μ ∈ U : |S_μ| = k ∧ |S_{μ,i}| = k_i, ∀i ∈ {1,2}}
8: for all μ ∈ U′ do
9:     if |S_μ ∩ X_i| < k_i for some i ∈ {1,2} then
10:         while |S_μ ∩ X_i| < k_i do
11:             x⁺ ← argmax_{x ∈ S_{μ,i}} d(x, S_μ ∩ X_i)
12:             S_μ ← S_μ ∪ {x⁺}
13:         while |S_μ| > k do
14:             x⁻ ← argmin_{x ∈ S_μ ∖ X_i} d(x, S_μ ∩ X_i)
15:             S_μ ← S_μ ∖ {x⁻}
16: return S ← argmax_{μ ∈ U′} div(S_μ)
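For illustration, here is a minimal Python sketch of the balancing step in Lines 9–15 of Algorithm 2 (our own naming; the data structures are simplified):

```python
def balance(S, S_under, k_under, k, group_under, c, d):
    """Swap-based post-processing: S is the group-blind candidate and
    S_under the group-specific candidate of the under-filled group."""
    dist = lambda x, T: min(d(x, y) for y in T) if T else float("inf")
    in_u = [x for x in S if c(x) == group_under]
    while len(in_u) < k_under:          # Lines 10-12: greedy insertions
        x_add = max((x for x in S_under if x not in S),
                    key=lambda x: dist(x, in_u))
        S.append(x_add)
        in_u.append(x_add)
    while len(S) > k:                   # Lines 13-15: greedy deletions
        x_del = min((x for x in S if c(x) != group_under),
                    key=lambda x: dist(x, in_u))
        S.remove(x_del)
    return S
```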

Figure 3. Illustration of the SFDM1 algorithm. During stream processing, one group-blind and two group-specific candidates are maintained for each guess μ of OPT_f. Then, a subset of group-blind candidates is selected for post-processing by adding elements from the under-filled group before deleting elements from the over-filled one.

  • Theoretical Analysis: We prove that SFDM1 achieves an approximation ratio of (1−ε)/4 for FDM, where ε ∈ (0,1), in Theorem 2. The proof is based on (i) the existence of μ ∈ U′ such that μ ≥ (1−ε)/2 · OPT_f (Lemma 1) and (ii) the fact that div(S_μ) ≥ μ/2 for each μ ∈ U′ after post-processing (Lemma 2). Then, we analyze the complexity of SFDM1 in Theorem 3.

Lemma 1.

Let μ′ be the largest μ ∈ U′. It holds that μ′ ≥ (1−ε)/2 · OPT_f, where OPT_f is the optimal diversity of FDM on X.

Proof. 

First of all, we have OPT_f ≤ OPT, where OPT is the optimal diversity of unconstrained DM with k = k_1 + k_2 on X, since any valid solution for FDM is also a valid solution for DM. Moreover, it holds that OPT_f ≤ OPT_{k_i}, where OPT_{k_i} is the optimal diversity of unconstrained DM with size constraint k_i on X_i, for both i ∈ {1,2}, because the optimal solution must contain k_i elements from X_i and div(·) is a monotonically non-increasing function, i.e., div(S ∪ {x}) ≤ div(S) for any S ⊆ X and x ∈ X ∖ S. Therefore, OPT_f = div(S_f*) ≤ div(S_f* ∩ X_i) ≤ OPT_{k_i}.

Then, according to the results of Theorem 1, we have OPT < 2μ if |S_μ| < k and OPT_{k_i} < 2μ if |S_{μ,i}| < k_i for each i ∈ {1,2}. Note that μ′ is the largest μ ∈ U such that |S_μ| = k, |S_{μ,1}| = k_1, and |S_{μ,2}| = k_2 after stream processing. For μ″ = μ′/(1−ε) ∈ U, we have either |S_{μ″}| < k or |S_{μ″,i}| < k_i for some i ∈ {1,2}. Therefore, it holds that OPT_f < 2μ″ = 2μ′/(1−ε), i.e., μ′ > (1−ε)/2 · OPT_f, and we conclude the proof.    □

Lemma 2.

For each μ ∈ U′, the candidate solution S_μ must satisfy div(S_μ) ≥ μ/2 and |S_μ ∩ X_i| = k_i for both i ∈ {1,2} after post-processing.

Proof. 

The candidate S_μ before post-processing has exactly k = k_1 + k_2 elements but may not contain k_1 elements from X_1 and k_2 elements from X_2. If S_μ has exactly k_1 elements from X_1 and k_2 elements from X_2, and thus the post-processing is skipped, we have div(S_μ) ≥ μ according to Theorem 1. Otherwise, assuming that |S_μ ∩ X_1| = k_1′ < k_1, we will add k_1 − k_1′ elements from S_{μ,1} to S_μ and remove k_1 − k_1′ elements from S_μ ∩ X_2 to ensure the fairness constraint. In Line 11, all the k_1 elements in S_{μ,1} can be selected for insertion. Since the minimum distance between any pair of elements in S_{μ,1} is at least μ, for each y ∈ S_μ ∩ X_1 there is, at most, one element x ∈ S_{μ,1} such that d(x,y) < μ/2. This means that there are at least k_1 − k_1′ elements in S_{μ,1} whose distances to all the existing elements in S_μ ∩ X_1 are at least μ/2. Accordingly, after adding k_1 − k_1′ elements from S_{μ,1} to S_μ greedily, it still holds that d(x,y) ≥ μ/2 for any x, y ∈ S_μ ∩ X_1. In Line 14, for each element x ∈ S_μ ∩ X_2, there is, at most, one (newly added) element y ∈ S_μ ∩ X_1 such that d(x,y) < μ/2. Meanwhile, it is guaranteed that y is the nearest neighbor of x in S_μ in this case. Therefore, every x ∈ S_μ ∩ X_2 with d(x, S_μ ∩ X_1) < μ/2 is removed in Line 14, since there are, at most, k_1 − k_1′ such elements and the one with the smallest d(x, S_μ ∩ X_1) is removed at each step. Therefore, S_μ contains k_1 elements from X_1 and k_2 elements from X_2 and div(S_μ) ≥ μ/2 after post-processing.    □

Theorem 2.

SFDM1 returns a (1−ε)/4-approximate solution for FDM.

Proof. 

According to the results of Lemmas 1 and 2, we have div(S) ≥ div(S_{μ′}) ≥ μ′/2 ≥ (1−ε)/4 · OPT_f, where μ′ = max_{μ ∈ U′} μ.    □

Theorem 3.

SFDM1 stores O((k log Δ)/ε) elements in memory, takes O((k log Δ)/ε) time per element for stream processing, and spends O((k² log Δ)/ε) time for post-processing.

Proof. 

SFDM1 keeps three candidates for each μ ∈ U and O(k) elements in each candidate. Hence, the total number of stored elements is O((k log Δ)/ε), since |U| = O((log Δ)/ε). The stream processing performs, at most, O((k log Δ)/ε) distance computations per element. Finally, for each μ ∈ U′ in the post-processing, at most k_i(k_i − k_i′) distance computations are performed to select the elements in S_{μ,i} to be added to S_μ, and at most k(k_i − k_i′) distance computations are needed to find the elements to be removed. Thus, the time complexity of post-processing is O((k² log Δ)/ε), as |U′| = O((log Δ)/ε).    □

  • Comparison with Prior Art: The idea of finding a solution and balancing it for fairness in SFDM1 has also been used in FairSwap [17]. However, FairSwap only works in the offline setting, as it keeps the dataset in memory and requires random access for computation, whereas SFDM1 works in the streaming setting, scanning the dataset in one pass and using only the elements in the candidates for post-processing. Compared with FairSwap, SFDM1 reduces the space complexity from O(n) to O((k log Δ)/ε) and the time complexity from O(nk) to O((k² log Δ)/ε) at the expense of lowering the approximation ratio by a factor of 1−ε.

4.3. Fair Streaming Algorithm for General m

The detailed procedure of our streaming algorithm for an arbitrary m ≥ 2, called SFDM2, is presented in Algorithm 3. Similar to SFDM1, it also has two phases: stream processing and post-processing. In the stream processing (Lines 1–7), it utilizes Algorithm 1 to keep a group-blind candidate S_μ and m group-specific candidates S_{μ,1}, …, S_{μ,m} for the m groups. The difference from SFDM1 is that the size constraint of the group-specific candidate for each group i is k instead of k_i. Then, after processing all elements in X, a post-processing scheme is required to ensure the fairness of the candidates. Nevertheless, the post-processing procedure is totally different from that of SFDM1, since the swap-based balancing strategy cannot guarantee the validity of the solution with any theoretical bound. Like SFDM1, the post-processing is performed on a subset U′ of U, where S_μ has k elements and S_{μ,i} has at least k_i elements for each group i (Line 8). For each μ ∈ U′, it initializes a subset S_μ′ of S_μ (Line 10): for an over-filled group i, i.e., |S_μ ∩ X_i| > k_i, S_μ′ contains k_i arbitrary elements from S_μ ∩ X_i; for an under-filled or exactly filled group i, i.e., |S_μ ∩ X_i| ≤ k_i, S_μ′ contains all k_i′ = |S_μ ∩ X_i| elements of S_μ ∩ X_i. Next, new elements from under-filled groups should be added to S_μ′ so that S_μ′ becomes a fair solution. To find the elements to be added, the set S_all of elements in all candidates is divided into a set C of clusters, which guarantees that d(x,y) ≥ μ/(m+1) for any x ∈ C_a and y ∈ C_b (Lines 12–15), where C_a and C_b are two different clusters in C. Then, S_μ′ is limited to containing, at most, one element from each cluster after new elements are added, so that div(S_μ′) ≥ μ/(m+1). Meanwhile, S_μ′ should still satisfy the fairness constraint. To meet both requirements, the problem of adding new elements to S_μ′ is formulated as an instance of matroid intersection [41,42,43], as will be discussed subsequently (Lines 16 and 17). Finally, it returns the S_μ′ containing k elements with maximum diversity after post-processing as the final solution for FDM (Line 18). An illustration of the post-processing procedure of SFDM2 is given in Figure 4.

Algorithm 3 SFDM2

Input: Stream X = ∪_{i=1}^m X_i, distance metric d(·,·), parameter ε ∈ (0,1), size constraints k_1, …, k_m ∈ Z⁺ (k = Σ_{i=1}^m k_i)
Output: A set S ⊆ X s.t. |S ∩ X_i| = k_i, ∀i ∈ [m]
Stream processing
1: U ← {d_min·(1−ε)^(−j) : j ∈ Z_{≥0} ∧ d_min·(1−ε)^(−j) ≤ d_max}
2: Initialize S_μ, S_{μ,i} ← ∅ for every μ ∈ U and i ∈ [m]
3: for all x ∈ X do
4:     for all μ ∈ U and i ∈ [m] do
5:         Run Lines 3–6 of Algorithm 1 to update S_μ w.r.t. x
6:         if c(x) = i then
7:             Run Lines 3–6 of Algorithm 1 to update S_{μ,i} w.r.t. x
Post-processing
8: U′ ← {μ ∈ U : |S_μ| = k ∧ |S_{μ,i}| ≥ k_i, ∀i ∈ [m]}
9: for all μ ∈ U′ do
10:     For each group i ∈ [m], pick min(k_i, |S_μ ∩ X_i|) elements arbitrarily from S_μ ∩ X_i to form S_μ′
11:     Let S_all ← (∪_{i=1}^m S_{μ,i}) ∪ S_μ and l ← |S_all|
12:     Create l clusters C = {C_1, …, C_l}, each of which contains one element of S_all
13:     while there exist C_a, C_b ∈ C s.t. d(x,y) < μ/(m+1) for some x ∈ C_a and y ∈ C_b do
14:         Merge C_a, C_b into a new cluster C′ = C_a ∪ C_b
15:         C ← (C ∖ {C_a, C_b}) ∪ {C′}
16:     Let M_1 = (S_all, I_1) and M_2 = (S_all, I_2) be two matroids, where S ∈ I_1 iff |S ∩ X_i| ≤ k_i, ∀i ∈ [m], and S ∈ I_2 iff |S ∩ C| ≤ 1, ∀C ∈ C
17:     Run Algorithm 4 to augment S_μ′ such that S_μ′ is a maximum cardinality set in I_1 ∩ I_2
18: return S ← argmax_{μ ∈ U′: |S_μ′| = k} div(S_μ′)
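For illustration, a minimal Python sketch of the cluster-merging step in Lines 12–15 (our own naming; a union-find structure would be more efficient, but this literal version mirrors the pseudocode) is:

```python
def build_clusters(S_all, d, mu, m):
    """Merge clusters until all inter-cluster distances are >= mu/(m+1)."""
    thresh = mu / (m + 1)
    clusters = [[x] for x in S_all]     # Line 12: one singleton per element
    merged = True
    while merged:                       # Lines 13-15: repeated merging
        merged = False
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                if any(d(x, y) < thresh
                       for x in clusters[a] for y in clusters[b]):
                    clusters[a] += clusters[b]
                    del clusters[b]
                    merged = True
                    break
            if merged:
                break
    return clusters
```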

Figure 4. Illustration of post-processing in SFDM2. For each μ ∈ U′, an initial S_μ′ is first extracted from S_μ by removing the elements from over-filled groups. Then, the elements in all candidates are divided into clusters. The final S_μ′ is augmented from the initial solution by adding new elements from under-filled groups based on matroid intersection.

  • Matroid Intersection: Next, we describe how to use matroid intersection for solution augmentation in SFDM2. We define the first rank-k matroid M_1 = (V, I_1) based on the fairness constraint, where the ground set V is S_all and S ∈ I_1 iff |S ∩ X_i| ≤ k_i, ∀i ∈ [m]. Intuitively, a set S is fair iff it is a maximal independent set in I_1. Moreover, we define the second rank-l (l = |C|) matroid M_2 = (V, I_2) on the set C of clusters, where the ground set V is also S_all and S ∈ I_2 iff |S ∩ C| ≤ 1, ∀C ∈ C. Accordingly, the problem of adding new elements to S_μ′ to ensure fairness is an instance of the matroid intersection problem, which aims to find a maximum cardinality set S ∈ I_1 ∩ I_2 for M_1 = (S_all, I_1) and M_2 = (S_all, I_2). Here, we adopt Cunningham's algorithm [41], a well-known solution to the matroid intersection problem based on the augmentation graph in Definition 2.

Definition 2

(Augmentation Graph [41]). Given two matroids M_1 = (V, I_1) and M_2 = (V, I_2), a set S ⊆ V such that S ∈ I_1 ∩ I_2, and two sets V_1 = {x ∈ V ∖ S : S ∪ {x} ∈ I_1} and V_2 = {x ∈ V ∖ S : S ∪ {x} ∈ I_2}, an augmentation graph is a digraph G = (V ∪ {a,b}, E), where a, b ∉ V. There is an edge (a,x) ∈ E for each x ∈ V_1. There is an edge (x,b) ∈ E for each x ∈ V_2. There is an edge (y,x) ∈ E for each x ∈ V ∖ S and y ∈ S such that S ∪ {x} ∉ I_1 and (S ∪ {x}) ∖ {y} ∈ I_1. There is an edge (x,y) ∈ E for each x ∈ V ∖ S and y ∈ S such that S ∪ {x} ∉ I_2 and (S ∪ {x}) ∖ {y} ∈ I_2.

Specifically, Cunningham's algorithm [41] is initialized with S = ∅ (or any S ∈ I_1 ∩ I_2). At each step, it builds an augmentation graph G for M_1, M_2, and S. If there is no directed path from a to b in G, then S is already a maximum cardinality set. Otherwise, it finds the shortest path P* from a to b in G and augments S according to P*: each x ∈ V ∖ S on P* (except a and b) is added to S, and each x ∈ S on P* is removed from S. We adapt Cunningham's algorithm [41] for our problem, as shown in Algorithm 4. Our algorithm is initialized with S_μ′ instead of ∅. In addition, to reduce the cost of building G and to maximize diversity, it first adds the elements of V_1 ∩ V_2 greedily to S until V_1 ∩ V_2 = ∅. This is because a shortest path P* = ⟨a, x, b⟩ exists in G for any x ∈ V_1 ∩ V_2, which is easy to verify from Definition 2. Finally, if |S| < k after the above procedures, the standard Cunningham's algorithm is used to augment S to ensure its maximality.

Algorithm 4 Matroid Intersection

Input: Two matroids M_1 = (V, I_1), M_2 = (V, I_2), distance metric d(·,·), initial set S_0 ⊆ V
Output: A maximum cardinality set S ⊆ V in I_1 ∩ I_2
1: Initialize S ← S_0, V_1 ← {x ∈ V ∖ S : S ∪ {x} ∈ I_1}, and V_2 ← {x ∈ V ∖ S : S ∪ {x} ∈ I_2}
2: while V_1 ∩ V_2 ≠ ∅ do
3:     x* ← argmax_{x ∈ V_1 ∩ V_2} d(x, S) and S ← S ∪ {x*}
4:     for all x ∈ V_1 do
5:         V_1 ← V_1 ∖ {x} if S ∪ {x} ∉ I_1
6:     for all x ∈ V_2 do
7:         V_2 ← V_2 ∖ {x} if S ∪ {x} ∉ I_2
8: Build an augmentation graph G for S
9: while there is a directed path from a to b in G do
10:     Let P* be a shortest path from a to b in G
11:     for all x ∈ P* ∖ {a, b} do
12:         if x ∉ S then S ← S ∪ {x}
13:         else S ← S ∖ {x}
14:     Rebuild G for the updated S
15: return S
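As an illustration, the following is a minimal Python sketch of the greedy phase in Lines 2–7 (our own naming; the augmentation-path phase in Lines 8–14 is omitted for brevity):

```python
def greedy_phase(V, S, indep1, indep2, d):
    """Greedily add elements addable to both matroids, farthest first.
    S is the current independent set (a Python set); indep1/indep2 are
    oracles testing membership in I_1 and I_2."""
    dist = lambda x: min((d(x, y) for y in S), default=float("inf"))
    V1 = {x for x in V if x not in S and indep1(S | {x})}
    V2 = {x for x in V if x not in S and indep2(S | {x})}
    while V1 & V2:
        x_star = max(V1 & V2, key=dist)   # Line 3: farthest from S
        S = S | {x_star}
        V1 = {x for x in V1 if x != x_star and indep1(S | {x})}  # Lines 4-5
        V2 = {x for x in V2 if x != x_star and indep2(S | {x})}  # Lines 6-7
    return S
```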

  • Theoretical Analysis: We prove that SFDM2 achieves an approximation ratio of (1−ε)/(3m+2) for FDM. The high-level idea of the proof is to connect the clustering procedure in post-processing with the notion of matroids and then to utilize the geometric properties of the clusters and the theoretical results of matroid intersection for approximation. Next, we first show that the set C of clusters has several important properties (Lemma 3). Then, we prove that Algorithm 4 returns a fair solution for a specific μ based on the properties of C (Lemma 4). Finally, we analyze the time and space complexities of SFDM2 in Theorem 5.

Lemma 3.

The set C of clusters has the following properties: (i) for any x ∈ C_a and y ∈ C_b (a ≠ b), d(x,y) ≥ μ/(m+1); (ii) each cluster C contains, at most, one element from S_μ and, at most, one element from S_{μ,i} for each i ∈ [m]; (iii) for any x, y ∈ C, d(x,y) < m/(m+1) · μ.

Proof. 

First of all, Property (i) holds by Lines 12–15 of Algorithm 3, since all pairs of clusters that do not satisfy it have been merged. Then, we prove Property (ii) by contradiction. Let us construct an undirected graph G = (V, E) for a cluster C ∈ C, where V is the set of elements in C and there exists an edge (x,y) ∈ E iff d(x,y) < μ/(m+1). Based on Algorithm 3, for any x ∈ C, there must exist some y ∈ C (x ≠ y) such that d(x,y) < μ/(m+1). Therefore, G is a connected graph. Suppose that C contains more than one element from S_μ or from S_{μ,i} for some i ∈ [m]. Let P_{x,y} = ⟨x, …, y⟩ be the shortest path in G between any two elements x and y that are both from S_μ or both from S_{μ,i}. Next, we show that the length of P_{x,y} is, at most, m+1. If the length of P_{x,y} were longer than m+1, there would be a sub-path P_{x′,y′} of P_{x,y} whose endpoints x′ and y′ are also both from S_μ or both from S_{μ,i}, which violates the fact that P_{x,y} is the shortest such path. Since the length of P_{x,y} is, at most, m+1, we have d(x,y) < (m+1) · μ/(m+1) = μ, which contradicts the fact that d(x,y) ≥ μ, as x and y are both from S_μ or both from S_{μ,i}. Finally, Property (iii) is a natural extension of Property (ii): since each cluster C contains, at most, one element from S_μ and, at most, one from S_{μ,i} for each i ∈ [m], C has, at most, m+1 elements. Therefore, for any two elements x, y ∈ C, the length of the path between them in G is, at most, m, and d(x,y) < m · μ/(m+1) = m/(m+1) · μ.    □

Lemma 4.

If OPT_f ≥ (3m+2)/(m+1) · μ, then Algorithm 4 returns a size-k subset S_μ′ such that S_μ′ ∈ I_1 ∩ I_2 and div(S_μ′) ≥ μ/(m+1).

Proof. 

First of all, the initial S_μ′ is a subset of S_μ. According to Property (ii) of Lemma 3, all elements of S_μ′ are in different clusters of C; since S_μ′ also contains, at most, k_i elements from each group by construction, S_μ′ ∈ I_1 ∩ I_2. The theoretical results in [41] guarantee that Algorithm 4 can find a size-k set in I_1 ∩ I_2 as long as one exists. Next, we show that such a set exists when OPT_f ≥ (3m+2)/(m+1) · μ. To verify this, we need to identify k_i clusters of C that contain at least one element from X_i for each i ∈ [m] and show that all k = Σ_{i=1}^m k_i clusters are distinct. Here, we consider two cases for each group i ∈ [m].

  • Case 1: For each i ∈ [m] such that k_i ≤ |S_{μ,i}| < k, we have d(x, S_{μ,i}) < μ for every x ∈ X_i. Given the optimal solution S_f*, we define a function f that maps each x* ∈ S_f* from such a group to its nearest neighbor in the corresponding S_{μ,i}. For two elements x_a*, x_b* ∈ S_f* from these groups, we have d(x_a*, f(x_a*)) < μ, d(x_b*, f(x_b*)) < μ, and d(x_a*, x_b*) ≥ OPT_f = div(S_f*). Therefore, d(f(x_a*), f(x_b*)) > OPT_f − 2μ. Since OPT_f ≥ (3m+2)/(m+1) · μ, we have d(f(x_a*), f(x_b*)) > (3m+2)/(m+1) · μ − 2μ = m/(m+1) · μ. According to Property (iii) of Lemma 3, it is guaranteed that f(x_a*) and f(x_b*) are in different clusters. By identifying the clusters that contain f(x*) for all such x* ∈ S_f*, we find k_i clusters for each group i ∈ [m] with k_i ≤ |S_{μ,i}| < k, and all the clusters found in this way are guaranteed to be distinct.

  • Case 2: For each i ∈ [m] such that |S_{μ,i}| = k, we can find k clusters, each containing one element from S_{μ,i}, based on Property (ii) of Lemma 3. For such a group i, even if k − k_i of these clusters have already been identified for all other groups, there are still at least k_i clusters available for selection. Therefore, we can always find k_i clusters that are distinct from all the clusters identified for any other group.

Considering both cases, we have proven the existence of a size-k set in I_1 ∩ I_2. Finally, for any set S ∈ I_2, we have div(S) ≥ μ/(m+1) according to Property (i) of Lemma 3.    □

Theorem 4.

SFDM2 is a (1−ε)/(3m+2)-approximation algorithm for FDM.

Proof. 

Let μ̂ be the smallest μ ∈ U that is not in U′. By the argument of Lemma 1, it holds that μ̂ ≥ OPT_f/2. Thus, there is some μ < μ̂ in U′ such that μ ∈ [(m+1)(1−ε)/(3m+2) · OPT_f, (m+1)/(3m+2) · OPT_f], as (m+1)/(3m+2) < 1/2 for any m ∈ Z⁺. For this μ, Lemma 4 guarantees that SFDM2 provides a fair solution S such that div(S) ≥ div(S_μ′) ≥ μ/(m+1) ≥ (1−ε)/(3m+2) · OPT_f.    □

Theorem 5.

SFDM2 keeps O((km log Δ)/ε) elements in memory, takes O((k log Δ)/ε) time per element during stream processing, and spends O((k²m log Δ)/ε · (m + log² k)) time for post-processing.

Proof. 

SFDM2 keeps m+1 candidates for each μ ∈ U and O(k) elements in each candidate, so the total number of elements stored by SFDM2 is O((km log Δ)/ε). Only two candidates are checked for each element during stream processing, and thus O((k log Δ)/ε) distance computations are needed per element. In the post-processing for each μ, we need O(k) time to obtain the initial solution, O(k²m²) time to cluster S_all, and O(k²m) time to augment the candidate using Lines 2–7 of Algorithm 4. The time complexity of Cunningham's algorithm is O(k²m log² k) according to [42,43]. In sum, the overall time complexity of post-processing is O((k²m log Δ)/ε · (m + log² k)).    □

  • Comparison with Prior Art: Existing methods have aimed to find a fair solution based on matroid intersection for fair k-center [21,22,44] and fair max–min diversity maximization [17]. SFDM2 adopts a similar method to FairFlow [17] to construct the clusters and matroids. However, FairFlow solves matroid intersection as a max-flow problem on a directed graph. Its solution is of poor quality in practice, particularly when m is large. Therefore, SFDM2 uses a different method for matroid intersection based on Cunningham’s algorithm, which initializes with a partial solution instead of an empty set for higher efficiency and adds elements greedily like GMM [39] for higher diversity. Hence, SFDM2 has a significantly higher solution quality than FairFlow in practice, though it has a slightly lower approximation ratio.

5. Sliding-Window Algorithms

In this section, we extend our streaming algorithms, i.e., SFDM1 and SFDM2, to the sliding-window model. In Section 5.1, we first present the existing sliding-window algorithm for (unconstrained) diversity maximization [14]. In Section 5.2, we propose our extended sliding-window algorithms for FDM based on the algorithms in Section 4 and Section 5.1.

5.1. (Unconstrained) Sliding-Window Algorithm

The unconstrained sliding-window algorithm is shown in Algorithm 5 and illustrated in Figure 5. First of all, it keeps two sequences Λ and U, both ranging from d_min to d_max, to guess the optimum OPT[W] of DM on the window W (Line 1). For each combination of λ ∈ Λ and μ ∈ U, it initializes two candidate solutions A_{λ,μ} and B_{λ,μ}, each of which is maintained by Algorithm 1 on one of two consecutive sub-sequences of X. Two sets A′_{λ,μ} and B′_{λ,μ}, which store the replacements of the elements in A_{λ,μ} and B_{λ,μ} in case they fall out of the sliding window, are also initialized as empty sets (Lines 2 and 3). Then, for each element x ∈ X, it adds x to each B_{λ,μ} using the same method as Algorithm 1. Once x is added to B_{λ,μ}, it is set as its own replacement in B′_{λ,μ} (Lines 7 and 8). Otherwise, it checks whether the distance between x and any existing element in B_{λ,μ} is less than μ and, if so, assigns x as the replacement of the nearest such element in B′_{λ,μ} (Lines 9 and 10). Similarly, it also checks whether x can replace any element in A_{λ,μ} and performs the assignment if so (Lines 11 and 12). After that, if the diversity of any candidate B_{λ,μ} with |B_{λ,μ}| = k exceeds λ, it removes x from each B_{λ,μ} and B′_{λ,μ}, sets them as A_{λ,μ} and A′_{λ,μ}, and then re-initializes new B_{λ,μ} and B′_{λ,μ} with x (Lines 13–16). We describe the post-processing procedure for the window W containing the last w elements in X, which can be easily extended to any window W(T) at time T, in Lines 17–23. It considers two cases for different values of λ and μ: (i) when A_{λ,μ} ⊆ W, it runs any algorithm ALG for (centralized) max–min diversity maximization on A_{λ,μ} ∪ B_{λ,μ} to find a size-k candidate solution S_{λ,μ} (Line 20); (ii) when only B_{λ,μ} ⊆ W, ALG is run on (W ∩ A′_{λ,μ}) ∪ B_{λ,μ}, i.e., the non-expired replacements from A′_{λ,μ} together with B_{λ,μ}, instead (Line 22). Finally, the best solution found after post-processing all candidates is returned as the solution S for the window W (Line 23).

Algorithm 5 SWDM

Input: Stream X, distance metric d(·,·), window size w ∈ Z⁺, parameter ε ∈ (0,1), solution size k ∈ Z⁺
Output: A set S ⊆ W with |S| = k
1: Λ, U ← {d_min·(1−ε)^(−j) : j ∈ Z_{≥0} ∧ d_min·(1−ε)^(−j) ≤ d_max}
2: for all λ ∈ Λ and μ ∈ U do
3:     Initialize A_{λ,μ}, A′_{λ,μ} ← ∅ and B_{λ,μ}, B′_{λ,μ} ← ∅
4: for all x ∈ X do
5:     for all λ ∈ Λ do
6:         for all μ ∈ U do
7:             if |B_{λ,μ}| < k and d(x, B_{λ,μ}) ≥ μ then
8:                 B_{λ,μ} ← B_{λ,μ} ∪ {x}, B′_{λ,μ}[x] ← x
9:             else if d(x, B_{λ,μ}) < μ then
10:                 y ← argmin_{y ∈ B_{λ,μ}} d(x, y), B′_{λ,μ}[y] ← x
11:             if A_{λ,μ} ≠ ∅ and d(x, A_{λ,μ}) < μ then
12:                 y ← argmin_{y ∈ A_{λ,μ}} d(x, y), A′_{λ,μ}[y] ← x
13:         if max_{μ ∈ U: |B_{λ,μ}| = k} div(B_{λ,μ}) > λ then
14:             Remove x from each B_{λ,μ}, B′_{λ,μ}
15:             A_{λ,μ}, A′_{λ,μ} ← B_{λ,μ}, B′_{λ,μ} for each μ ∈ U
16:             B_{λ,μ}, B′_{λ,μ} ← {x} for each μ ∈ U
Post-processing
17: W ← {x ∈ X : max{1, |X| − w + 1} ≤ t(x) ≤ |X|}
18: for all λ ∈ Λ and μ ∈ U do
19:     if A_{λ,μ} ⊆ W then
20:         S_{λ,μ} ← ALG(k, A_{λ,μ} ∪ B_{λ,μ})
21:     else if B_{λ,μ} ⊆ W then
22:         S_{λ,μ} ← ALG(k, (W ∩ A′_{λ,μ}) ∪ B_{λ,μ})
23: return S ← argmax_{λ ∈ Λ, μ ∈ U: |S_{λ,μ}| = k} div(S_{λ,μ})
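For illustration, a minimal Python sketch of the replacement-tracking update in Lines 7–12, for a single guess pair and under our own naming (elements must be hashable), is:

```python
def update(x, A, A_repl, B, B_repl, k, mu, d):
    """Process one arriving element x for one pair (lambda, mu); A_repl
    and B_repl map each kept element to its freshest replacement."""
    if len(B) < k and all(d(x, y) >= mu for y in B):
        B.append(x)
        B_repl[x] = x                   # Line 8: x is its own replacement
    elif B:
        y = min(B, key=lambda z: d(x, z))
        if d(x, y) < mu:
            B_repl[y] = x               # Lines 9-10: x replaces nearest y
    if A:
        y = min(A, key=lambda z: d(x, z))
        if d(x, y) < mu:
            A_repl[y] = x               # Lines 11-12: likewise for A
```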

Figure 5. Illustration of the framework of sliding-window algorithms. During stream processing, two candidate solutions A_{λ,μ} and B_{λ,μ}, along with their backups A′_{λ,μ} and B′_{λ,μ}, are maintained for each guess (λ, μ) of OPT[W]. Then, during post-processing, the elements in B_{λ,μ} and A_{λ,μ} (or the non-expired replacements in A′_{λ,μ} if A_{λ,μ} has expired) are passed to an existing algorithm for solution computation.

5.2. Fair Sliding-Window Algorithms

Generally, to extend SFDM1 and SFDM2 so that they work in the sliding-window model, we need to modify them in two respects: (i) the stream processing should follow the procedure of Algorithm 5 instead of Algorithm 1 to maintain the candidate solutions for the case where old elements are deleted from the window W; (ii) the post-processing should be adjusted to the candidate solutions kept by Algorithm 5 during stream processing while preserving theoretical guarantees.

Specifically, the procedures of our extended algorithms, i.e., SWFDM1 and SWFDM2, are presented in Algorithm 6. Here, we describe both algorithms together because they share many common subroutines and inherit others from Algorithms 2–5. Following the procedure of Algorithm 5, they initialize the candidate solutions for different guesses λ, μ of OPT[W] in the sequences Λ and U. In the stream processing (Lines 1–11), SWFDM1 and SWFDM2 adopt the same method as Algorithm 5 to maintain the unconstrained candidate solutions as well as the group-specific (monochromatic) candidate solutions for each group i ∈ [m]. The only difference is the solution size of each group-specific candidate, which is k_i for i ∈ {1,2} in SWFDM1 but k for each i ∈ [m] in SWFDM2.

The following theorem indicates the approximation factor of Algorithm 5.

Theorem 6.

Algorithm 5 is a ξ(1−ε)/5-approximation algorithm for max–min diversity maximization when a ξ-approximation algorithm ALG for (centralized) max–min diversity maximization is used for post-processing.

We refer readers to Lemma 4.7 in [14] for the proof of Theorem 6. Here, if GMM [39], which is 1/2-approximate for max–min diversity maximization, is used as ALG, the approximation factor of Algorithm 5 is (1−ε)/10. In terms of complexity, Algorithm 5 stores O((k log² Δ)/ε²) elements, takes O((k log² Δ)/ε²) time per element for stream processing, and spends O((k² log² Δ)/ε²) time for post-processing.

Algorithm 6 SWFDM

Input: Stream X = ∪_{i=1}^m X_i, distance metric d(·,·), parameter ε ∈ (0,1), window size w ∈ Z⁺, size constraints k_1, …, k_m (k = Σ_{i=1}^m k_i)
Output: A set S ⊆ W s.t. |S ∩ X_i| = k_i for i ∈ [m]
Stream processing
1: Λ, U ← {d_min·(1−ε)^(−j) : j ∈ Z_{≥0} ∧ d_min·(1−ε)^(−j) ≤ d_max}
2: for all λ ∈ Λ and μ ∈ U do
3:     Initialize A_{λ,μ}, A′_{λ,μ}, B_{λ,μ}, B′_{λ,μ} ← ∅
4:     for all i ∈ [m] do
5:         Initialize A_{λ,μ}^(i), A′_{λ,μ}^(i), B_{λ,μ}^(i), B′_{λ,μ}^(i) ← ∅
6: for all x ∈ X do
7:     Run Lines 5–16 of Algorithm 5 to update A_{λ,μ}, A′_{λ,μ}, B_{λ,μ}, and B′_{λ,μ} w.r.t. x
8:     if m = 2, c(x) = i, and 'SWFDM1' is used then
9:         Run Lines 5–16 of Algorithm 5 to update A_{λ,μ}^(i), A′_{λ,μ}^(i), B_{λ,μ}^(i), and B′_{λ,μ}^(i) w.r.t. x under size constraint k_i
10:     else if c(x) = i and 'SWFDM2' is used then
11:         Run Lines 5–16 of Algorithm 5 to update A_{λ,μ}^(i), A′_{λ,μ}^(i), B_{λ,μ}^(i), and B′_{λ,μ}^(i) w.r.t. x under size constraint k
Post-processing
12: W ← {x ∈ X : max{1, |X| − w + 1} ≤ t(x) ≤ |X|}
13: for all λ ∈ Λ and μ ∈ U do
14:     if A_{λ,μ} ⊆ W then
15:         S_{λ,μ} ← ALG(k, A_{λ,μ} ∪ B_{λ,μ})
16:     else if B_{λ,μ} ⊆ W then
17:         S_{λ,μ} ← ALG(k, (W ∩ A′_{λ,μ}) ∪ B_{λ,μ})
18:     if m = 2 and 'SWFDM1' is used then
19:         if |S_{λ,μ}| = k and |S_{λ,μ} ∩ X_i| < k_i for some group i then
20:             if A_{λ,μ}^(i) ⊆ W then
21:                 S_{λ,μ}^(i) ← ALG(k_i, A_{λ,μ}^(i) ∪ B_{λ,μ}^(i))
22:             else if B_{λ,μ}^(i) ⊆ W then
23:                 S_{λ,μ}^(i) ← ALG(k_i, (W ∩ A′_{λ,μ}^(i)) ∪ B_{λ,μ}^(i))
24:             Run Lines 10–15 of Algorithm 2 using S_{λ,μ} and S_{λ,μ}^(i) as input to find a fair solution S_{λ,μ}
25:     else if 'SWFDM2' is used then
26:         for all i ∈ [m] do
27:             if A_{λ,μ}^(i) ⊆ W then
28:                 S_{λ,μ}^(i) ← ALG(k, A_{λ,μ}^(i) ∪ B_{λ,μ}^(i))
29:             else if B_{λ,μ}^(i) ⊆ W then
30:                 S_{λ,μ}^(i) ← ALG(k, (W ∩ A′_{λ,μ}^(i)) ∪ B_{λ,μ}^(i))
31:         Run Lines 10–17 of Algorithm 3 (with d(x,y) < ξμ/(m+1) in Line 13) using S_{λ,μ} and S_all = (∪_{i=1}^m S_{λ,μ}^(i)) ∪ S_{λ,μ} as input to find a fair solution S_{λ,μ}
32: return S ← argmax_{λ ∈ Λ, μ ∈ U: |S_{λ,μ}| = k} div(S_{λ,μ})

The post-processing steps of both algorithms for the window W containing the last w elements of X are shown in Lines 12–31. Note that these steps can be trivially used for any window W(T) based on the intermediate candidate solutions at time T. It first computes an unconstrained solution S_{λ,μ} for each λ ∈ Λ and μ ∈ U from the (unconstrained) candidates kept during stream processing based on Algorithm 5. For SWFDM1, it next checks whether S_{λ,μ} contains k elements and whether an under-filled group exists in S_{λ,μ}. If |S_{λ,μ}| < k, the post-processing procedure is skipped because S_{λ,μ} cannot produce any valid solution. Moreover, if |S_{λ,μ}| = k and the fairness constraint is already satisfied, no further post-processing is required. Otherwise, it computes a group-specific solution S_{λ,μ}^(i_u) of size k_{i_u} from the candidates maintained for the under-filled group i_u and performs the procedure of Lines 10–15 of Algorithm 2 to greedily swap the elements from S_{λ,μ}^(i_u) into S_{λ,μ} and the elements from the over-filled group i_o out of S_{λ,μ} so that S_{λ,μ} becomes a fair solution. For SWFDM2, it computes a group-specific solution S_{λ,μ}^(i) of size k from each group-specific candidate for i ∈ [m]. S_{λ,μ} together with each S_{λ,μ}^(i) constitutes S_all for post-processing. Then, using the same method as Algorithm 3, it picks a subset S_{λ,μ}′ of S_{λ,μ}, divides S_all into clusters, and augments S_{λ,μ}′ via matroid intersection into the new solution S_{λ,μ}. Both algorithms return the fair solution with maximum diversity after post-processing as the final solution for FDM on the window W.

  • Theoretical Analysis: Subsequently, we will analyze the theoretical soundness and complexities of the extended SWFDM1 and SWFDM2 algorithms for FDM in the sliding-window model by generalizing the analyses for SFDM1 and SFDM2 in Section 4.

Theorem 7.

SWFDM1 is a ξ(1−ε)/10-approximation algorithm for FDM in the sliding-window model when a ξ-approximation algorithm is used for post-processing. It keeps O((k log² Δ)/ε²) elements in memory, takes O((k log² Δ)/ε²) time per element during stream processing, and spends O((k² log² Δ)/ε²) time for post-processing.

Proof. 

First, based on the analyses in [14], when μ ≤ OPT[W]/5, there exists λ ∈ Λ such that div(S_{λ,μ}) ≥ ξμ. Let μ be the value in U such that μ ∈ [(1−ε) · OPT_f[W]/5, OPT_f[W]/5], where OPT_f[W] is the optimal diversity for FDM on window W. Obviously, OPT_f[W] ≤ OPT[W]. Accordingly, we can find values λ ∈ Λ and μ ∈ U with div(S_{λ,μ}) ≥ ξμ. Then, Lemma 2 guarantees that div(S_{λ,μ}) ≥ ξμ/2 after the post-processing procedure. Combining the above results, we have div(S) ≥ div(S_{λ,μ}) ≥ (1−ε)ξ/10 · OPT_f[W], where S is the solution for FDM on W returned by SWFDM1. Finally, since the number of candidates increases from O((log Δ)/ε) to O((log² Δ)/ε²) and the complexities of the remaining steps are unchanged, the time and space complexities of SWFDM1 grow by a factor of O((log Δ)/ε) compared with SFDM1. □

Theorem 8.

SWFDM2 is a ξ(1−ε)/(15m+10)-approximation algorithm for FDM in the sliding-window model when a ξ-approximation algorithm is used for post-processing. It keeps O((km log² Δ)/ε²) elements in memory, takes O((k log² Δ)/ε²) time per element during stream processing, and spends O((k²m log² Δ)/ε² · (m + log² k)) time for post-processing.

Proof. 

Similar to the proof of Theorem 7, we find values λ ∈ Λ and μ ∈ U such that μ ∈ [(1−ε) · OPT_f[W]/5, OPT_f[W]/5] and div(S_{λ,μ}) ≥ ξμ, where OPT_f[W] is the optimal diversity value for FDM on W. Then, Lemmas 3 and 4 guarantee that div(S_{λ,μ}) ≥ ξμ/(3m+2) after the post-processing procedure. Combining the above results, we have div(S) ≥ div(S_{λ,μ}) ≥ (1−ε)ξ/(15m+10) · OPT_f[W], where S is the solution for FDM on W returned by SWFDM2. Since the number of candidates increases from O((log Δ)/ε) to O((log² Δ)/ε²) and the complexities of the remaining steps are unchanged, the time and space complexities of SWFDM2 grow by a factor of O((log Δ)/ε) compared with SFDM2. □

Finally, since the approximation factor ξ of the algorithm ALG we use is Θ(1), e.g., ξ = 1/2 for GMM [39], the approximation factors of SWFDM1 and SWFDM2 are written as Θ(1) and Θ(1/m), respectively, for simplicity.

6. Experiments

In this section, we evaluate the performance of our proposed algorithms on several real-world and synthetic datasets. We first introduce our experimental setup in Section 6.1. Then, experimental results in the streaming setting are presented in Section 6.2. Finally, experimental results in the sliding-window setting are presented in Section 6.3.

6.1. Experimental Setup

  • Datasets: Our experiments are conducted on four publicly available real-world datasets, as follows:

  • Adult (https://archive.ics.uci.edu/dataset/2/adult, accessed on 12 July 2023) is a collection of 48,842 records from the 1994 US Census database. We select six numeric attributes as features and normalize each of them to have zero mean and unit standard deviation. The Euclidean distance is used as the distance metric. The groups are generated from two demographic attributes: sex and race. By using them individually and in combination, there are two (sex), five (race), and ten (sex + race) groups, respectively.

  • CelebA (https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html, accessed on 12 July 2023) is a set of 202,599 images of human faces. We use 41 pre-trained class labels as features and the Manhattan distance as the distance metric. We generate two groups from sex {‘female’, ‘male’}, two groups from age {‘young’, ‘not young’}, and four groups from their combination, respectively.

  • Census (https://archive.ics.uci.edu/dataset/116/us+census+data+1990, accessed on 12 July 2023) is a set of 2,426,116 records from the 1990 US Census data. We take 25 (normalized) numeric attributes as features and use the Manhattan distance as the distance metric. We generate 2, 7, and 14 groups from sex, age, and both of them, respectively.

  • Lyrics (http://millionsongdataset.com/musixmatch, accessed on 12 July 2023) is a set of 122,448 documents, each of which is the lyrics of a song. We train a topic model with 50 topics using LDA [45] implemented in Gensim (https://radimrehurek.com/gensim, accessed on 12 July 2023). Each document is represented as a 50-dimensional vector and the angular distance is used as the distance metric. We generate 15 groups based on the primary genres of songs.

We also generate different synthetic datasets with varying n and m for scalability tests. In each synthetic dataset, we generate ten two-dimensional Gaussian isotropic blobs with random centers in [−10, 10]² and identity covariance matrices. We assign points to groups uniformly at random. The Euclidean distance is used as the distance metric. The number n of points varies from 10³ to 10⁷ with fixed m = 2 or 10. The number m of groups varies from 2 to 20 with fixed n = 10⁵. The statistics of all datasets are summarized in Table 1.
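A minimal sketch of how such a synthetic dataset could be generated, assuming scikit-learn's make_blobs (the paper's exact generation script is not specified), is:

```python
import numpy as np
from sklearn.datasets import make_blobs

n, m = 100_000, 10
rng = np.random.default_rng(42)
centers = rng.uniform(-10, 10, size=(10, 2))        # 10 random blob centers
X, _ = make_blobs(n_samples=n, centers=centers,
                  cluster_std=1.0, random_state=42) # identity covariance
groups = rng.integers(0, m, size=n)                 # uniform group labels
```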

Table 1.

Statistics of datasets used in the experiments.

Dataset Group n m # Features Distance Function
Adult Sex 48,842 2 6 Euclidean
Race 5
S + R 10
CelebA Sex 202,599 2 41 Manhattan
Age 2
S + A 4
Census Sex 2,426,116 2 25 Manhattan
Age 7
S + A 14
Lyrics Genre 122,448 15 50 Angular
Synthetic - $10^3$–$10^7$ 2–20 2 Euclidean
  • Algorithms: We compare our streaming algorithms, i.e., SFDM1 and SFDM2, and sliding-window algorithms, i.e., SWFDM1 and SWFDM2, with four existing offline FDM algorithms: the $\frac{1}{3m-1}$-approximation FairFlow algorithm for an arbitrary $m$, the $\frac{1}{5}$-approximation FairGMM algorithm for small $k$ and $m$, and the $\frac{1}{4}$-approximation FairSwap algorithm specific to $m=2$ in [17], as well as the $\frac{1-\varepsilon}{m+1}$-approximation FairGreedyFlow algorithm for an arbitrary $m$ in [20]. Since no implementation of the algorithms in [17,20] is available, we implemented them ourselves, following the descriptions in the original papers. All the algorithms are implemented in Python 3. All experiments were run on a desktop with an Intel® Core i5-9500 3.0GHz processor and 32GB RAM running Ubuntu 20.04.3 LTS. Each algorithm was run on a single thread.

For a given solution size $k$, the group-specific size constraint $k_i$ for each group $i \in [m]$ is set based on equal representation, which has been widely used in the literature [21,22,23,27]: if $k$ is divisible by $m$, then $k_i = \frac{k}{m}$ for each $i \in [m]$; otherwise, $k_i = \lceil \frac{k}{m} \rceil$ for some groups and $\lfloor \frac{k}{m} \rfloor$ for the others, while ensuring $\sum_{i=1}^{m} k_i = k$. We also compare the performance of different algorithms under proportional representation [8,26,27], another popular notion of fairness, which requires that the proportion of elements from each group in the solution roughly preserves the proportion of that group in the dataset. A sketch of both allocation rules follows below.
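The following is a minimal sketch of the two allocation rules; which groups receive the extra slots (here, the first groups for ER and the largest fractional remainders for PR) is our illustrative tie-breaking choice, not a prescribed rule.

```python
from collections import Counter

def equal_representation(k: int, m: int) -> list[int]:
    """Split k slots over m groups as evenly as possible: each group gets
    floor(k/m), and the first (k mod m) groups get one extra slot."""
    base, extra = divmod(k, m)
    return [base + 1 if i < extra else base for i in range(m)]

def proportional_representation(k: int, group_labels) -> list[int]:
    """Allocate slots proportionally to group frequencies in the dataset:
    round each quota down, then give leftover slots to the groups with the
    largest fractional remainders (largest-remainder rounding)."""
    counts = Counter(group_labels)
    n = sum(counts.values())
    groups = sorted(counts)
    quotas = {g: k * counts[g] / n for g in groups}
    ks = {g: int(quotas[g]) for g in groups}
    leftover = k - sum(ks.values())
    for g in sorted(groups, key=lambda g: quotas[g] - ks[g], reverse=True)[:leftover]:
        ks[g] += 1
    return [ks[g] for g in groups]

print(equal_representation(20, 3))                            # [7, 7, 6]
print(proportional_representation(20, ['a'] * 67 + ['b'] * 33))  # [13, 7]
```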

  • Performance Metrics: The performance of each algorithm is evaluated in terms of efficiency, quality, and space usage. Efficiency is measured as the average update time, i.e., the average wall-clock time used to compute a solution for each arriving element in the stream. Quality is measured by the value of the diversity function of the solution returned by an algorithm. Since computing the optimal diversity $OPT_f$ of FDM is infeasible, we run GMM [39] for unconstrained diversity maximization to estimate an upper bound of $OPT_f$ for comparison (see the sketch below). Space usage is measured by the number of distinct elements stored by each algorithm. However, only the numbers of elements stored by our proposed algorithms are reported, because the offline algorithms must keep all elements in memory for random access, so their space usage always equals the dataset (or window) size. We run each experiment 10 times with different permutations of the same dataset and report the average of each measure over the 10 runs.
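To make the upper-bound estimation concrete, the following is a minimal sketch of the GMM greedy heuristic of Gonzalez [39] under the Euclidean metric, not the exact implementation used in our experiments (which uses each dataset's own distance function). Since GMM is a $\frac{1}{2}$-approximation and $OPT \ge OPT_f$, twice the diversity of its output upper-bounds $OPT_f$; the brute-force diversity evaluation is for clarity, not efficiency.

```python
import numpy as np

def gmm(X: np.ndarray, k: int) -> list[int]:
    """Gonzalez's greedy heuristic: start from an arbitrary point, then
    repeatedly add the point farthest from the current selection."""
    selected = [0]
    dist = np.linalg.norm(X - X[0], axis=1)  # distance to nearest selected point
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return selected

def diversity(X: np.ndarray, S: list[int]) -> float:
    """Minimum pairwise distance within the selected subset."""
    return min(np.linalg.norm(X[i] - X[j])
               for a, i in enumerate(S) for j in S[a + 1:])

X = np.random.default_rng(0).normal(size=(1000, 2))
S = gmm(X, 20)
upper_bound_opt_f = 2 * diversity(X, S)  # OPT_f <= OPT <= 2 * div(GMM)
```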

6.2. Results in Streaming Setting

  • Effect of Parameter ε: Figure 6 illustrates the performance of SFDM1 and SFDM2 with different values of $\varepsilon$ when $k=20$. We vary $\varepsilon$ from 0.05 to 0.25 on Adult, CelebA, and Census and from 0.02 to 0.1 on Lyrics. Since the angular distance between any two vectors is at most $\frac{\pi}{2}$, larger values of $\varepsilon$ (e.g., >0.1) lead to greater estimation errors for $OPT_f$ and thus significantly lower solution quality on Lyrics. Generally, SFDM1 has higher efficiency and smaller space usage than SFDM2 for all values of $\varepsilon$, but SFDM2 exhibits better solution quality. Furthermore, the running time and numbers of stored elements of both algorithms decrease significantly as $\varepsilon$ increases. This is consistent with our analyses in Section 4, because the number of guesses for $OPT_f$, and thus the number of candidates maintained by both algorithms, is $O(\frac{\log \Delta}{\varepsilon})$ (a sketch of such a guess set follows below). A slightly surprising result is that the diversity values of the solutions do not degrade noticeably, even when $\varepsilon = 0.25$. This can be explained by the fact that both algorithms return the best solution after post-processing among all candidates, which means that they provide good solutions as long as there is some $\mu \in U$ close to $OPT_f$; we infer that such a $\mu$ still exists when $\varepsilon = 0.25$. Nevertheless, we note that the chance of finding an appropriate value of $\mu$ decreases as $\varepsilon$ grows, which results in less stable solution quality. Therefore, in the experiments for the streaming setting, we always use $\varepsilon = 0.1$ for both algorithms on all datasets except Lyrics, where $\varepsilon$ is set to 0.05. The impact of $\varepsilon$ on the performance of SWFDM1 and SWFDM2 is generally similar to that on SFDM1 and SFDM2. However, since the number of candidate solutions grows quadratically with $\frac{\log \Delta}{\varepsilon}$, we use a larger $\varepsilon = 0.25$ for SWFDM1 and SWFDM2 on all datasets except Lyrics, where $\varepsilon$ is set to 0.1.
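To make the dependence on $\varepsilon$ concrete, assume (as in Section 4) that the guesses for $OPT_f$ form a geometric sequence with ratio $1+\varepsilon$ covering $[d_{min}, d_{max}]$, where $\Delta = d_{max}/d_{min}$; a minimal sketch under that assumption:

```python
import math

def guess_set(d_min: float, d_max: float, eps: float) -> list[float]:
    """Geometric guesses of OPT_f with ratio (1 + eps); the sequence has
    about log(d_max / d_min) / log(1 + eps) = O(log(Delta) / eps) values."""
    num = math.ceil(math.log(d_max / d_min, 1 + eps)) + 1
    return [d_min * (1 + eps) ** j for j in range(num)]

U = guess_set(d_min=0.01, d_max=10.0, eps=0.1)
print(len(U))  # 74 guesses for Delta = 1000, roughly ln(Delta) / eps
```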

Figure 6.

Performance of SFDM1 and SFDM2 with varying parameter ε on (a) Adult (Sex, m=2), (b) CelebA (Sex, m=2), (c) Census (Sex, m=2), and (d) Lyrics (Genre, m=15) when k=20.

  • Overview: Table 2 presents the performance of different algorithms for FDM in the streaming setting on four real-world datasets with different group partitions when the solution size $k$ is fixed to 20. FairGMM is not included because it needs to enumerate at most $\binom{km}{k} = O((em)^k)$ candidates for solution computation and thus cannot scale to $k > 10$ and $m > 5$ (see the illustration after Table 2). First, compared with the unconstrained solutions returned by GMM, all the fair solutions are less diverse because of the additional fairness constraints. Since GMM is a $\frac{1}{2}$-approximation algorithm and $OPT \ge OPT_f$, $2 \cdot div(S_{GMM})$ is an upper bound of $OPT_f$, from which we observe that all five fair algorithms return solutions with much better approximation ratios than their worst-case lower bounds. In the case of $m=2$, SFDM1 runs the fastest among all five algorithms, achieving a speed-up of two to four orders of magnitude over FairSwap, FairFlow, and FairGreedyFlow. At the same time, its solution quality is close or equal to that of FairSwap in most cases. SFDM2 is less efficient than SFDM1 due to the higher cost of post-processing. However, it is still much more efficient than the offline algorithms by taking advantage of stream processing. In addition, the solution quality of SFDM2 benefits from the greedy selection procedure in Algorithm 4: it is not only consistently better than that of SFDM1, but also better than that of FairSwap on the Adult and Census datasets. In the case of $m > 2$, SFDM1 and FairSwap are no longer applicable. In addition, FairGreedyFlow cannot finish within one day on the Census dataset, and the corresponding results are thus omitted. SFDM2 shows significant advantages over FairFlow and FairGreedyFlow in terms of both solution quality and efficiency: it provides up to 3.4 times more diverse solutions while running several orders of magnitude faster. In terms of space usage, both SFDM1 and SFDM2 store very small portions of the elements (<0.1% on Census) on all datasets. SFDM2 stores slightly more elements than SFDM1 because the capacity of each group-specific candidate for group $i$ is set to $k$ instead of $k_i$. For SFDM2, the number of stored elements increases nearly linearly with $m$, since the number of candidates is linear in $m$.

Table 2.

Overview of the performance of different algorithms in the streaming setting (k=20).

Dataset | Group | GMM (Diversity) | FairSwap (Diversity; Time (s)) | FairFlow (Diversity; Time (s)) | FairGreedyFlow (Diversity; Time (s)) | SFDM1 (Diversity; Time (s); #Elem) | SFDM2 (Diversity; Time (s); #Elem)
Adult Sex 5.0226 4.1485 7.06 3.1190 5.45 2.1315 59.99 3.9427 0.0256 90.2 4.1710 0.0965 120.4
Race - - 1.3702 5.82 1.1681 58.59 - - - 3.1373 1.0175 312.3
S + R - - 1.0049 6.55 0.8490 60.92 - - - 2.9182 3.0914 620.6
CelebA Sex 13.0 11.4 35.13 8.4 22.97 5.0 705.4 9.8 0.0188 87.2 10.9 0.0410 122.3
Age 11.4 31.69 7.2 23.06 5.0 657.4 10.4 0.0225 94.6 10.8 0.0591 128.0
S + A - - 6.3 24.17 3.5 312.8 - - - 10.4 0.1124 193.1
Census Sex 35.0 27.0 372.3 17.5 254.6 - - 27.0 0.0321 121.5 31.0 0.0931 163.0
Age - - 8.5 294.0 - - - - - 21.0 0.8671 676.0
S + A - - 5.0 347.5 - - - - - 19.0 3.7539 1276.0
Lyrics Genre 1.5476 - - 0.4244 14.84 0.6732 302.6 - - - 1.4463 2.6785 677.2
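As a quick sanity check of the enumeration cost noted above (assuming the candidate count $\binom{km}{k}$), even moderate values of $k$ and $m$ put FairGMM far out of reach:

```python
from math import comb

# Number of candidate subsets FairGMM may enumerate: C(km, k).
print(comb(10 * 5, 10))  # k=10, m=5: 10,272,278,170 candidates
print(comb(20 * 5, 20))  # k=20, m=5: ~5.4e20 candidates
```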
  • Effect of Solution Size k: The impact of the solution size $k$ on the performance of different algorithms in the streaming setting is shown in Figure 7 and Figure 8. Here, we vary $k$ in $[5,50]$ when $m \le 5$, $[10,50]$ when $5 < m \le 10$, and $[15,50]$ when $m > 10$, since we require that an algorithm pick at least one element from each group. For each algorithm, the diversity value drops as $k$ grows, since the diversity function is monotonically non-increasing in $k$. At the same time, the update time grows with $k$, as the time complexities are linear or quadratic w.r.t. $k$. Compared with the solutions of GMM, all fair solutions are only slightly less diverse when $m=2$; the gaps in diversity values become more apparent for larger $m$. Although FairGMM achieves slightly higher solution quality than any other algorithm when $k \le 10$ and $m=2$, it does not scale to larger $k$ and $m$ due to the enormous cost of enumeration. The solution quality of FairSwap, SFDM1, and SFDM2 is close to each other when $m=2$ and better than that of FairFlow and FairGreedyFlow, while the efficiencies of SFDM1 and SFDM2 are orders of magnitude higher than those of the offline algorithms. Furthermore, when $m > 2$, SFDM2 outperforms FairFlow and FairGreedyFlow in terms of both efficiency and effectiveness across all values of $k$. However, since the time complexity of SFDM2 is quadratic w.r.t. both $k$ and $m$, its update time increases drastically with $k$ and might approach that of FairFlow when $k$ and $m$ are large.

Figure 7.


Solution quality of different algorithms in the streaming setting with varying solution sizes, k. The diversity values of GMM are plotted as gray lines to illustrate the “price of fairness”, i.e., the losses in diversity caused by incorporating the fairness constraints.

Figure 8.


Update time of different algorithms in the streaming setting with varying solution sizes, k.

  • Scalability: We evaluate the scalability of each algorithm in the streaming setting on synthetic datasets by varying the dataset size $n$ from $10^3$ to $10^7$ and the number of groups $m$ from 2 to 20. The results on solution quality and update time for different values of $n$ and $m$ when $k=20$ are presented in Figure 9. First of all, SFDM2 shows much better scalability w.r.t. $m$ than FairFlow and FairGreedyFlow in terms of solution quality: the diversity value of its solution decreases only slightly with $m$ and is up to 3 times higher than that of FairFlow and FairGreedyFlow when $m > 10$. However, its update time increases more rapidly with $m$ due to the quadratic dependence on $m$. Moreover, the diversity values of different algorithms grow slightly with $n$ but remain close to each other for all values of $n$ when $m=2$. Finally, the running time of the offline algorithms is linear in $n$, whereas the update times of SFDM1 and SFDM2 are almost independent of $n$, as analyzed in Section 4.

Figure 9.


Solution quality and update time on synthetic datasets in the streaming setting with varying dataset sizes, n, and numbers of groups, m (k=20).

  • Equal vs. Proportional Representation: Figure 10 compares the solution quality and running time of different algorithms under the two popular notions of fairness, i.e., equal representation (ER) and proportional representation (PR), when $k=20$ on Adult, whose groups are highly skewed: 67% of the records are for males and 87% for Whites. The diversity value of the solution of each algorithm is slightly higher for PR than for ER, as the solution for PR is closer to the unconstrained one. The running times of SFDM1 and SFDM2 are slightly shorter for PR than for ER, since fewer swapping and augmentation steps are performed on each candidate during post-processing. The results for SWFDM1 and SWFDM2 are similar and are omitted.

Figure 10.


Comparison of different algorithms on Adult for equal representation (ER) and proportional representation (PR) when k=20.

6.3. Results in Sliding-Window Setting

  • Overview: Table 3 shows the performance of different algorithms for sliding-window FDM on four real-world datasets with different group settings, when the solution size $k$ is fixed to 20 and the window size $w$ is set to 25 k on Adult (as its size is smaller than 100 k) and 100 k on the other datasets. FairGMM is again omitted from Table 3 due to its high complexity. Compared with the streaming setting, the “price of fairness” becomes higher in the sliding-window setting, for two possible reasons. First, the approximation factors of our proposed algorithms are lower. Second, when $m$ is large, some minority groups contain very few elements in the window (only marginally more than $k_i$), so the selection of elements from such groups is severely restricted by the fairness constraints. Nevertheless, we still find that all fair algorithms provide solutions with much better approximations than their worst-case lower bounds.

Table 3.

Overview of the performance of different algorithms in the sliding-window setting (k=20; w=25 k for Adult and w=100 k for other datasets).

Dataset | Group | GMM (Diversity) | FairSwap (Diversity; Time (s)) | FairFlow (Diversity; Time (s)) | FairGreedyFlow (Diversity; Time (s)) | SWFDM1 (Diversity; Time (s); #Elem) | SWFDM2 (Diversity; Time (s); #Elem)
Adult Sex 4.9598 4.0568 6.11 3.0660 5.47 2.0501 13.95 3.5445 0.870 431.5 3.4052 0.489 551.8
Race - - 1.2364 5.87 0.4162 7.71 - - - 2.5212 1.056 616.1
S + R - - 0.9105 6.43 0.3276 4.15 - - - 1.7843 1.027 799.6
CelebA Sex 12.0 11.3 28.12 7.7 23.22 6.0 138.4 9.7 2.526 427.7 8.7 0.875 560.6
Age 10.6 26.99 7.3 23.11 6.0 121.5 10.5 2.966 418.5 8.7 0.976 537.2
S + A - - 6.2 24.10 4.0 112.1 - - - 8.8 2.119 864.1
Census Sex 32.0 30.0 26.33 18.0 21.49 11.0 109.9 29.0 2.593 377.0 28.0 1.614 397.0
Age - - 5.0 22.49 2.0 76.15 - - - 13.0 2.568 646.0
S + A - - 2.0 24.11 5.0 128.4 - - - 13.0 3.077 796.0
Lyrics Genre 1.5586 - - 0.2522 20.12 0.6432 133.4 - - - 1.2166 4.694 1132.4

We observe that SWFDM2 runs the fastest of all five algorithms, achieving 5–150× speedups over FairSwap, FairFlow, and FairGreedyFlow. Moreover, SWFDM1 and SWFDM2 have slightly lower solution quality than FairSwap when $m=2$. Nevertheless, SWFDM2 shows significant advantages over FairFlow and FairGreedyFlow in terms of both solution quality and efficiency when $m>2$. Unlike in the streaming setting, SWFDM2 is more efficient than SWFDM1. This is because SWFDM2 maintains group-specific solutions with size constraint $k$, instead of $k_i$ as in SWFDM1, during stream processing. Consequently, its group-specific solutions often expire (i.e., $A_{\lambda,\mu}(i) \not\subseteq W$) and thus are not eligible for post-processing (see the sketch of the expiration test below). However, such efficiency improvements come at the expense of less diverse solutions. In terms of space usage, both SWFDM1 and SWFDM2 store very small portions of the elements (at most 3.2% of $w$) across all datasets. SWFDM2 keeps slightly more elements than SWFDM1, again because the capacity of each group-specific solution is $k$ instead of $k_i$.
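For intuition, here is a minimal sketch of the expiration test implied above, assuming each stored element carries its arrival index in the stream; the function name and bookkeeping are hypothetical, not our actual implementation.

```python
def is_expired(candidate_indices: list[int], t: int, w: int) -> bool:
    """A candidate expires once any of its elements has left the window of
    the last w arrivals, i.e., arrived at position t - w or earlier."""
    return any(idx <= t - w for idx in candidate_indices)

# At time t = 1000 with window size w = 100, the window covers arrivals
# 901..1000, so a candidate containing an element from index 850 is expired.
print(is_expired([850, 950, 990], t=1000, w=100))  # True
```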

  • Effect of Solution Size k: The impact of the solution size $k$ on the performance of different algorithms in the sliding-window setting is illustrated in Figure 11 and Figure 12. We use the same values of $k$ as in the streaming setting, and the window size $w$ is set to 25 k for Adult and 100 k for the other datasets. For each algorithm, the diversity value drops as $k$ grows, since the diversity function is monotonically non-increasing in $k$, and the update time grows with $k$, as the time complexities are linear or quadratic w.r.t. $k$. The gaps in diversity values between the unconstrained and fair solutions are much larger than those in the streaming setting, for the reasons explained in the previous paragraph. The solution quality of SWFDM1 and SWFDM2 is slightly lower than that of FairSwap when $m=2$, but still better than that of FairFlow and FairGreedyFlow, while their efficiencies are always much higher than those of the offline algorithms. Finally, when $m>2$, SWFDM2 outperforms FairFlow and FairGreedyFlow in terms of both efficiency and effectiveness across all values of $k$.

Figure 11.


Solution quality of different algorithms in the sliding-window setting with varying solution size k (w=25 k for Adult and 100 k for others). The diversity values of GMM are also plotted as gray lines to illustrate the “price of fairness”.

Figure 12.


Update time of different algorithms in the sliding-window setting with varying solution size k (w=25 k for Adult and 100 k for others).

  • Scalability: We evaluate the scalability of each algorithm in the sliding-window setting on synthetic datasets by varying the number of groups $m$ from 2 to 20 and the window size $w$ from $10^3$ to $10^6$. The results on solution quality and update time for different values of $w$ and $m$ when $k=20$ are presented in Figure 13. First of all, SWFDM2 shows much better scalability w.r.t. $m$ than FairFlow and FairGreedyFlow in terms of solution quality: the diversity value of its solution decreases only slightly with $m$, whereas the diversity values of FairFlow and FairGreedyFlow drop drastically with $m$. Nevertheless, its update time increases more rapidly with $m$, since its time complexity is quadratic w.r.t. $m$. Furthermore, the diversity values of different algorithms with varying $w$ behave similarly to those with varying $k$. As expected, the running time of the offline algorithms is nearly linear in $w$. However, unlike in the streaming setting, the update times of SWFDM1 and SWFDM2 increase with $w$, because more candidates are non-expired and thus considered in post-processing for larger values of $w$.

Figure 13.


Solution quality and update time on synthetic datasets in the sliding-window setting with varying window size w and number of groups m (k=20).

7. Conclusions

In this paper, we studied the diversity maximization problem with fairness constraints in the streaming and sliding-window settings. We first proposed a $\frac{1-\varepsilon}{4}$-approximation streaming algorithm for this problem when there are two groups in the dataset and a $\frac{1-\varepsilon}{3m+2}$-approximation streaming algorithm that can deal with an arbitrary number $m$ of groups. Moreover, we extended the two proposed streaming algorithms to the sliding-window model while maintaining approximation factors of $\Theta(1)$ and $\Theta(m^{-1})$, respectively. Extensive experiments on real-world and synthetic datasets confirmed the efficiency, effectiveness, and scalability of our proposed algorithms.

In future work, we would like to improve the approximation ratios of the proposed algorithms. It would also be interesting to consider diversity maximization problems with other objective functions and fairness constraints defined on multiple sensitive attributes.

Author Contributions

Conceptualization, Y.W. and M.M.; methodology, Y.W.; software, Y.W., F.F. and J.L.; validation, M.M.; data curation, F.F. and J.L.; writing—original draft preparation, Y.W.; writing—review and editing, Y.W., F.F. and M.M.; visualization, Y.W. and J.L.; supervision, Y.W. and M.M.; funding acquisition, Y.W. and M.M. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Real-world datasets that we use in our experiments are publicly available. The code for generating synthetic data and for our experiments is available at https://github.com/yhwang1990/code-FDM (accessed on 12 July 2023).

Conflicts of Interest

The authors declare no conflict of interest.

Funding Statement

This research was funded by the MLDB project of the Academy of Finland (decision number: 322046) and the National Natural Science Foundation of China under grant number 62202169.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

References

  • 1. Ahmed M. Data summarization: A survey. Knowl. Inf. Syst. 2019;58:249–273. doi: 10.1007/s10115-018-1183-0.
  • 2. Qin L., Yu J.X., Chang L. Diversifying Top-K Results. Proc. VLDB Endow. 2012;5:1124–1135. doi: 10.14778/2350229.2350233.
  • 3. Zheng K., Wang H., Qi Z., Li J., Gao H. A survey of query result diversification. Knowl. Inf. Syst. 2017;51:1–36. doi: 10.1007/s10115-016-0990-4.
  • 4. Gollapudi S., Sharma A. An axiomatic approach for result diversification; Proceedings of the 18th International Conference on World Wide Web; Madrid, Spain. 20–24 April 2009; pp. 381–390.
  • 5. Rafiei D., Bharat K., Shukla A. Diversifying web search results; Proceedings of the 19th International Conference on World Wide Web; Raleigh, NC, USA. 26–30 April 2010; pp. 781–790.
  • 6. Kunaver M., Pozrl T. Diversity in recommender systems—A survey. Knowl. Based Syst. 2017;123:154–162. doi: 10.1016/j.knosys.2017.02.009.
  • 7. Zadeh S.A., Ghadiri M., Mirrokni V.S., Zadimoghaddam M. Scalable Feature Selection via Distributed Diversity Maximization; Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence; San Francisco, CA, USA. 4–9 February 2017; pp. 2876–2883.
  • 8. Celis L.E., Keswani V., Straszak D., Deshpande A., Kathuria T., Vishnoi N.K. Fair and Diverse DPP-Based Data Summarization; Proceedings of the 35th International Conference on Machine Learning; Stockholm, Sweden. 10–15 July 2018; pp. 715–724.
  • 9. Borodin A., Lee H.C., Ye Y. Max-Sum diversification, monotone submodular functions and dynamic updates; Proceedings of the 31st ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems; Scottsdale, AZ, USA. 21–23 May 2012; pp. 155–166.
  • 10. Drosou M., Pitoura E. DisC diversity: Result diversification based on dissimilarity and coverage. Proc. VLDB Endow. 2012;6:13–24. doi: 10.14778/2428536.2428538.
  • 11. Abbassi Z., Mirrokni V.S., Thakur M. Diversity maximization under matroid constraints; Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; Chicago, IL, USA. 11–14 August 2013; pp. 32–40.
  • 12. Indyk P., Mahabadi S., Mahdian M., Mirrokni V.S. Composable core-sets for diversity and coverage maximization; Proceedings of the 33rd ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems; Dallas, TX, USA. 15–17 May 2014; pp. 100–108.
  • 13. Ceccarello M., Pietracaprina A., Pucci G. Fast Coreset-based Diversity Maximization under Matroid Constraints; Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining; Marina Del Rey, CA, USA. 5–9 February 2018; pp. 81–89.
  • 14. Borassi M., Epasto A., Lattanzi S., Vassilvitskii S., Zadimoghaddam M. Better Sliding Window Algorithms to Maximize Subadditive and Diversity Objectives; Proceedings of the 38th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems; Amsterdam, The Netherlands. 1–3 July 2019; pp. 254–268.
  • 15. Bauckhage C., Sifa R., Wrobel S. Adiabatic Quantum Computing for Max-Sum Diversification; Proceedings of the 2020 SIAM International Conference on Data Mining; Cincinnati, OH, USA. 7–9 May 2020; pp. 343–351.
  • 16. Ceccarello M., Pietracaprina A., Pucci G., Upfal E. MapReduce and Streaming Algorithms for Diversity Maximization in Metric Spaces of Bounded Doubling Dimension. Proc. VLDB Endow. 2017;10:469–480. doi: 10.14778/3055540.3055541.
  • 17. Moumoulidou Z., McGregor A., Meliou A. Diverse Data Selection under Fairness Constraints; Proceedings of the 24th International Conference on Database Theory; Nicosia, Cyprus. 23–26 March 2021; pp. 13:1–13:25.
  • 18. Drosou M., Pitoura E. Diverse Set Selection Over Dynamic Data. IEEE Trans. Knowl. Data Eng. 2014;26:1102–1116. doi: 10.1109/TKDE.2013.44.
  • 19. Zhang G., Gionis A. Maximizing diversity over clustered data; Proceedings of the 2020 SIAM International Conference on Data Mining; Cincinnati, OH, USA. 7–9 May 2020; pp. 649–657.
  • 20. Addanki R., McGregor A., Meliou A., Moumoulidou Z. Improved Approximation and Scalability for Fair Max-Min Diversification; Proceedings of the 25th International Conference on Database Theory; Online. 29 March–1 April 2022; pp. 7:1–7:21.
  • 21. Chiplunkar A., Kale S., Ramamoorthy S.N. How to Solve Fair k-Center in Massive Data Models; Proceedings of the 37th International Conference on Machine Learning; Virtual. 13–18 July 2020; pp. 1877–1886.
  • 22. Jones M., Nguyen H., Nguyen T. Fair k-Centers via Maximum Matching; Proceedings of the 37th International Conference on Machine Learning; Virtual. 13–18 July 2020; pp. 4940–4949.
  • 23. Kleindessner M., Awasthi P., Morgenstern J. Fair k-Center Clustering for Data Summarization; Proceedings of the 36th International Conference on Machine Learning; Long Beach, CA, USA. 9–15 June 2019; pp. 3448–3457.
  • 24. Schmidt M., Schwiegelshohn C., Sohler C. Fair Coresets and Streaming Algorithms for Fair k-means; Proceedings of the 17th International Workshop on Approximation and Online Algorithms; Munich, Germany. 12–13 September 2019; pp. 232–251.
  • 25. Huang L., Jiang S.H., Vishnoi N.K. Coresets for Clustering with Fairness Constraints. Adv. Neural Inf. Process. Syst. 2019;32:7587–7598.
  • 26. El Halabi M., Mitrović S., Norouzi-Fard A., Tardos J., Tarnawski J.M. Fairness in Streaming Submodular Maximization: Algorithms and Hardness. Adv. Neural Inf. Process. Syst. 2020;33:13609–13622.
  • 27. Wang Y., Fabbri F., Mathioudakis M. Fair and Representative Subset Selection from Data Streams; Proceedings of the Web Conference 2021; Ljubljana, Slovenia. 19–23 April 2021; pp. 1340–1350.
  • 28. Wang Y., Fabbri F., Mathioudakis M. Streaming Algorithms for Diversity Maximization with Fairness Constraints; Proceedings of the 38th IEEE International Conference on Data Engineering; Kuala Lumpur, Malaysia. 9–12 May 2022; pp. 41–53.
  • 29. Cevallos A., Eisenbrand F., Zenklusen R. Max-Sum Diversity via Convex Programming; Proceedings of the 32nd International Symposium on Computational Geometry; Boston, MA, USA. 14–18 June 2016; pp. 26:1–26:14.
  • 30. Cevallos A., Eisenbrand F., Zenklusen R. Local Search for Max-Sum Diversification; Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms; Barcelona, Spain. 16–19 January 2017; pp. 130–142.
  • 31. Aghamolaei S., Farhadi M., Zarrabi-Zadeh H. Diversity Maximization via Composable Coresets; Proceedings of the 27th Canadian Conference on Computational Geometry; Kingston, ON, Canada. 10–12 August 2015; pp. 38–48.
  • 32. Chandra B., Halldórsson M.M. Approximation Algorithms for Dispersion Problems. J. Algorithms. 2001;38:438–465. doi: 10.1006/jagm.2000.1145.
  • 33. Erkut E. The discrete p-dispersion problem. Eur. J. Oper. Res. 1990;46:48–60. doi: 10.1016/0377-2217(90)90297-O.
  • 34. Ravi S.S., Rosenkrantz D.J., Tayi G.K. Heuristic and Special Case Algorithms for Dispersion Problems. Oper. Res. 1994;42:299–310. doi: 10.1287/opre.42.2.299.
  • 35. Hassin R., Rubinstein S., Tamir A. Approximation algorithms for maximum dispersion. Oper. Res. Lett. 1997;21:133–137. doi: 10.1016/S0167-6377(97)00034-5.
  • 36. Epasto A., Mahdian M., Mirrokni V., Zhong P. Improved Sliding Window Algorithms for Clustering and Coverage via Bucketing-Based Sketches; Proceedings of the 2022 ACM-SIAM Symposium on Discrete Algorithms; Virtual. 9–12 January 2022; pp. 3005–3042.
  • 37. Bhaskara A., Ghadiri M., Mirrokni V.S., Svensson O. Linear Relaxations for Finding Diverse Elements in Metric Spaces. Adv. Neural Inf. Process. Syst. 2016;29:4098–4106.
  • 38. Wang Y., Mathioudakis M., Li J., Fabbri F. Max-Min Diversification with Fairness Constraints: Exact and Approximation Algorithms; Proceedings of the 2023 SIAM International Conference on Data Mining (SDM); Minneapolis, MN, USA. 27–29 April 2023; pp. 91–99.
  • 39. Gonzalez T.F. Clustering to Minimize the Maximum Intercluster Distance. Theor. Comput. Sci. 1985;38:293–306. doi: 10.1016/0304-3975(85)90224-5.
  • 40. Korte B., Vygen J. Combinatorial Optimization: Theory and Algorithms. Springer; Berlin/Heidelberg, Germany: 2012.
  • 41. Cunningham W.H. Improved Bounds for Matroid Partition and Intersection Algorithms. SIAM J. Comput. 1986;15:948–957. doi: 10.1137/0215066.
  • 42. Chakrabarty D., Lee Y.T., Sidford A., Singla S., Wong S.C. Faster Matroid Intersection; Proceedings of the 60th IEEE Annual Symposium on Foundations of Computer Science; Baltimore, MD, USA. 9–12 November 2019; pp. 1146–1168.
  • 43. Nguyen H.L. A note on Cunningham’s algorithm for matroid intersection. arXiv 2019, arXiv:1904.04129.
  • 44. Chen D.Z., Li J., Liang H., Wang H. Matroid and Knapsack Center Problems. Algorithmica. 2016;75:27–52. doi: 10.1007/s00453-015-0010-1.
  • 45. Blei D.M., Ng A.Y., Jordan M.I. Latent Dirichlet Allocation. J. Mach. Learn. Res. 2003;3:993–1022.
