Abstract
Integrated Sensing and Communication (ISAC) has emerged as a cornerstone technology for next-generation wireless networks, where accurate performance evaluation is essential. In such systems, the capacity–distortion function provides a fundamental measure of the trade-off between communication and sensing performance, making its computation a problem of significant interest. However, the associated optimization problem is often non-convex, which poses considerable challenges to deriving effective solutions. In this paper, we propose extended Arimoto–Blahut (AB) algorithms to solve the non-convex optimization problem associated with the capacity–distortion trade-off in bistatic ISAC systems. Specifically, we introduce auxiliary variables to transform the non-convex distortion constraints in the optimization problem into linear constraints, prove that the reformulated linearly constrained optimization problem retains the same optimal solution as the original problem, and develop extended AB algorithms for both squared error distortion and logarithmic loss distortion. Numerical results validate the effectiveness of the proposed algorithms.
Keywords: integrated sensing and communication, capacity–distortion, Arimoto–Blahut algorithm
1. Introduction
Integrated sensing and communications (ISAC) has emerged as a key enabling technology for future wireless networks (beyond 5G and 6G), attracting extensive research across multiple domains. While much work has focused on wireless communication aspects such as waveform design and beamforming [1,2,3,4,5], a parallel line of information-theoretic research has recently developed to characterize its fundamental performance limits.
From an information-theoretic perspective, the performance of an ISAC system is quantified by its capacity–distortion function. This function is formulated as an optimization problem that maximizes mutual information under a distortion constraint. The authors in [6] examined the monostatic ISAC model and characterized the optimal trade-off between the capacity of reliable communication and the distortion of state estimation as an optimization problem with distortion constraints. A vector Gaussian channel with in-block memory was considered in [7,8], where the subspace trade-off and the random–deterministic trade-off between sensing and communication were identified. The capacity–distortion region of monostatic ISAC when the receiver has imperfect state knowledge was studied in [9]. The bistatic radar system, serving as a complementary paradigm to monostatic configurations, demonstrates superior channel interference mitigation capability due to its spatial diversity in transmitter–receiver separation. The authors in [10] characterized the rate-distortion and rate-detection exponent, respectively, of bistatic ISAC systems, where the sensing receiver estimates or detects the state based on the known sent information. Our previous work [11] considered a bistatic ISAC system in which the sensing receiver is unaware of the sent message. The capacity–distortion trade-off of the system was derived for some degraded channels and formulated as a rate-distortion optimization problem. In [12], a logarithmic loss (log-loss) function was selected to measure the quality of a soft estimate, and the corresponding capacity–distortion function of the bistatic ISAC model and the closed-form solutions for Gaussian channels under some conditions were derived. Therefore, solving the rate-distortion optimization problem is crucial for ISAC systems.
Unlike the tractable capacity–distortion trade-off in monostatic ISAC systems, which features a convex objective function and linear constraints, the corresponding problem in bistatic ISAC systems exhibits non-convexity in both its objective function and constraints. This inherent non-convex structure poses significant challenges to obtaining an efficient solution.
The Arimoto–Blahut (AB) algorithm, developed independently by Arimoto [13] and Blahut [14], is a widely applied method for calculating channel capacity and rate-distortion functions in information theory. To calculate the channel capacity of point-to-point channels, the AB algorithm replaces the conditional probability mass function (pmf) by a free variable and then maximizes the objective function over each variable alternatingly. The authors of [15] expanded the AB algorithm to compute the capacity region of the degraded broadcast channel, which is a non-convex optimization problem. This approach is also applicable for determining the capacity regions of less noisy broadcast channels. Furthermore, the authors of [16,17] developed AB-type algorithms to evaluate the supporting hyperplanes of the superposition coding region and those of the Nair–El Gamal outer bound for general broadcast channels. To calculate the rate-distortion function, Blahut [14] transformed the original distortion and compression rate problem into an unconstrained parameterized problem with respect to multipliers introduced by the distortion constraint. Then, for a fixed multiplier, a distortion and rate pair is calculated based on a framework similar to the AB algorithm above. Finally, the multiplier is traversed to obtain the complete rate-distortion function.
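As a concrete reference point, Blahut's multiplier-parameterized rate-distortion computation can be sketched as follows. This is a minimal discrete implementation of the classical procedure (variable names are illustrative, not the paper's notation): for each fixed multiplier beta, the reproduction marginal and the test channel are updated alternately in closed form, yielding one rate-distortion pair.

```python
import numpy as np

def blahut_rd(p_x, d, beta, n_iter=500, tol=1e-10):
    """One point of the rate-distortion curve via Blahut's alternating updates.

    p_x  : source pmf, shape (nx,)
    d    : distortion matrix d[x, xhat], shape (nx, nxhat)
    beta : multiplier for the distortion constraint; sweeping beta >= 0
           traces out the full R(D) curve
    Returns (rate in bits, expected distortion).
    """
    nx, nxhat = d.shape
    q_xhat = np.full(nxhat, 1.0 / nxhat)            # reproduction marginal
    for _ in range(n_iter):
        # Optimal test channel q(xhat|x) for the current marginal.
        w = q_xhat[None, :] * np.exp(-beta * d)
        q_cond = w / w.sum(axis=1, keepdims=True)
        # Optimal marginal for the current test channel.
        q_new = p_x @ q_cond
        if np.max(np.abs(q_new - q_xhat)) < tol:
            q_xhat = q_new
            break
        q_xhat = q_new
    joint = p_x[:, None] * q_cond
    ratio = np.where(joint > 0, q_cond / q_xhat[None, :], 1.0)
    rate = float(np.sum(joint * np.log2(ratio)))
    dist = float(np.sum(joint * d))
    return rate, dist
```

Sweeping beta from 0 upward yields the classical R(D) curve; avoiding exactly this sweep is one of the improvements pursued later in the paper.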
In contrast to the classical rate-distortion problem with its linear constraints, the capacity–distortion trade-off in bistatic ISAC systems is a non-convex optimization problem that creates difficulties for the application of the AB algorithm. This difficulty is compounded by the algorithm’s need to exhaustively search over Lagrangian multipliers, which complicates the direct determination of the achievable rate under a specific distortion constraint.
In this paper, we develop an optimization framework for rate-distortion optimization problems featuring non-convex yet differentiable distortion constraints, as motivated by emerging bistatic ISAC systems. Within this framework, we investigate the capacity–distortion trade-off under two typical distortion metrics: squared error (SE) and log-loss. The main contributions are summarized as follows:
By introducing auxiliary variables, the original non-convex optimization problem is equivalently transformed into a convex-constrained form, with proven consistency in the optimal solutions. This provides a theoretical foundation for subsequent algorithm design.
Based on the reformulated convex-constrained problem and the theory of the classical AB algorithm, an extended AB algorithm is proposed to obtain the optimal solution efficiently. This algorithm not only broadens the applicability of the AB approach but also supplies a feasible computational method for the capacity–distortion function in bistatic ISAC systems.
The proposed algorithm directly determines the achievable rate under a given distortion constraint without exhaustively searching over Lagrangian multipliers, thereby addressing the computational inefficiency and practical limitations inherent in traditional AB-type algorithms for computing rate-distortion functions.
Numerical results are provided to demonstrate the effectiveness of our proposed algorithm.
The rest of the paper is organized as follows. Section 2 introduces the monostatic and bistatic ISAC system models and the capacity–distortion trade-off of ISAC systems. Section 3 proposes an extended AB algorithm for the optimization problems of the bistatic ISAC system, together with a convergence analysis. Section 4 generalizes the proposed algorithm to solve the lossy source coding problem with side information. Section 5 evaluates the performance of the proposed algorithms through numerical simulations. Section 6 concludes the paper.
Notations: Upper-case letters represent random variables, and lower-case letters represent their realizations. $\mathbb{R}$ and $\mathbb{R}_+$ represent the sets of real numbers and non-negative real numbers, respectively. $X^n$ denotes the tuple of random variables $(X_1, \ldots, X_n)$, $\mathbb{E}[X]$ denotes the expectation of a random variable X, and $X \sim \mathcal{N}(\mu, \sigma^2)$ means that X is normally distributed with mean $\mu$ and variance $\sigma^2$. $H(\cdot)$ denotes the entropy function, $I(\cdot;\cdot)$ denotes the mutual information function, and $D(\cdot \| \cdot)$ denotes the Kullback–Leibler divergence.
2. Problem Formulation
In this section, we introduce the monostatic and bistatic ISAC systems and then present the capacity–distortion trade-off in the ISAC system with SE distortion and log-loss distortion, respectively. As shown in Figure 1, the monostatic ISAC system employs a dual-function transmitter (ISAC Tx) that simultaneously serves as the sensing receiver (SenRx), which is accompanied by a communication receiver (ComRx) [6]. In contrast, the bistatic ISAC system (Figure 2) adopts a distributed architecture with physically separated components: an ISAC Tx, a ComRx, and an independent SenRx. Specifically, the ISAC Tx transmits a codeword to convey information to the ComRx, which knows the channel states perfectly. In the monostatic ISAC system, the ISAC Tx also acts as the SenRx, estimating the channel state based on the received echo signals. By contrast, in the bistatic ISAC system, the SenRx at another location receives both the signals radiated by the ISAC Tx and those reflected from the ComRx to perform the sensing task of estimating the channel states. Subsequently, we present the information-theoretic model for bistatic ISAC systems and formally define the capacity–distortion function.
Figure 1. The monostatic ISAC system model.
Figure 2. The bistatic ISAC system model.
As shown in Figure 3, the bistatic ISAC system is modeled as a state-dependent memoryless channel (SDMC) with two receivers [11], where the state sequence is independently and identically distributed (i.i.d.) according to a given state distribution and is assumed to be perfectly and noncausally available at the ComRx but unavailable at the SenRx. Specifically, the transmitter encodes the message W into a codeword and transmits it over the SDMC with two receivers. After receiving , the ComRx obtains the estimate of the message by combining with . After receiving , the SenRx outputs as the estimation of the state sequence . The performance of the decoder is measured by the average probability of error . The accuracy of the state estimation is measured by the average expected distortion , where is a bounded distortion function.
Figure 3. Two-receiver SDMC model.
A rate–distortion pair is said to be achievable if there exists a sequence of codes such that the decoding error probability vanishes and the average expected distortion asymptotically does not exceed the target value. The capacity–distortion function is defined as the supremum of rates R such that the pair (R, D) is achievable, and this definition also applies to the monostatic ISAC system. To avoid confusion, we denote the capacity–distortion function of the monostatic ISAC system as . Furthermore, the capacity–distortion function of the ISAC system has various forms for different distortion metrics. In the following, we take SE (SE focuses on the magnitude of prediction error and is well suited for point estimation and quadratic cost problems) and log-loss (log-loss captures the probability consistency between the true and estimated distributions, making it suitable for distribution estimation and likelihood-based inference) distortion as examples to show the specific capacity–distortion function for the monostatic and bistatic ISAC systems.
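To make the two metrics concrete, a minimal sketch follows; the state values and pmfs below are illustrative, not taken from the paper.

```python
import math

def squared_error(s, s_hat):
    # SE distortion: penalizes the magnitude of a point estimate's error.
    return (s - s_hat) ** 2

def log_loss(s, q, base=2.0):
    # Log-loss distortion: d(s, q) = -log q(s), penalizes a soft estimate
    # q (a pmf over states) for assigning low probability to the true s.
    return -math.log(q[s], base)

# A point estimate is scored by how far it lands from the true state,
# while a soft estimate is scored by the likelihood it gave the truth.
print(squared_error(1.0, 0.5))        # 0.25
print(log_loss(0, {0: 0.9, 1: 0.1}))  # about 0.152 bits
```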
2.1. ISAC System with SE Distortion
When the distortion metric is selected as SE, i.e., $d(s, \hat{s}) = (s - \hat{s})^2$, we have the following capacity–distortion function results for both the monostatic and bistatic ISAC systems.
Lemma 1
(Theorem 1, [6]). The capacity–distortion function for the monostatic ISAC system is the optimal solution to the following optimization problem:
(1) where , and the joint distribution of is given by for some pmf .
As demonstrated, the constraint term in problem (1) is linear with respect to the optimization variable , since
which is independent of . Thus, the constraint term in problem (1) can be rewritten as , where is independent of .
Lemma 2
(Theorem 6, [11]). For the bistatic ISAC system, the capacity–distortion function for the case where forms a Markov chain is the optimal solution to the following optimization problem:
(2) where , and the joint distribution of is given by for some pmf .
It is observed that the objective function in the optimization problem (2) is the sum of two mutual information terms, which is non-convex with respect to the optimization variable . In addition, the constraint term in problem (2) is also non-convex with respect to the optimization variable by observing that
and
which makes the problem (2) difficult to solve via existing methods.
2.2. ISAC System with Log-Loss Distortion
If the distortion metric is log-loss, i.e., $d(s, \hat{s}) = \log \frac{1}{\hat{s}(s)}$, where $\hat{s}(\cdot)$ is a probability distribution over the state alphabet serving as the soft estimator of S, the following results hold.
Lemma 3
(Theorem 1, [18]). The capacity–distortion function for the monostatic ISAC system is the optimal solution to the following optimization problem:
(3) where the joint distribution of is given by for some pmf .
Note that the constraint term in problem (3) is also linear with respect to the optimization variable , since the term is independent of .
Lemma 4
(Corollary 2, [12]). For the bistatic ISAC system, the capacity–distortion function for the case where forms a Markov chain is the optimal solution to the following optimization problem:
(4) where the joint distribution of is given by for some pmf .
It is observed that the constraint term in the optimization problem (4) remains non-convex with respect to the optimization variable , which poses a challenge to solving the problem. In addition, note that the Markov chain in Lemma 4 differs from the Markov chain in Lemma 2, which stems from the different settings for the sensing parameters in the bistatic ISAC model in [12] and Figure 2. In one model, the sensing target is the ComRx, while in the other, it is a target independent of the ComRx. Nevertheless, there is no fundamental difference between their information-theoretic models. Therefore, to facilitate subsequent comparison with the theoretical results in [12], we adopt the model and theorem results proposed in [12] when the distortion metric is log-loss.
3. Extended AB Algorithm for Bistatic ISAC
Before introducing the extended AB algorithm, we review the classical AB algorithm, which was originally developed to compute the channel capacity of discrete memoryless point-to-point channels. For a discrete memoryless point-to-point channel with input X and output Y, its channel capacity is given by
$$C = \max_{p(x)} I(X;Y), \qquad (5)$$
where $p(x)$ is the input distribution. Based on the mutual information formula
$$I(X;Y) = \sum_{x,y} p(x)\, p(y|x) \log \frac{p(x|y)}{p(x)},$$
consider the expression
$$J(p, q) = \sum_{x,y} p(x)\, p(y|x) \log \frac{q(x|y)}{p(x)},$$
where the posterior $p(x|y)$ is replaced by a new variable $q(x|y)$. The following result serves as a starting point of the classical AB algorithm.
Lemma 5
(Theorem 1, [14]). The following properties hold:
- (a)
$C = \max_{p} \max_{q} J(p, q)$.
- (b)
For fixed $p$, the function $J(p, q)$ is maximized by $q(x|y) = \frac{p(x)\, p(y|x)}{\sum_{x'} p(x')\, p(y|x')}$.
- (c)
For fixed $q$, the function $J(p, q)$ is maximized by
$$p(x) = \frac{\exp\left(\sum_{y} p(y|x) \log q(x|y)\right)}{\sum_{x'} \exp\left(\sum_{y} p(y|x') \log q(x'|y)\right)}.$$
Based on the above result, the mutual information maximization problem (5) can be transformed into two maximization subproblems, where each subproblem admits a closed-form update. This leads to the development of the AB algorithm, which performs an alternating update of the two subproblems. It is worth noting that the AB algorithm is designed for discrete distributions. For continuous distributions, it is usually necessary to convert them into discrete distributions through quantization before applying the AB algorithm.
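The alternating structure described above can be sketched in code. The two closed-form updates used here are the standard AB updates from [14] (an assumption consistent with Lemma 5; the channel matrix and tolerances are illustrative):

```python
import numpy as np

def ab_capacity(W, n_iter=2000, tol=1e-12):
    """Classical AB algorithm for a DMC with row-stochastic W[x, y] = p(y|x).

    Alternates the two closed-form updates:
      q(x|y) <- p(x) W[x, y] / sum_x' p(x') W[x', y]
      p(x)   <- exp(sum_y W[x, y] log q(x|y)) / (normalizer)
    Returns the capacity estimate in bits.
    """
    nx, ny = W.shape
    p = np.full(nx, 1.0 / nx)
    for _ in range(n_iter):
        py = p @ W                                  # output marginal
        q = (p[:, None] * W) / np.where(py > 0, py, 1.0)[None, :]
        # exp(sum_y W log q), with 0 * log 0 treated as 0
        logq = np.where(W > 0, np.log(np.where(q > 0, q, 1.0)), 0.0)
        r = np.exp(np.sum(W * logq, axis=1))
        p_new = r / r.sum()
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    py = p @ W
    joint = p[:, None] * W
    mask = joint > 0
    ratio = W / np.where(py > 0, py, 1.0)[None, :]
    return float(np.sum(joint[mask] * np.log2(ratio[mask])))
```

For a binary symmetric channel with crossover probability 0.1, the routine recovers the closed-form capacity 1 - h(0.1) to numerical precision.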
In the following, we focus on solving the optimization problems (2) and (4). Recalling the optimization problems for the monostatic ISAC system in Lemmas 1 and 3, where the constraint terms are linear in the optimization variable and the objective function is concave with respect to the optimization variable, the AB algorithm can be applied to derive the result in [6] (Theorem 4), where for each fixed multiplier introduced by the distortion constraint, a rate and distortion pair is derived, and the capacity–distortion function is obtained by traversing the multiplier. However, for the bistatic ISAC system, as previously discussed, the non-convexity of both the constraint terms and the objective function prevents the direct application of the AB algorithm to obtain an analogous algorithm. Fortunately, by introducing a special free variable ( in (10) or in (16)), we can transform this complex optimization problem into a bivariate optimization problem in which one subproblem is a linearly constrained mutual information maximization, which can then be solved using the AB algorithm.
3.1. Algorithm for Optimization Problem (2)
In this subsection, we focus on solving the optimization problem (2). For convenience, we omit the subscripts in the probability distribution and replace with in the following derivation. In addition, all summation symbols without an explicit index denote summation over all variables appearing in the expression.
Note that the non-convex constraint in the optimization problem constitutes the key obstacle to solving this problem using the existing AB algorithm. To address this issue, we introduce a new variable that transforms the non-convex constraint into a linear constraint, and the corresponding optimization problem with the linear constraint is as follows:
| (6) |
For the optimization problem (6), we have the following result.
Theorem 1.
The optimal solution of the original optimization problem (2) is the same as that of the optimization problem (6).
Proof of Theorem 1.
The objective function in the original optimization problem (2) is as follows:
By comparing the original optimization problem (2) with the new optimization problem (6), we observe that the feasible set of the problem (2) is a subset of the feasible set of the problem (6). Therefore, the maximum value of the objective function in the optimization problem (2) is less than or equal to that in the problem (6). On the other hand, the Lagrangian function corresponding to the optimization problem (6) is
where and are multipliers introduced for the constraints. Denote the optimal solution of the optimization problem (6) by . According to the Karush–Kuhn–Tucker (KKT) condition, we have . Thus, we get
i.e., . Given that the estimator satisfies , it follows that is also a feasible solution to the original optimization problem (2), thereby completing the proof. □
In the following, we proceed to solve optimization problem (6). Building upon the derivation of Theorem 1, we obtain the optimal form of the variable . For fixed , the problem reduces to a mutual information maximization with linear constraints, which can be reformulated as a bilevel optimization problem by applying the framework of the AB algorithm. Specifically, we define
| (7) |
which is a concave function of the optimization variable for fixed and . Furthermore, we obtain that
where the equality holds when and . This means that . Therefore, the original optimization problem (2) is equivalent to the following optimization problem:
| (8) |
where the optimal satisfies
| (9) |
and the optimal satisfies
| (10) |
It is observed from (10) that the optimal value of the auxiliary variable corresponds to the minimum mean SE (MMSE) estimator under the given distribution . Next, we derive the optimal variable p of problem (8). The Lagrangian function corresponding to the optimization problem (8) is
| (11) |
where and are multipliers introduced for the constraints. By setting the gradient of (11) with respect to the optimization variable to zero, we get
| (12) |
where
and . Furthermore, since the capacity–distortion function of the system is monotonically non-decreasing with respect to the distortion D [11] (Lemma 1), the optimal solution of the optimization problem must be found on the boundary of the distortion constraint set. In other words, the distortion constraint is satisfied in an equality form at the optimal solution. Therefore, satisfies
| (13) |
Let ; then, we get
where the last inequality holds according to the Cauchy–Schwarz inequality. Thus, the equation has a unique solution when D is achievable. Furthermore, based on the analysis in [11], we obtain that the minimum achievable distortion is . This corresponds to the case where the sent information is fully decoded at the SenRx to assist in estimation, yielding a minimum distortion identical to that of a monostatic ISAC system. The maximum achievable distortion is , where the input distribution is selected to maximize the communication rate . In this case, the SenRx does not perform decoding and instead uses the received signal directly to estimate the state, thereby maximizing the estimation error while achieving the same maximum communication rate as a monostatic ISAC system. Therefore, we obtain that the achievable distortion range is . Based on the derivation above, we present the proposed extended AB algorithm for optimization problem (2) in Algorithm 1, where the original variable p and the additionally introduced variables q and c are updated in closed form.
| Algorithm 1 Extended AB algorithm for the optimization problem (2) |
It is worth noting that the newly introduced variable in our optimization problem effectively replaces the estimator in the constraint; a similar substitution is also performed for the case of log-loss distortion, as demonstrated later. This substitution is motivated by two key reasons:
- Non-convexity elimination: The non-convexity of the constraints, caused by the non-convexity of the estimator with respect to the optimization variable, is mitigated by relaxing the estimator into a free variable, thereby transforming the non-convex constraints into linear ones.
- Optimality preservation: The optimality of the estimator ensures that the update expression for this free variable exactly matches the form of the original estimator, as shown in (10), guaranteeing both solution quality and algorithm convergence.
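The optimality-preservation point can be checked numerically: as noted around (10), the optimal auxiliary variable is the MMSE (conditional-mean) estimator, and perturbing it in any direction can only increase the expected SE. A small sketch with an illustrative joint pmf (all names and dimensions are hypothetical):

```python
import numpy as np

def mmse_estimator(p_sz, s_vals):
    # Conditional-mean estimator c(z) = E[S | Z = z] for a joint pmf
    # p_sz[s_index, z_index] over state values s_vals: the form the
    # text identifies as the closed-form update of the auxiliary variable.
    p_z = p_sz.sum(axis=0)
    return (s_vals @ p_sz) / p_z

def expected_se(p_sz, s_vals, c):
    # E[(S - c(Z))^2] under the joint pmf.
    return float(np.sum(p_sz * (s_vals[:, None] - c[None, :]) ** 2))

# Toy joint pmf over 3 state values and 4 observations (illustrative).
rng = np.random.default_rng(0)
p_sz = rng.random((3, 4)); p_sz /= p_sz.sum()
s_vals = np.array([-1.0, 0.0, 1.0])
c_star = mmse_estimator(p_sz, s_vals)
base = expected_se(p_sz, s_vals, c_star)
# Perturbing the estimator in any direction cannot reduce the distortion.
assert all(expected_se(p_sz, s_vals, c_star + dc) >= base - 1e-12
           for dc in rng.normal(size=(50, 4)))
```

The underlying identity is the orthogonality principle: the extra distortion of a perturbed estimator decomposes as the base distortion plus a weighted sum of squared perturbations.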
Remark 1.
For the Gaussian channel model with a power constraint for the bistatic ISAC system, the corresponding optimization problem with an SE distortion constraint and a power constraint is
(14) For this optimization problem, Algorithm 1 can be applied by replacing Step 4 with solving a set of equations to determine the multipliers λ and μ introduced due to the distortion and power constraints. In addition, the updated in Step 5 is based on the following form:
Remark 2.
Note that the proposed algorithm does not require traversing the multiplier and can directly calculate the corresponding communication rate for a given distortion, which remedies a drawback of classic AB-type algorithms, including the one used to calculate the capacity–distortion trade-off of a monostatic ISAC system [6] (Theorem 4).
3.2. Algorithm for Optimization Problem (4)
In this subsection, we apply the proposed framework to solve the optimization problem involving a log-loss distortion constraint. Following the framework discussed in the previous subsection, we replace the estimator in the optimization problem (4) with a free function . Then, similar to the results above, we get that the optimal solution of the original optimization problem (4) is the same as that of the following optimization problem:
In addition, the optimal satisfies
| (15) |
the optimal satisfies
| (16) |
and the optimal satisfies
| (17) |
where , , and satisfies
| (18) |
Similar to the analysis in the previous section, we obtain that the equation has a unique solution when D is achievable. In addition, the minimum achievable distortion is , and the maximum achievable distortion is , where the input distribution is selected to maximize the communication rate . Thus, we obtain the extended AB algorithm in Algorithm 2 to solve the optimization problem (4); the details are omitted for brevity.
| Algorithm 2 Extended AB algorithm for the optimization problem (4) |
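The log-loss counterpart of the MMSE property can be illustrated numerically: among all soft estimators q(s|z), the true posterior p(s|z) minimizes the expected log-loss, and the minimum equals the conditional entropy H(S|Z) (Gibbs' inequality). A small sketch with a toy joint pmf (illustrative values only):

```python
import numpy as np

def posterior(p_sz):
    # Soft estimator q(s|z) = p(s|z): the posterior of the state given
    # the observation, computed from the joint pmf p_sz[s, z].
    return p_sz / p_sz.sum(axis=0, keepdims=True)

def expected_log_loss(p_sz, q):
    # E[-log2 q(S|Z)] under the joint pmf p_sz.
    return float(-np.sum(p_sz * np.log2(q)))

# The posterior attains H(S|Z); any other soft estimator pays an extra
# KL-divergence penalty, per Gibbs' inequality.
rng = np.random.default_rng(1)
p_sz = rng.random((3, 4)); p_sz /= p_sz.sum()
q_star = posterior(p_sz)
h_cond = expected_log_loss(p_sz, q_star)   # equals H(S|Z) in bits
```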
3.3. Convergence Analysis
In this subsection, we primarily demonstrate the convergence of Algorithm 1. Let and be the values of the variables and at the -th iteration of the algorithm, respectively. We have the following result for the function values in (7) generated in the iterations.
Theorem 2.
The function values generated in the iterations of Algorithm 1 are monotonically non-decreasing and bounded, which satisfy .
Proof of Theorem 2.
For convenience, we omit the variables corresponding to the probability distribution when there is no ambiguity. Recalling the definition of in (7) and the update expression of q in (9), we get
where refers to the conditional density function generated by , i.e., , and the others are similar.
According to the update expression of in (12), we obtain that
Due to the fact that satisfies , we have
Therefore, we get
where the last inequality holds according to the fact that is the optimal estimator when the corresponding input distribution is , i.e., its corresponding distortion is minimized. Furthermore, we have , which completes the proof. □
Unfortunately, due to the non-convex nature of the optimization problem, whether this algorithm can converge to the global optimal solution remains an open question.
4. Extension to Lossy Source Coding with Side Information at Decoder
The optimization problem formulated for the bistatic ISAC system has a similar form to the lossy source coding problem with side information available at the decoder. Given this structural similarity, our proposed algorithm for the bistatic ISAC problem can be adapted to solve this related source coding problem. Consider the problem of source coding, where the decoder has access to side information about the source. As shown in Figure 4, X is a source, Z is a compressed version of source X, and the decoder produces an estimate of the source X based on Z and the side information Y related to source X. The system aims to minimize the compression rate R from X to Z while ensuring that the distortion between X and remains less than a given value. For this setting, the following result holds.
Figure 4. Source coding with side information at the decoder (see Figure 2 in [19]).
Lemma 6
(Theorem 1, [19]). The rate-distortion function for the above system is the optimal solution to the following optimization problem:
(19) where the joint distribution of is given by .
For the optimization problem (19), if SE is selected as the distortion function, the corresponding optimization problem is shown in (20) as
| (20) |
where
It is observed that the constraint is non-convex with respect to the optimization variable, distinguishing it from the classic source coding problem, which renders the traditional AB algorithm for rate-distortion problems inapplicable. Given its structural similarity to the optimization problem (2), the proposed extended AB algorithm can be employed to solve it. The corresponding procedure is provided in Appendix A.
Furthermore, if log-loss is selected as the distortion function, the corresponding optimization problem is shown in (21) as
| (21) |
A variation of this problem is the optimization problem with mutual information constraints, which is shown as follows:
| (22) |
and the two optimization problems are equivalent when . The problem (22) resembles the information bottleneck problem and thus can be solved by leveraging related optimization algorithms [20,21,22].
5. Numerical Simulations
In this section, we evaluate the performance of the proposed algorithms for bistatic ISAC systems and the source coding problem with side information, respectively. To facilitate comparison with existing results, we set the base of the logarithmic function to 2 in the simulations. To apply the extended AB algorithm to a continuous distribution, discretization is necessary. In the subsequent simulations, we fix the input power at dB and uniformly discretize the input X over the range , where q is the step size. For a Gaussian distribution , we truncate its range to the interval and quantize it to . Furthermore, the multipliers are solved for directly using the fsolve function from the Optimization Toolbox in MATLAB R2023b.
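As an illustration of the discretization step, the following sketch quantizes a zero-mean Gaussian onto a uniform grid with a given step size and renormalizes the truncated pmf. The truncation width of four standard deviations is an assumption of this sketch, not necessarily the interval used in the paper:

```python
import numpy as np

def quantize_gaussian(sigma, step, k=4.0):
    """Discretize a zero-mean Gaussian N(0, sigma^2) onto a uniform grid,
    as required before running an AB-type algorithm on a continuous model.

    k is the truncation width in standard deviations (an assumption here).
    Returns the grid points and the renormalized pmf.
    """
    grid = np.arange(-k * sigma, k * sigma + step / 2, step)
    pdf = np.exp(-grid ** 2 / (2 * sigma ** 2))
    pmf = pdf / pdf.sum()     # renormalize so the truncated pmf sums to 1
    return grid, pmf
```

A finer step approximates the continuous model more closely at the cost of larger alphabets, and hence higher AB complexity, which matches the trade-off discussed around Figure 5.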
5.1. State-Sum Gaussian Channel for ISAC Systems
Consider an additive Gaussian channel model with a power constraint for the bistatic ISAC system. The channel from the ISAC Tx to the ComRx is and the channel from the ISAC Tx to the SenRx is , where S, , and are independently generated from , and . For this channel model with , we have the following quantization schemes: the input X is quantized as , the state S is quantized as , and both noise terms and are quantized as . Consequently, the output Y is quantized to , and the quantized output Z shares the same alphabet as . Furthermore, Algorithm 1 indicates that the computational complexity of the algorithm is of order , where denotes the size of the alphabet of variable X. Under the given channel model and the quantization setup, this corresponds to an approximate complexity of .
In Figure 5, we evaluate the impact of the quantization step size q on the value of the rate for a given distortion. It is observed that a smaller q (i.e., a finer discretization grid) leads to a larger rate value. However, considering the computational complexity of the algorithm and hardware limitations, we set in the subsequent simulations.
Figure 5. The impact of quantization step size on the value of rate.
By applying the proposed extended AB algorithm to the discretized model, we obtained the rate-distortion functions for a monostatic ISAC system and a bistatic ISAC system shown in Figure 6. In a monostatic ISAC system, the estimator is aware of X, resulting in the distortion being , which is independent of the distribution of X. On the other hand, the rate is , which reaches a maximum value when . As illustrated in Figure 6, the capacity–distortion function for the monostatic ISAC system is located to the upper left of the capacity–distortion function of the bistatic system, indicating that the system suffers from a performance loss when the estimator lacks knowledge of the sent information.
Figure 6. Rate-distortion function of different ISAC systems with SE distortion constraint.
For the case where the distortion metric is log-loss, we replace the channel from the ISAC Tx to the ComRx in the additive Gaussian channel model above with to be consistent with the model in [12]. For this additive Gaussian channel model, the authors of [12] presented a closed-form solution under the condition that . Additionally, they provided a lower bound for the optimal solution when .
Figure 7 depicts the rate-distortion functions of the bistatic ISAC system with the log-loss distortion constraint under different parameters. Specifically, the channel parameters satisfy , , and , and for the first case and for the second case. It is observed that the rate-distortion function curve obtained by the algorithm aligns well with the theoretical value for the first case, and the obtained rate-distortion function curve outperforms the theoretical lower bound [12] for the second case. In summary, the effectiveness of the proposed algorithm is verified.
Figure 7. Rate-distortion function of the bistatic ISAC system with log-loss distortion constraint.
5.2. State-Product Gaussian Channel for ISAC Systems
Consider a real Gaussian channel with Rayleigh fading, which incorporates a multiplicative Gaussian state. Specifically, the channel from the ISAC Tx to the ComRx is , and the channel from the ISAC Tx to the SenRx is , where S, , and are independently generated from , and . For this channel model with , we adopt a quantization scheme similar to that used for the state-sum Gaussian channel, except that both noise terms and are quantized to due to the multiplicative nature of the channel. The resulting algorithmic complexity is approximately .
In Figure 8, we evaluate the impact of quantization step size q on the value of rate for a given distortion. Considering the computational complexity of the algorithm and hardware limitations, we set in the subsequent simulations.
Figure 8. The impact of quantization step size on the value of rate.
In Figure 9, we compare the capacity–distortion functions of the state-product Gaussian channel model for a monostatic ISAC system, a bistatic ISAC system, and the time-sharing (TS) schemes. The TS scheme refers to independent communication and sensing in a time-sharing manner. For the monostatic ISAC system, we plotted its capacity–distortion function by exploiting the AB-type algorithm proposed in [6]. For the bistatic ISAC system, the SenRx estimates the state S from the received signal without knowledge of the sent signal X. We plotted its capacity–distortion function according to the proposed extended AB algorithm. In addition, for both monostatic and bistatic ISAC systems, the input distribution of the endpoints in the TS scheme is identical. In this scheme, the lower-left endpoint is achieved via 2-ary pulse amplitude modulation (PAM), while the upper-right endpoint is realized with a Gaussian input . As illustrated in Figure 9, the capacity–distortion function for the monostatic ISAC system is located to the upper left of the capacity–distortion function of the bistatic system. The performance degradation in the bistatic configurations stems from the spatial separation of the transmitter and receiver. This separation deprives the estimator of the direct knowledge of the transmitted signal, which is inherently available in a monostatic system. The simulations were initialized with a random matrix and stopped when the 2-norm difference between successive input distribution matrices was less than . Furthermore, it was observed that the results from multiple trials were nearly consistent, with rate value discrepancies not exceeding , despite a completely random assignment of initial values. This consistency confirms the weak dependence of the simulation results on the chosen initial conditions.
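The TS scheme corresponds to a straight line between two achievable operating points; a minimal sketch (the endpoint values below are illustrative, not the simulated ones):

```python
def time_sharing(point_a, point_b, theta):
    # Use scheme A a fraction theta of the time and scheme B otherwise;
    # the achieved (rate, distortion) pair is the convex combination of
    # the two operating points, tracing the chord between them.
    (r_a, d_a), (r_b, d_b) = point_a, point_b
    return (theta * r_a + (1 - theta) * r_b,
            theta * d_a + (1 - theta) * d_b)
```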
Figure 9. Capacity–distortion functions for the state-product Gaussian channel with SE distortion constraint.
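The random initialization and 2-norm stopping rule described above follow the standard AB template. As a minimal illustration of that loop, the sketch below runs the classic Arimoto–Blahut capacity iteration for a plain discrete memoryless channel (not the full ISAC objective with its distortion constraint), using exactly this initialization and stopping criterion:

```python
import numpy as np

def ab_capacity(W, tol=1e-6, max_iter=10000, rng=None):
    """Classic Arimoto-Blahut capacity iteration for a DMC with transition
    matrix W (|X| x |Y|), using a random initial input distribution and the
    2-norm stopping rule on successive iterates described in the text."""
    rng = np.random.default_rng(rng)
    p = rng.random(W.shape[0])
    p /= p.sum()                               # random initial input pmf
    for _ in range(max_iter):
        q = p @ W                              # induced output marginal q(y)
        # KL divergence D(W(.|x) || q) for each input symbol x
        d = np.sum(W * np.log(np.maximum(W, 1e-300) / np.maximum(q, 1e-300)),
                   axis=1)
        p_new = p * np.exp(d)
        p_new /= p_new.sum()                   # Arimoto update of the input pmf
        if np.linalg.norm(p_new - p) < tol:    # 2-norm difference criterion
            p = p_new
            break
        p = p_new
    q = p @ W
    C = np.sum(p * np.sum(W * np.log2(np.maximum(W, 1e-300)
                                      / np.maximum(q, 1e-300)), axis=1))
    return C, p                                # capacity in bits, optimal input
```

Because the iteration converges to the same fixed point from any strictly positive start, repeated trials with different random seeds agree closely, mirroring the weak initialization dependence observed in the simulations.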
5.3. Source Coding Problem with Side Information
In this subsection, we verify the effectiveness of our algorithm for the source coding problem in Section 4. Specifically, we consider the cases of Gaussian sources, uniform sources, and mixed Gaussian sources. For Gaussian sources, let the source X and the noise N be independent Gaussian random variables generating the side information. The authors of [19] provided a closed-form expression for the optimal solution as follows:
| $R(D) = \max\left\{0, \tfrac{1}{2}\log\frac{\sigma_{X|Y}^{2}}{D}\right\}$ | (23) |
where $\sigma_{X|Y}^{2}$ denotes the conditional variance of the source X given the side information Y.
In Figure 10, we compare the proposed extended AB algorithm with the theoretical value in (23) for a Gaussian source. It is observed that the rate-distortion curve obtained by the algorithm is consistent with the theoretical value, which verifies the effectiveness of the algorithm.
Figure 10. Rate-distortion function of the Gaussian source.
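For reference, the closed-form curve of [19] in (23) can be evaluated directly. Since the specific variances used in the simulation are not reproduced here, `sigma_x2` and `sigma_n2` below are illustrative placeholders, and the additive side-information model Y = X + N is assumed:

```python
import numpy as np

def wz_rate(D, sigma_x2=1.0, sigma_n2=1.0):
    """Wyner-Ziv rate-distortion function (23) for a Gaussian source X with
    side information Y = X + N (X, N independent Gaussians); the variances
    here are illustrative placeholders, not the paper's simulation values."""
    # conditional variance of X given Y for jointly Gaussian (X, Y)
    var_cond = sigma_x2 * sigma_n2 / (sigma_x2 + sigma_n2)
    # (1/2) log+ of var_cond / D, in bits
    return np.maximum(0.0, 0.5 * np.log2(var_cond / D))
```

The curve vanishes once D reaches the conditional variance, since the decoder can then meet the distortion target from the side information alone.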
In the following, we consider the cases where the source is uniformly distributed or follows a mixed Gaussian distribution. As before, the side information is generated from the source X and an independent noise N, whose parameters are fixed for convenience.
Figure 11 illustrates the rate-distortion functions of uniform sources X distributed over symmetric intervals of different lengths. It is observed that, under this model setting, the rate-distortion function of a uniform source depends only on its interval length. Moreover, the longer the interval, the further to the right the corresponding rate-distortion curve lies.
Figure 11. Rate-distortion function of the uniform source.
Figure 12 depicts the rate-distortion functions of mixed Gaussian sources, each composed of two Gaussian distributions combined with different weights. It is observed that, from left to right in Figure 12, as the weight of one Gaussian component in the mixture increases, the corresponding rate-distortion curve gradually approaches that of the single Gaussian distribution, which is consistent with our expectations.
Figure 12. Rate-distortion function of the mixed Gaussian source.
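A mixed Gaussian source on a quantized grid can be built as a weighted sum of two Gaussian kernels, with the weight sweeping from one component to the other as in Figure 12. The component means and variances below are illustrative placeholders, not the paper's values:

```python
import numpy as np

def mixed_gaussian_pmf(grid, w, m1=0.0, s1=1.0, m2=3.0, s2=0.5):
    """pmf of a two-component Gaussian mixture on a quantized grid.
    The weight w blends the components; means/variances are placeholders."""
    def kernel(x, m, s):
        return np.exp(-(x - m) ** 2 / (2 * s ** 2))
    p = w * kernel(grid, m1, s1) + (1 - w) * kernel(grid, m2, s2)
    return p / p.sum()   # normalize so the grid pmf sums to one
```

As w approaches 1, the pmf collapses onto the first Gaussian component, which matches the observed convergence of the mixed-source rate-distortion curves toward the single-Gaussian curve.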
6. Conclusions
In this paper, we proposed an extended AB framework to compute capacity–distortion functions for bistatic ISAC systems. Specifically, for the corresponding non-convex optimization problems with SE and log-loss distortion constraints, we introduced auxiliary variables to transform the original non-convex constraints into linear forms, while ensuring that the reformulated linearly constrained optimization problems maintain the same optimal solution as the original problems. Building on the AB algorithm framework, we then developed extended AB algorithms for the reformulated optimization problems that retain closed-form variable updates while overcoming the limitations of handling non-convex constraints. Furthermore, we extended the proposed algorithm to solve lossy source coding problems with side information. Numerical simulations validate the effectiveness of the proposed algorithms and provide a visual comparison of performance in bistatic versus monostatic ISAC systems.
Appendix A. Algorithm for Optimization Problem in Source Coding with Side Information
In the following, we apply the proposed framework to solve the optimization problem (20). Specifically, following the idea of the AB algorithm, we write the objective function of the original optimization problem (20) as
and define
| (A1) |
which is a convex function of the optimization variable when the remaining variables are fixed. Furthermore, we introduce a free variable to replace the estimator. Then, similar to the analysis in Section 3, we conclude that the optimal solution of the original optimization problem coincides with that of the optimization problem
In addition, the optimal satisfies
| (A2) |
the optimal satisfies
| (A3) |
and the optimal satisfies
| (A4) |
where , , and satisfies
| (A5) |
Thus, an extended AB algorithm is developed to solve the optimization problem for source coding with side information, as detailed in Algorithm A1.
Algorithm A1. Extended AB algorithm for the optimization problem (20).
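Although the updates (A2)–(A5) are specific to problem (20), Algorithm A1 follows the alternating pattern of the classic Blahut rate-distortion iteration. The skeleton below shows that pattern for plain rate-distortion with a fixed Lagrange multiplier; it omits the side-information terms specific to (20), so it is an orientation sketch rather than Algorithm A1 itself:

```python
import numpy as np

def blahut_rd(p_x, d, slope, tol=1e-8, max_iter=5000):
    """Classic Blahut rate-distortion iteration: alternate the closed-form
    updates of the reproduction marginal q and the test channel P for a fixed
    Lagrange multiplier `slope`. Structural skeleton only; the side-information
    terms of Algorithm A1 are omitted."""
    n, m = d.shape
    q = np.full(m, 1.0 / m)                    # initial reproduction marginal
    for _ in range(max_iter):
        A = q * np.exp(-slope * d)             # unnormalized test channel
        A /= A.sum(axis=1, keepdims=True)      # closed-form P(x_hat | x) update
        q_new = p_x @ A                        # closed-form marginal update
        if np.linalg.norm(q_new - q) < tol:    # stop when the marginal settles
            q = q_new
            break
        q = q_new
    P = q * np.exp(-slope * d)
    P /= P.sum(axis=1, keepdims=True)
    D = np.sum(p_x[:, None] * P * d)           # achieved distortion
    R = np.sum(p_x[:, None] * P * np.log2(np.maximum(P, 1e-300)
                                          / np.maximum(q, 1e-300)))
    return R, D                                # one point on the R(D) curve
```

Sweeping `slope` traces out the rate-distortion curve point by point, which is how the curves in the preceding figures are generated in the AB framework.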
Author Contributions
Methodology and analysis, T.J.; software and visualization, T.J. and Y.G.; validation, T.J., Y.G., Z.W. and Z.Y.; writing—original draft preparation, T.J.; writing—review and editing, T.J., Y.G., Z.W. and Z.Y. All authors have read and agreed to the published version of the manuscript.
Institutional Review Board Statement
Not applicable.
Data Availability Statement
The original contributions presented in the study are included in the article; further inquiries can be directed to the first author.
Conflicts of Interest
All authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as potential conflicts of interest.
Funding Statement
The work of Z. Yang was supported by the National Key R&D Program of China (Grant No. 2023YFA1008600). The work of Y. Geng was supported by the National Key R&D Program of China (Grant No. 2021YFA1000500).
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
References
- 1.Xiao Z., Zeng Y. Waveform design and performance analysis for full-duplex integrated sensing and communication. IEEE J. Sel. Areas Commun. 2022;40:1823–1837. doi: 10.1109/JSAC.2022.3155509.
- 2.Gaudio L., Kobayashi M., Caire G., Colavolpe G. On the effectiveness of OTFS for joint radar parameter estimation and communication. IEEE Trans. Wireless Commun. 2020;19:5951–5965. doi: 10.1109/TWC.2020.2998583.
- 3.Gao Z., Wan Z., Zheng D., Tan S., Masouros C., Ng D.W.K., Chen S. Integrated sensing and communication with mmWave massive MIMO: A compressed sampling perspective. IEEE Trans. Wireless Commun. 2023;22:1745–1762. doi: 10.1109/TWC.2022.3206614.
- 4.Elbir A.M., Mishra K.V., Shankar M.B., Chatzinotas S. The rise of intelligent reflecting surfaces in integrated sensing and communications paradigms. IEEE Netw. 2022;37:224–231. doi: 10.1109/MNET.128.2200446.
- 5.Sankar R.P., Chepuri S.P., Eldar Y.C. Beamforming in integrated sensing and communication systems with reconfigurable intelligent surfaces. IEEE Trans. Wireless Commun. 2024;23:4017–4031. doi: 10.1109/TWC.2023.3313938.
- 6.Ahmadipour M., Kobayashi M., Wigger M., Caire G. An information-theoretic approach to joint sensing and communication. IEEE Trans. Inf. Theory. 2024;70:1124–1146. doi: 10.1109/TIT.2022.3176139.
- 7.Xiong Y., Liu F., Cui Y., Yuan W., Han T.X., Caire G. On the fundamental tradeoff of integrated sensing and communications under Gaussian channels. IEEE Trans. Inf. Theory. 2023;69:5723–5751. doi: 10.1109/TIT.2023.3284449.
- 8.Liu F., Xiong Y., Wan K., Han T.X., Caire G. Deterministic-random tradeoff of integrated sensing and communications in Gaussian channels: A rate-distortion perspective. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Taipei, Taiwan, 25–30 June 2023; pp. 2326–2331.
- 9.Liu Y., Li M., Liu A., Lu J., Han T.X. Information-theoretic limits of integrated sensing and communication with correlated sensing and channel states for vehicular networks. IEEE Trans. Veh. Technol. 2022;71:10161–10166. doi: 10.1109/TVT.2022.3179869.
- 10.Ahmadipour M., Wigger M., Shamai S. Strong converses for memoryless bi-static ISAC. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Taipei, Taiwan, 25–30 June 2023; pp. 1818–1823.
- 11.Jiao T., Wan K., Wei Z., Geng Y., Li Y., Yang Z., Caire G. Information-theoretic limits of bistatic integrated sensing and communication. IEEE Trans. Inf. Theory. 2025;71:9302–9318. doi: 10.1109/TIT.2025.3621465.
- 12.Chen J., Yu L., Li Y., Shi W., Ge Y., Tong W. On the fundamental limits of integrated sensing and communications under logarithmic loss. arXiv 2025, arXiv:2502.08502. doi: 10.1109/TCOMM.2025.3624157.
- 13.Arimoto S. An algorithm for computing the capacity of arbitrary discrete memoryless channels. IEEE Trans. Inf. Theory. 1972;18:14–20. doi: 10.1109/TIT.1972.1054753.
- 14.Blahut R. Computation of channel capacity and rate-distortion functions. IEEE Trans. Inf. Theory. 1972;18:460–473. doi: 10.1109/TIT.1972.1054855.
- 15.Yasui K., Matsushima T. Toward computing the capacity region of degraded broadcast channel. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Austin, TX, USA, 13–18 June 2010; pp. 570–574.
- 16.Liu Y., Geng Y. Blahut–Arimoto algorithms for computing capacity bounds of broadcast channels. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Espoo, Finland, 26 June–1 July 2022; pp. 1145–1150.
- 17.Dou Y., Liu Y., Niu X., Bai B., Han W., Geng Y. Blahut–Arimoto algorithms for inner and outer bounds on capacity regions of broadcast channels. Entropy. 2024;26:178. doi: 10.3390/e26030178.
- 18.Joudeh H., Caire G. Joint communication and state sensing under logarithmic loss. In Proceedings of the IEEE International Symposium on Joint Communications & Sensing (JC&S), Leuven, Belgium, 19–21 March 2024; pp. 1–6.
- 19.Wyner A., Ziv J. The rate-distortion function for source coding with side information at the decoder. IEEE Trans. Inf. Theory. 1976;22:1–10. doi: 10.1109/TIT.1976.1055508.
- 20.Goldfeld Z., Polyanskiy Y. The information bottleneck problem and its applications in machine learning. IEEE J. Sel. Areas Inf. Theory. 2020;1:19–38. doi: 10.1109/JSAIT.2020.2991561.
- 21.Huang T.H., El Gamal A. A provably convergent information bottleneck solution via ADMM. In Proceedings of the IEEE International Symposium on Information Theory (ISIT), Melbourne, Australia, 12–20 July 2021; pp. 43–48.
- 22.Chen L., Wu S., Ye J., Wu H., Zhang W., Wu H. Efficient and provably convergent computation of information bottleneck: A semi-relaxed approach. In Proceedings of the IEEE International Conference on Communications (ICC), Denver, CO, USA, 9–13 June 2024; pp. 1637–1642.