Med. Phys. 38(12), 6603–6609 (2011). doi: 10.1118/1.3660200

Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment

Bowen Meng, Guillem Pratx, and Lei Xing
PMCID: PMC3247927  PMID: 22149842

Abstract

Purpose: Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT/CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment.

Methods: In this work, we accelerated the Feldkamp–Davis–Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scan were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function was used to aggregate the partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and of an acquired CatPhan 600 phantom was performed in a commercial cloud computing environment using the proposed 4D CBCT/CT reconstruction algorithm.

Results: The speedup in reconstruction time was found to be roughly linear in the number of nodes employed. For instance, a greater than tenfold speedup was achieved using 200 nodes for all cases, compared with the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. The root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10⁻⁷. Our study also demonstrated that cloud computing with MapReduce is fault tolerant: the reconstruction completed successfully with identical results even when half of the nodes were manually terminated in the middle of the process.

Conclusions: An ultrafast, reliable and scalable 4D CBCT/CT reconstruction method was developed using the MapReduce framework. Unlike other parallel computing approaches, the parallelization and speedup required little modification of the original reconstruction code. MapReduce provides an efficient and fault tolerant means of solving large-scale computing problems in a cloud computing environment.

Keywords: CBCT, 4DCT, MapReduce, cloud computing, FDK

INTRODUCTION

Four-dimensional CT (4DCT) and cone beam CT (CBCT) are used clinically in radiation therapy for accurate and updated tumor target definition and localization.1, 2 Long reconstruction times in CBCT and 4DCT give rise to the need for a fast and reliable technique for various applications in treatment planning.3 The computational complexity, which scales linearly with the volume size, the number of phases, and the number of projections per phase, makes single-threaded processing impractical. Moreover, the rapidly increasing size of CT projection data makes it challenging to perform reconstruction on a local machine with limited hardware capacity. Accordingly, practical 4D CBCT reconstruction is inevitably shifting to distributed and parallel architectures for higher efficiency.

Some of the cloud computing technologies used to solve Internet-scale problems can be applied to computational problems in medical physics.4, 5, 6 MapReduce,7 developed at Google (Google, Inc., Mountain View, CA) for reliable computing on large-scale parallel architectures, is designed to facilitate the development of data processing applications for massively parallel platforms, such as a cloud computing environment. Cloud computing itself is the result of recent developments in virtualization technology and years of research in distributed computing. Virtualization software abstracts the underlying hardware (such as servers, storage, and networking) and allows nodes to be created on demand with flexible specification of hardware parameters, such as the number of processors, memory, disk size, and operating system.

The most powerful feature of MapReduce is its simplicity. Existing codes can be naturally translated into the Map/Reduce paradigm, and developers can schedule many computer nodes on demand without worrying about the underlying hardware architecture. MapReduce also hides parallelization, data distribution, fault tolerance, and load balancing from developers, allowing them to focus on the design of the Map and Reduce functions.

Although MapReduce implementations can run on several types of architectures, including dedicated clusters and graphics processing units (GPUs),8 its full potential is achieved in a cloud computing environment. Cloud computing providers offer a wide variety of services, including web-based software, data storage, and built-in MapReduce capability, making distributed computing accessible to developers without in-depth knowledge of parallelization.

The MapReduce framework is presented in Sec. 2. In Sec. 3, we introduce a novel strategy for performing Feldkamp–Davis–Kress10 (FDK) reconstruction using MapReduce. Section 4 demonstrates the efficiency and reliability of the proposed method, and Sec. 5 discusses the results.

MAPREDUCE

MapReduce is a programming framework for processing large data sets on clusters of computers (nodes). In this framework, developers specify a Map function that takes input data and generates a set of intermediate Key/Value pairs, and a Reduce function that merges all the intermediate values that share the same intermediate key:

\mathrm{map}: v_1 \rightarrow \mathrm{list}(k_2, v_2), \qquad \mathrm{reduce}: [k_2, \mathrm{list}(v_2)] \rightarrow v_3.   (1)

On a large cluster of commodity machines, the input data are first split into chunks that are used as input by the Map tasks. Map and Reduce tasks are automatically distributed and executed in parallel. Data communication in MapReduce uses Key/Value pairs, where the key and value can be structured in different formats. The run-time system takes care of the low-level details, such as partitioning the input data, scheduling task execution across a set of machines, handling machine failures, and managing internode communication. The number of Map and Reduce tasks can be specified by developers; typically, a larger number of tasks results in finer processing granularity but higher overhead. Figure 1 illustrates the workflow of the MapReduce framework.

Figure 1. An overview of the MapReduce framework. A user program specifies the Map and Reduce functions. A master node assigns and monitors the Map and Reduce task workers. The input files are divided into splits, which are then sent to the Map workers. Intermediate records, consisting of Key/Value pairs, are emitted by the Map workers and fed to the Reduce workers, which combine the data with the same key and produce the final output.

MapReduce jobs are guaranteed to complete thanks to two features, namely, fault tolerance and speculative execution. In a large cluster, some nodes might fail while processing a job; MapReduce reschedules tasks from a failed node onto other nodes to avoid a crash of the entire job. When a job runs on a cluster of machines with heterogeneous performance, MapReduce also balances the workload among the nodes by speculatively re-executing slow tasks, keeping the overall completion time low.

Although the original MapReduce software from Google is proprietary, several open-source alternatives are available. One of them, Hadoop,9 is broadly used for large-scale data processing, financial simulations, and bioinformatics calculations.11, 12, 13 The Hadoop project includes the Hadoop distributed file system (HDFS) and Hadoop MapReduce. Hadoop is written in Java, but applications written in other programming languages can be run through a utility called Hadoop streaming, which allows users to create and run MapReduce jobs based on any executable or script (Fig. 2). In this scheme, intermediate Key/Value pairs are transferred over standard UNIX streams and formatted either as strings or as the more efficient Typedbytes format, a sequence of coded bytes in which the first byte is a type code.
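To make the streaming contract concrete, the following is a minimal, generic mapper/reducer pair written in C. It is an illustration rather than the reconstruction code of this work, and it uses the simple tab-separated text protocol instead of Typedbytes. The mapper re-emits Key/Value records read from standard input, and the reducer, whose input Hadoop delivers sorted by key, accumulates a per-key sum, the same pattern used later to merge partial backprojections.

/* Minimal Hadoop-streaming sketch in C (illustration only, not the code of this work).
 * Build:           gcc -O2 -o stream_demo stream_demo.c
 * Run as mapper:   ./stream_demo map      (re-emits key<TAB>value records)
 * Run as reducer:  ./stream_demo reduce   (sums values grouped by key)
 * Hadoop streaming sorts the mapper output by key before it reaches the reducer,
 * so the reducer only needs to detect key changes between consecutive lines. */
#include <stdio.h>
#include <string.h>

static void map_phase(void)
{
    /* Toy mapper: for each input line "key value", emit a tab-separated record. */
    char line[1024], key[256];
    double value;
    while (fgets(line, sizeof line, stdin))
        if (sscanf(line, "%255s %lf", key, &value) == 2)
            printf("%s\t%f\n", key, value);
}

static void reduce_phase(void)
{
    /* Toy reducer: accumulate a per-key sum over key-sorted input records. */
    char line[1024], key[256], prev_key[256] = "";
    double value, sum = 0.0;
    int have_prev = 0;
    while (fgets(line, sizeof line, stdin)) {
        if (sscanf(line, "%255[^\t]\t%lf", key, &value) != 2)
            continue;
        if (have_prev && strcmp(key, prev_key) != 0) {
            printf("%s\t%f\n", prev_key, sum);   /* flush the previous key */
            sum = 0.0;
        }
        strcpy(prev_key, key);
        have_prev = 1;
        sum += value;
    }
    if (have_prev)
        printf("%s\t%f\n", prev_key, sum);
}

int main(int argc, char **argv)
{
    if (argc > 1 && strcmp(argv[1], "reduce") == 0)
        reduce_phase();
    else
        map_phase();
    return 0;
}

Hadoop streaming invokes such executables through its -mapper and -reducer options; the reconstruction code described in Sec. 3 follows the same contract but exchanges binary Typedbytes records instead of text lines.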

Figure 2. A depiction of Hadoop streaming. User-defined external applications take the place of the Map and Reduce functions.

METHOD

CBCT reconstruction

We assume that the source position, the rotation center (which is also the center of the reconstructed volume), and the projection center lie along the same line. The source-to-detector distance is denoted sdd, the source-to-rotation-center distance srd, and the detector size is U by V; these notations are used throughout. Given a series of 2D projections P_1, P_2, ..., P_M collected in one rotation at angles θ_1, θ_2, ..., θ_M, the FDK algorithm first applies the Shepp–Logan filter to each projection, which helps reduce noise in the final image. The detector pixel value P_k(u,v) at point (u,v) in projection space is converted to the value Q_k(u,v) as follows:

Q_k(u,v) = f_s(u) \ast \bigl( W_1(u,v)\, P_k(u,v) \bigr),   (2)

W_1(u,v) = \frac{\mathrm{sdd}}{\sqrt{\mathrm{sdd}^2 + u^2 + v^2}},   (3)

where f_s(u) is the 1D Shepp–Logan filter, \ast denotes 1D convolution, and W_1(u,v) is the cosine weighting factor. The filtered data Q_1, Q_2, ..., Q_M are then fed into the backprojection step to reconstruct the 3D volume V(x,y,z):

V(x,y,z) = \frac{2\pi}{M} \sum_{k=1}^{M} W_2(x,y,k)\, Q_k\bigl(u(x,y,z), v(x,y,z)\bigr),   (4)

W_2(x,y,k) = \left( \frac{\mathrm{srd}}{\mathrm{srd} - x\cos\theta_k - y\sin\theta_k} \right)^2,   (5)

u(x,y,z) = \frac{\mathrm{sdd}\,(-x\sin\theta_k + y\cos\theta_k)}{\mathrm{srd} - x\sin\theta_k - y\cos\theta_k},   (6)

v(x,y,z) = \frac{\mathrm{sdd}\, z}{\mathrm{srd} - x\sin\theta_k - y\cos\theta_k},   (7)

where W_2(x,y,k) is the geometric scaling factor. Since u(x,y,z) and v(x,y,z) are generally not integer detector coordinates, linear interpolation is used to obtain the corresponding projection value.
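To connect the equations to an implementation, the following is a minimal C sketch of the backprojection of a single filtered projection Q_k, as in Eq. (4). It is a simplified illustration, not the code used in this work: it assumes a centered detector with square pixels of pitch du, a volume centered on the rotation axis with isotropic voxels of size dx, and the standard FDK convention in which the weight of Eq. (5) and the detector coordinates of Eqs. (6) and (7) share the denominator srd − x cos θ_k − y sin θ_k.

/* Hedged sketch: backproject one filtered, cosine-weighted projection Q (U x V pixels)
 * into an N x N x NZ volume, following Eqs. (4)-(7). Assumptions: centered detector,
 * square detector pixels of pitch du (mm), centered volume with isotropic voxels of
 * size dx (mm), and a shared denominator srd - x*cos(theta) - y*sin(theta). */
#include <math.h>
#include <stddef.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

void backproject_view(float *vol, int N, int NZ, double dx,
                      const float *Q, int U, int V, double du,
                      double sdd, double srd, double theta,
                      int M /* total number of projections */)
{
    const double c = cos(theta), s = sin(theta);
    const double scale = 2.0 * M_PI / (double)M;        /* 2*pi/M factor in Eq. (4) */

    for (int iz = 0; iz < NZ; ++iz) {
        const double z = (iz - 0.5 * (NZ - 1)) * dx;
        for (int iy = 0; iy < N; ++iy) {
            const double y = (iy - 0.5 * (N - 1)) * dx;
            for (int ix = 0; ix < N; ++ix) {
                const double x = (ix - 0.5 * (N - 1)) * dx;

                const double d = srd - x * c - y * s;    /* assumed shared denominator */
                if (d <= 0.0)
                    continue;
                double w2 = srd / d;                     /* Eq. (5) */
                w2 *= w2;

                /* Detector coordinates in mm, Eqs. (6)-(7), converted to pixel indices. */
                const double u = sdd * (-x * s + y * c) / d;
                const double v = sdd * z / d;
                const double uf = u / du + 0.5 * (U - 1);
                const double vf = v / du + 0.5 * (V - 1);

                const int u0 = (int)floor(uf), v0 = (int)floor(vf);
                if (u0 < 0 || u0 + 1 >= U || v0 < 0 || v0 + 1 >= V)
                    continue;                            /* ray misses the detector */

                /* Bilinear interpolation of the filtered projection at (uf, vf). */
                const double fu = uf - u0, fv = vf - v0;
                const double q =
                      (1 - fu) * (1 - fv) * Q[v0 * U + u0]
                    + fu * (1 - fv)       * Q[v0 * U + u0 + 1]
                    + (1 - fu) * fv       * Q[(v0 + 1) * U + u0]
                    + fu * fv             * Q[(v0 + 1) * U + u0 + 1];

                vol[(size_t)iz * N * N + (size_t)iy * N + ix] += (float)(scale * w2 * q);
            }
        }
    }
}

A Map task in the scheme described below would call such a routine once per assigned projection, accumulating into the same volume buffer before emitting the result.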

MapReduce implementation

The FDK algorithm processes each projection independently and thus can be parallelized. We propose two novel implementations of the FDK algorithm using MapReduce. In both implementations, the Map tasks load an equal number of projections from the HDFS; the only difference between them is the type of Key/Value pair used to transfer data between Map and Reduce tasks. The first, voxel-based FDK (FDK-VB), uses Key/Value pairs to transfer individual voxels. In this scheme, the Key encodes the voxel position (x,y,z) as the index z×N² + y×N + x, where N is the transaxial volume dimension, and the Value is the backprojected image intensity V(x,y,z). Each Map task emits Key/Value pairs voxel by voxel, and the Reduce function accumulates all the intermediate values that share the same key, producing the final reconstructed volume. Figure 3 illustrates the FDK-VB algorithm. The second method, slice-based FDK (FDK-SB), transfers an entire volume slice as one Key/Value pair, in which the key is the slice index and the value is the array of voxel intensities for that slice; the Reduce function then combines these partial slices into the final output volume.
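The following sketch illustrates, in simplified form, how the two emission schemes differ. It is not the code used in this work: records are written with the plain text protocol rather than Typedbytes, and the buffer and variable names (partial, N, NZ) are placeholders.

/* Hedged sketch of the two Key/Value emission schemes (illustration only).
 * "partial" holds the N x N x NZ partial backprojection computed by one Map task. */
#include <stdio.h>

/* FDK-VB: one Key/Value pair per voxel; the key linearizes (x,y,z). */
void emit_voxel_based(const float *partial, int N, int NZ)
{
    for (int z = 0; z < NZ; ++z)
        for (int y = 0; y < N; ++y)
            for (int x = 0; x < N; ++x) {
                long key = (long)z * N * N + (long)y * N + x;   /* z*N^2 + y*N + x */
                printf("%ld\t%g\n", key, partial[key]);
            }
}

/* FDK-SB: one Key/Value pair per slice; the key is the slice index and the
 * value is the whole slice serialized as a single record. */
void emit_slice_based(const float *partial, int N, int NZ)
{
    for (int z = 0; z < NZ; ++z) {
        printf("%d\t", z);
        const float *slice = partial + (long)z * N * N;
        for (long i = 0; i < (long)N * N; ++i)
            printf("%g%c", slice[i], (i + 1 == (long)N * N) ? '\n' : ' ');
    }
}

int main(void)
{
    enum { TN = 2, TNZ = 2 };              /* tiny toy volume for demonstration */
    float partial[TN * TN * TNZ];
    for (int i = 0; i < TN * TN * TNZ; ++i)
        partial[i] = (float)i;
    emit_slice_based(partial, TN, TNZ);    /* or emit_voxel_based(partial, TN, TNZ) */
    return 0;
}

Emitting one record per slice drastically reduces the number of Key/Value pairs that Hadoop must sort, which is one reason FDK-SB outperforms FDK-VB in the results below.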

Figure 3. An overview of the proposed FDK-VB algorithm. Projections are first downloaded from distributed storage to local storage. Voxel-based Key/Value pairs are transferred between the Map and Reduce functions. The FDK-SB algorithm differs from FDK-VB only in its use of slice-based Key/Value pairs.

In both implementations, the Map and Reduce functions were written in C and executed using Hadoop streaming.9 A text file containing the HDFS paths of the projection files was used as the input to MapReduce. The Map function downloads several projections from distributed storage to its local hard drive; after loading these projections into memory, it filters and backprojects them into an image buffer, which is transmitted back to Hadoop. The Reduce function collects the partial images from Hadoop and writes the final image back to distributed storage (Fig. 3). All communication through UNIX streams uses the Typedbytes format for higher efficiency.
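For concreteness, a job of this kind might be launched through the Hadoop streaming jar roughly as follows. This is a hedged sketch rather than the command used in this work: the executable names and HDFS paths are placeholders, the jar location varies with the Hadoop installation, and support for Typedbytes I/O (the -io typedbytes option) depends on the streaming build.

# Hypothetical launch of the streaming reconstruction job (names and paths are placeholders).
# projections.txt lists the HDFS paths of the projection files, one per line.
hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-*-streaming.jar \
    -D mapred.map.tasks=400 \
    -input /cbct/projections.txt \
    -output /cbct/reconstruction \
    -mapper fdk_map \
    -reducer fdk_reduce \
    -file fdk_map \
    -file fdk_reduce \
    -numReduceTasks 200

The task counts above mirror the configuration reported in Table I (400 Map tasks and 200 Reduce tasks); mapred.map.tasks is only a hint to Hadoop, whereas -numReduceTasks sets the reducer count exactly.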

Computing environment

Two different Hadoop clusters were used. One cluster was set up in a pseudo-distributed fashion by installing Hadoop 0.21.0 on a 2.4 GHz Intel Core 2 Duo computer; it was used primarily for developing and debugging MapReduce jobs, and all the data were stored in a local HDFS.

Amazon’s Elastic Compute Cloud (EC2) and Elastic MapReduce (EMR) service (an implementation of Hadoop 0.20) were employed to characterize the implementation experimentally on large-scale clusters. For a fair comparison, all algorithms were run on Amazon EC2 high-memory extra-large nodes (instance type m2.xlarge). These nodes are equipped with 17.1 GB of memory and 6.5 EC2 compute units (one EC2 compute unit provides the equivalent CPU capacity of a 1.0–1.2 GHz 2007 Opteron or 2007 Xeon processor).

The Map and Reduce applications were compiled remotely on the cloud using GCC version 4.3.2 and uploaded to Amazon’s Simple Storage Service (S3) together with the projection files and the file list. EMR jobs were submitted from the local computer through a Ruby-based command-line interface.

Evaluation

Evaluation of the new method was performed for a simulated digital phantom and for a physical phantom. The digital phantom was the standard 3D Shepp–Logan phantom. The geometry of the CBCT system was set to sdd = 1500 mm and srd = 1000 mm. Four hundred projections were collected in one rotation, each projection containing 900 × 400 pixels with a 0.388 mm pixel pitch. The reconstructed image was 512 × 512 × 200 voxels, with a voxel size of 0.388 × 0.388 × 0.388 mm³.
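For reference, this acquisition and reconstruction geometry can be collected in a small configuration structure. The following is a sketch only: the structure and field names are ours rather than the original code's, and the values are those stated above.

/* Sketch: acquisition and reconstruction geometry for the digital phantom.
 * The structure and field names are illustrative (not from the original code);
 * the values are those stated in the text above. */
#include <stdio.h>

struct cbct_geometry {
    double sdd_mm;                   /* source-to-detector distance        */
    double srd_mm;                   /* source-to-rotation-center distance */
    int    num_proj;                 /* projections per rotation           */
    int    det_nu, det_nv;           /* detector pixels along u and v      */
    double det_pitch_mm;             /* detector pixel pitch               */
    int    vol_nx, vol_ny, vol_nz;   /* reconstructed volume dimensions    */
    double voxel_mm;                 /* isotropic voxel size               */
};

static const struct cbct_geometry shepp_logan_setup = {
    .sdd_mm = 1500.0, .srd_mm = 1000.0,
    .num_proj = 400,
    .det_nu = 900, .det_nv = 400, .det_pitch_mm = 0.388,
    .vol_nx = 512, .vol_ny = 512, .vol_nz = 200,
    .voxel_mm = 0.388,
};

int main(void)
{
    const struct cbct_geometry *g = &shepp_logan_setup;
    printf("sdd = %.0f mm, srd = %.0f mm, %d projections of %d x %d pixels\n",
           g->sdd_mm, g->srd_mm, g->num_proj, g->det_nu, g->det_nv);
    return 0;
}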

A commercial calibration phantom CatPhan 600 (The Phantom Laboratory, Inc., Salem, NY) was used to evaluate the performance of the proposed method. CT projection data were acquired on a Varian TrueBeam STX (Varian Medical Systems, Palo Alto, CA) radiation treatment system using the on-board CBCT imaging system. The tube voltage and current were set to 100 kV and 138 mA, respectively, and the duration of the x-ray pulse at each projection view was set to 11 ms. In the TrueBeam system, the source-to-axis distance is 1000 mm and the source-to-detector distance is 1500 mm.

As a proof of concept, the phantom was moved only in the longitudinal direction. The gantry rotation was purposely slowed down and synchronized to the phantom motion, such that there were six phases, each containing 328 projections. Each acquired projection image measured 397 mm × 298 mm and contained 1024 × 768 pixels. The reconstructed image was 512 × 512 × 200 voxels, with a voxel size of 0.388 × 0.388 × 0.388 mm³.

In a first experiment, we reconstructed the digital and the CatPhan phantoms using three different algorithms, namely, single-threaded FDK (FDK-ST) performed on one node as a benchmark, and FDK-VB and FDK-SB algorithms executed on 200 nodes for parallelization. The FDK-ST implementation was adapted from plastimatch, a publicly-available library of tools for tomographic imaging.14 All the output datasets were downloaded to a local computer and analyzed with MATLAB (MathWorks, Inc., Natick, MA).

In a second experiment, we tested the scalability of the FDK-SB algorithm by reconstructing the digital phantom on a variable number of m2.xlarge nodes, ranging from 20 to 200. For fine task granularity, the number of Map tasks was set to the number of projections and the number of Reduce tasks was set to the number of slices; this scheme was shown to optimally balance the MapReduce workload.15 The total runtimes were recorded and compared.

In a final experiment, we simulated the failure of cluster nodes during execution of the FDK-SB algorithm by manually terminating them through the EC2 control panel: half of the 100 m2.xlarge nodes were shut down 5 min after the start of the digital phantom reconstruction.

RESULTS

The reconstruction times for the various phantoms and algorithms are collected in Table I. The proposed FDK-SB method is more than ten times faster than the reference FDK-ST implementation, and it also outperforms the FDK-VB method by about a factor of two. For the MapReduce-based algorithms, the reconstruction time excludes the cluster initialization time.

Table I. Simulation time comparison.

Phantom              Algorithm  Projection size    Volume size      # Nodes  # Map  # Reduce  Time (min)
Shepp–Logan phantom  FDK-ST     900 × 400 × 400    512 × 512 × 200  N/A      N/A    N/A       54.7
                     FDK-VB     900 × 400 × 400    512 × 512 × 200  200      400    200       10.5
                     FDK-SB     900 × 400 × 400    512 × 512 × 200  200      400    200        5.4
CatPhan phantom      FDK-ST     1024 × 768 × 328   512 × 512 × 200  N/A      N/A    N/A       65.3
                     FDK-VB     1024 × 768 × 328   512 × 512 × 200  200      400    200       10.9
                     FDK-SB     1024 × 768 × 328   512 × 512 × 200  200      400    200        5.7

Figure 4 shows the reconstructed images for the different FDK implementations. For both cases, the deviation between the images obtained with the proposed methods and the FDK-ST implementation (taken as ground truth) was on the order of 10⁻⁷ (root mean square error), with the small discrepancy due to quantization errors and data format conversion.

Figure 4. Digital Shepp–Logan phantom (first row) and 4D CatPhan phantom (second row) reconstructed with the FDK-ST (left column), FDK-VB (middle column), and FDK-SB (right column) algorithms. Normalized image window: [0 1].

The reconstruction time using the FDK-SB algorithm for different numbers of nodes is shown in Fig. 5. The number of nodes N was set to 10, 20, 50, 100, and 200. The fitted curve is time = 6.4 + 202.4/N min, where the constant term represents the total overhead and communication delay of the distributed system and the 1/N term represents the speedup contributed by additional computing nodes.
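As a worked illustration of the fitted model, at N = 100 nodes the predicted reconstruction time is 6.4 + 202.4/100 ≈ 8.4 min, while doubling the cluster to N = 200 nodes lowers the prediction only to about 7.4 min, since the fixed overhead term increasingly dominates.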

Figure 5. Reconstruction times using the FDK-SB method with different numbers of nodes for the 512 × 512 × 200 Shepp–Logan phantom. Linear regression shows that the computation time can be approximated by an affine function of the inverse of the number of nodes.

In the last experiment, the FDK-SB algorithm completed successfully with identical results even though half of the worker nodes were manually terminated during the reconstruction. The overall run time increased by about 10 min because tasks from the terminated nodes had to be reallocated to the remaining nodes.

DISCUSSION

Cloud computing, multicore PCs, and GPUs are hardware platforms for parallel computing, whereas MapReduce and MPI are software frameworks that run on such hardware. It should be noted that MapReduce can run on a wide range of architectures, including a cloud computing environment, a multicore workstation, a computer cluster, or a GPU. Multicore PCs offer improved performance and better energy efficiency compared with single-core PCs; however, their computing resources are limited (current multicore PCs have at most a few tens of cores), and overall performance is contingent on how well applications distribute their work across multiple threads. GPUs, originally designed for accelerating computer graphics, are increasingly used as massively parallel coprocessors for scientific computation.17 Compared with multicore CPUs, GPUs can run hundreds of threads with no context-switching overhead, which makes them much more efficient. However, GPUs suffer from a few drawbacks, including complicated memory programming, slow random memory access, hardware constraints, and a constraining single-program, multiple-data programming model.17 Cloud computing connects shared computing resources, software, and information to computers and other devices through high-speed network links. Data exchange in cloud computing is generally slower than on GPUs, because the latter are equipped with shared on-chip memory. However, MapReduce avoids the data-access hazards that are problematic on GPUs by keeping each data element local to the task that processes it. Additionally, MapReduce running in a cloud computing environment provides better scalability than GPUs: when multiple GPUs are combined into a cluster, the software must be redesigned to provide intra- and internode communication, whereas the number of nodes in a MapReduce cluster can be changed by simply modifying one parameter.

The results demonstrate the efficiency of the FDK-VB and FDK-SB algorithms. The FDK-SB algorithm achieved over a tenfold speedup compared with the FDK-ST method. The FDK-SB algorithm also outperforms the FDK-VB method by a factor of two, because it combines the data locally, which lessens the Reduce function’s sorting effort, and because it uses I/O more efficiently by transferring large chunks of data in each record. For 4DCT or iterative reconstruction methods,16 the advantage of using MapReduce in a cloud environment would be even more pronounced. Iterative reconstruction can be achieved by executing a sequence of Map and Reduce tasks, since MapReduce allows the output of a Reduce task to be fed in as the input of a new set of Map tasks. For instance, iterative weighted least-squares reconstruction would first require a Map/Reduce step to compute the forward projection along integration lines; these projections would be compared with the measurements, and the error would be backprojected using another Map/Reduce step. This process can be repeated for a fixed number of iterations, and the overall reconstruction time then approaches the time of the serial portion of the computation (a hypothetical sketch of such a chained workflow is given below).
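As a purely illustrative sketch, not part of this work’s code, such an iterative scheme could be driven by chaining streaming jobs so that each iteration’s Reduce output becomes the next iteration’s Map input. The executables and HDFS paths named below are hypothetical placeholders, and the streaming jar location depends on the Hadoop installation.

# Hypothetical driver loop chaining MapReduce jobs for an iterative scheme.
# forward_project, residual, backproject_residual, and update_image are
# placeholder executables; /recon/... are placeholder HDFS paths.
STREAMING_JAR=$HADOOP_HOME/contrib/streaming/hadoop-*-streaming.jar
current=/recon/iter_0
for i in 1 2 3 4 5; do
  # Step 1: forward project the current image estimate and compare with the measurements.
  hadoop jar $STREAMING_JAR \
      -input $current -output /recon/residual_$i \
      -mapper forward_project -reducer residual \
      -file forward_project -file residual
  # Step 2: backproject the residual and update the image estimate.
  hadoop jar $STREAMING_JAR \
      -input /recon/residual_$i -output /recon/iter_$i \
      -mapper backproject_residual -reducer update_image \
      -file backproject_residual -file update_image
  current=/recon/iter_$i
done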

MapReduce is designed in a highly decentralized manner to optimize internode data transfer for massive problems. Both the input and output data are stored in HDFS, a distributed file system, and the Map and Reduce tasks are executed on nodes throughout the cluster. Hence, every stage of the computation workflow is scalable and no single stage constitutes a bottleneck. However, in our experiments, the speedup achieved by the distributed MapReduce implementation was lower than the number of worker nodes (Fig. 5), which suggests that the overall performance is limited by internode communication and overhead. In this work, we model the reconstruction time as a constant term, accounting for the overhead and communication delay, plus a term inversely proportional to the number of nodes, representing the benefit of distributed computation. This finding highlights the fact that while Hadoop is a powerful tool for processing terabytes of text, it is less well adapted to processing smaller binary data sets such as CT scans. With further optimization of the Hadoop platform for numerical computation, more acceleration is expected. Moreover, the simplicity, scalability, and reliability of MapReduce remain very attractive for developers, and further improvements in the technology could easily address its current shortcomings, in particular with respect to data sorting and caching.

Fault resilience is a standard feature of Hadoop and other MapReduce implementations. In EMR, new nodes are automatically provisioned to replace failed nodes. Failed Map and Reduce tasks are placed back in the queue and eventually redistributed to these newly allocated nodes. The guaranteed reliability of the computing framework is crucial for applications in medical physics.

Parallel computing is commonly performed on clusters of commodity computers. Although such systems have advantages, such as better control over the hardware, their maintenance and energy costs can be much higher than those of a cloud computing environment. For instance, the current price of an m2.xlarge node on Amazon EC2 is $0.57 per hour for users in the N. California region. However, if used intensively, a small cluster of GPUs can be more economical over time than purchasing on-demand cluster time.

For best performance, the system parameters of MapReduce need to be tuned. To execute each task efficiently, developers need to adjust parameters such as the heap size of the child Java Virtual Machines that run the Map and Reduce functions and the memory limits for transferring data, which requires an in-depth understanding of the computing problem. However, reasonable performance can usually be achieved with the default settings.

CONCLUSION

In this work, a fast, scalable, and reliable 4D CBCT/CT reconstruction technique was developed and evaluated using the MapReduce framework. The FDK-SB algorithm running in the cloud achieved more than a tenfold speedup over the single-threaded implementation. Furthermore, the simplicity, reliability, and scalability enabled by MapReduce are attractive features not only for CBCT reconstruction but also for many other projects in medical physics.

ACKNOWLEDGMENT

This project was supported in part by NCI Grant No. 1R01 CA133474 and NSF Grant No. 0854492.

References

1. Paquin D., Levy D., and Xing L., “Multiscale registration of planning CT and daily cone beam CT images for adaptive radiation therapy,” Med. Phys. 36, 4–11 (2009). doi:10.1118/1.3026602
2. Xie Y., Chao M., Lee P., and Xing L., “Feature-based rectal contour propagation from planning CT to cone beam CT,” Med. Phys. 35, 4450 (2008). doi:10.1118/1.2975230
3. Okitsu Y., Ino F., and Hagihara K., “Accelerating cone beam reconstruction using the CUDA-enabled GPU,” Proceedings of the 15th Annual IEEE International Conference on High Performance Computing (HiPC 2008), pp. 108–119. doi:10.1007/978-3-540-89894-8_13
4. Hayes B., “Cloud computing,” Commun. ACM 51, 9 (2008). doi:10.1145/1364782.1364786
5. Mika P. and Tummarello G., “Web semantics in the clouds,” IEEE Intell. Syst. 23, 82–87 (2008). doi:10.1109/MIS.2008.94
6. Schadt E. E., Linderman M. D., Sorenson J., Lee L., and Nolan G. P., “Computational solutions to large-scale data management and analysis,” Nat. Rev. Genet. 11, 647–657 (2010). doi:10.1038/nrg2857
7. Dean J. and Ghemawat S., “MapReduce: Simplified data processing on large clusters,” Commun. ACM 51(1), 107–113 (2008). doi:10.1145/1327452.1327492
8. He B., Fang W., Luo Q., Govindaraju N. K., and Wang T., “Mars: A MapReduce framework on graphics processors,” Proceedings of the 17th International Conference on Parallel Architectures and Compilation Techniques (PACT 2008).
9. White T., Hadoop: The Definitive Guide (O’Reilly Media, Sebastopol, CA, 2009).
10. Feldkamp L. A., Davis L. C., and Kress J. W., “Practical cone-beam algorithm,” J. Opt. Soc. Am. A 1, 612–619 (1984). doi:10.1364/JOSAA.1.000612
11. Schatz M. C., “CloudBurst: Highly sensitive read mapping with MapReduce,” Bioinformatics 25, 1363–1369 (2009). doi:10.1093/bioinformatics/btp236
12. Moretti C., Steinhaeuser K., Thain D., and Chawla N. V., “Scaling up classifiers to cloud computers,” Proceedings of the IEEE International Conference on Data Mining (ICDM 2008).
13. Wang F., Ercegovac V., Syeda-Mahmood T., Holder A., Shekita E., Beymer D., and Xu L. H., “Large-scale multimodal mining for healthcare with MapReduce,” Proceedings of the 1st ACM International Health Informatics Symposium (ACM, New York, 2010).
14. Sharp G., Kandasamy N., Singh H., and Folkert M., “GPU-based streaming architectures for fast cone-beam CT image reconstruction and demons deformable registration,” Phys. Med. Biol. 52, 5771–5783 (2007).
15. Pratx G. and Xing L., “Monte-Carlo simulation in a cloud computing environment with MapReduce,” J. Biomed. Opt. (in press).
16. Meng B., Wang J., and Xing L., “Sinogram preprocessing and binary reconstruction for determination of the shape and location of metal objects in computed tomography (CT),” Med. Phys. 37, 5867 (2010). doi:10.1118/1.3505294
17. Pratx G. and Xing L., “GPU computing in medical physics: A review,” Med. Phys. 38, 2685 (2011).
