Run Apache XTable in AWS Lambda for Efficient Background Table Conversions

November 27, 2024

Data architecture has evolved significantly to handle the explosion of data volumes and the diversity of workloads in modern enterprises. Initially, data warehouses were the primary solution for managing structured data and running analytical workloads, but these systems were limited by their proprietary storage formats and struggled to handle unstructured data. To address these limitations, data lakes emerged, using columnar formats like Apache Parquet, but they brought their own challenges, especially around ACID capabilities. Transactional data lakes have since emerged as a hybrid solution, incorporating the transactional consistency and performance of a data warehouse into the flexibility of a data lake. Central to this evolution are open table formats (OTFs) like Apache Hudi, Apache Iceberg, and Delta Lake, which act as metadata layers over columnar formats to provide schema evolution, partitioning, ACID transactions, and time-travel capabilities.

To support these capabilities, tools like Apache XTable have emerged, especially as the industry moves toward interoperability between OTFs. Apache XTable (incubating) provides seamless conversion between different table formats without data duplication or time-consuming rewrites. The project was initially open-sourced in November 2023 under the name OneTable and was donated to the Apache Software Foundation (ASF) in March 2024, where it was rebranded as Apache XTable. XTable enables users to switch between table formats as needed, translating metadata without altering the underlying data files. The solution described here relies on AWS services like the AWS Glue Data Catalog, Amazon S3, and AWS Lambda to achieve scalable, cost-effective background conversions.

1. Clone the Repository and Navigate to Directory

To begin deploying XTable in your environment, the first step is to clone the GitHub repository containing the necessary code. Make sure to navigate to the specific directory named xtable_lambda within the repository. This directory holds the AWS Cloud Development Kit (AWS CDK) stack that you’ll be deploying for the solution. The AWS CDK allows you to define cloud infrastructure using code, making the deployment process more streamlined and manageable.
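As a concrete starting point, the equivalent of this step driven from Python might look like the following minimal sketch. The repository URL and directory name are placeholders; only the xtable_lambda directory name comes from this post.

```python
import os
import subprocess

# Placeholder URL and repo name -- substitute the repository referenced in this post.
REPO_URL = "https://github.com/<org>/<repo>.git"
REPO_DIR = "<repo>"

# Clone the repository, then move into the directory that holds the CDK stack.
subprocess.run(["git", "clone", REPO_URL], check=True)
os.chdir(os.path.join(REPO_DIR, "xtable_lambda"))
```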

2. Set Environment Variable for Finch

The next prerequisite is configuring your environment for the deployment. If you're using Finch, an open-source client for container development, you must set the CDK_DOCKER environment variable before proceeding. This variable tells the AWS CDK which container CLI to invoke when it builds the Docker containers used during deployment, and setting it correctly up front avoids many of the container orchestration and function runtime issues that can otherwise derail a deployment.
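In a shell this is a one-line export (CDK_DOCKER=finch). If you drive the deployment from Python instead, a minimal sketch might look like this:

```python
import os
import subprocess

# Tell the AWS CDK to use Finch as its container CLI before building any images.
os.environ["CDK_DOCKER"] = "finch"

# Child processes inherit the variable, so the CDK CLI picks it up automatically.
subprocess.run(["cdk", "deploy"], check=True)
```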

3. Deploy the Stack

Once you have navigated to the correct directory and set the necessary environment variables, you can deploy the AWS CDK stack. A successful deployment establishes a conversion mechanism that operates on an hourly schedule: an Amazon EventBridge schedule triggers an AWS Lambda function, which scans the AWS Glue Data Catalog for tables marked for conversion and, upon detecting one, executes the conversion with Apache XTable. Deploying the stack is the pivotal step that sets the process in motion; under the hood, the AWS CDK uses AWS CloudFormation to provision the Lambda functions, EventBridge schedules, and supporting resources.
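The actual stack lives in the repository, but to illustrate the moving parts (an hourly EventBridge rule driving a converter Lambda function), a minimal AWS CDK sketch in Python might look like the following. The function name, runtime, handler, and asset path are hypothetical; the repository's stack packages the real XTable conversion logic.

```python
from aws_cdk import App, Duration, Stack
from aws_cdk import aws_events as events
from aws_cdk import aws_events_targets as targets
from aws_cdk import aws_lambda as _lambda
from constructs import Construct

class XtableConversionStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Hypothetical converter function standing in for the repository's real one.
        converter = _lambda.Function(
            self,
            "XtableConverter",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),
            timeout=Duration.minutes(15),
        )

        # Hourly EventBridge rule that kicks off the scan-and-convert cycle.
        events.Rule(
            self,
            "HourlySchedule",
            schedule=events.Schedule.rate(Duration.hours(1)),
            targets=[targets.LambdaFunction(converter)],
        )

app = App()
XtableConversionStack(app, "XtableLambdaStack")
app.synth()
```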

4. Add Required Parameters to AWS Glue Table

To enable the Lambda functions to recognize which tables need conversion, you must add specific parameters to the AWS Glue table targeted for conversion. These parameters include the source format and the target formats. Use the following key-value pairs as examples:

  • "xtable_table_type": "<source format>"
  • "xtable_target_formats": "<target format>, <target format>"

Adding these key-value pairs to the AWS Glue table ensures that the Lambda functions can identify and process the appropriate tables during each scheduled scan. Accuracy in entering these parameters is critical for achieving seamless conversions and maintaining data integrity, as the Lambda functions rely on these values to execute the conversion process correctly.
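You can also add these parameters programmatically. The following boto3 sketch merges them into an existing Glue table definition; the database and table names are hypothetical, and the values assume XTable's uppercase format identifiers (HUDI, ICEBERG, DELTA).

```python
import boto3

glue = boto3.client("glue")

# Hypothetical database and table names -- replace with your own.
DATABASE, TABLE = "my_database", "my_hudi_table"

# Fetch the current definition so the new parameters can be merged in.
table = glue.get_table(DatabaseName=DATABASE, Name=TABLE)["Table"]
params = dict(table.get("Parameters", {}))
params["xtable_table_type"] = "HUDI"         # source format of this table
params["xtable_target_formats"] = "ICEBERG"  # comma-separated target formats

# update_table accepts only TableInput fields, so copy over the relevant keys.
table_input = {
    key: table[key]
    for key in (
        "Name", "Description", "Owner", "Retention", "StorageDescriptor",
        "PartitionKeys", "TableType",
    )
    if key in table
}
table_input["Parameters"] = params

glue.update_table(DatabaseName=DATABASE, TableInput=table_input)
```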

5. Set Table Properties in AWS Glue Console

The next step is to set the table properties within the AWS Glue console. When editing an AWS Glue table, navigate to the Table properties section and add the key-value pairs described earlier. These properties are what the Lambda functions read during each scheduled scan to identify which tables to convert, so entering them accurately prevents errors and keeps the conversion process seamless and efficient.

6. Generate Sample Data (Optional)

If you don't already have sample data, you can generate it with the scripts provided in the repository, which can be run on a local machine or within an AWS Glue for Spark job. A working dataset lets you validate the conversion process, troubleshoot, and fine-tune the setup before it goes live.
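The repository's scripts are the authoritative way to do this, but for a sense of what generating a small Hudi table looks like, here is a minimal PySpark sketch. The bucket, table name, and record schema are hypothetical, and the Hudi Spark bundle must be available to the session (for example, via --packages).

```python
from pyspark.sql import SparkSession

# Hudi writes require the Kryo serializer.
spark = (
    SparkSession.builder
    .appName("xtable-sample-data")
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

# A tiny, made-up dataset; the repository's scripts generate richer data.
df = spark.createDataFrame(
    [(1, "alice", "2024-11-27 10:00:00"), (2, "bob", "2024-11-27 11:00:00")],
    ["id", "name", "ts"],
)

# Minimal Hudi write options: table name, record key, and precombine field.
hudi_options = {
    "hoodie.table.name": "sample_table",
    "hoodie.datasource.write.recordkey.field": "id",
    "hoodie.datasource.write.precombine.field": "ts",
}

# The bucket is a placeholder -- point this at your own S3 location.
df.write.format("hudi").options(**hudi_options).mode("overwrite").save(
    "s3://<your-bucket>/sample_table/"
)
```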

7. Convert a Streaming Table (Hudi to Iceberg)

With the stack deployed and the table properties in place, converting a streaming table from Hudi to Iceberg requires no manual intervention. Set xtable_table_type to HUDI on the streaming table and include ICEBERG in xtable_target_formats; on its next hourly run, the Lambda function detects the table in the AWS Glue Data Catalog and uses Apache XTable to translate its metadata. Because XTable writes only table format metadata, the underlying data files are left untouched, and each subsequent scheduled run brings the Iceberg metadata up to date with the commits the streaming writer has added in the meantime.
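One way to confirm a conversion has happened is to look for the Iceberg metadata that XTable writes alongside the existing files. A minimal boto3 sketch, assuming a hypothetical bucket and table prefix:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and table prefix -- use your table's S3 location.
BUCKET, PREFIX = "<your-bucket>", "sample_table/metadata/"

# After a scheduled run, Iceberg metadata files should appear under metadata/;
# XTable adds them next to the existing files without rewriting any Parquet.
response = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
for obj in response.get("Contents", []):
    print(obj["Key"])
```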
