
Managing UDFs

Scalar user-defined functions (UDFs) in Amazon Redshift are routines that take parameters, perform a calculation, and return a result. UDFs are handy for complex calculations that you want to store and reuse across SQL statements. Amazon Redshift supports UDFs authored in either Python or SQL, and it also supports AWS Lambda UDFs, which open up further possibilities to invoke other AWS services. For example, if the latest customer address information is stored in Amazon DynamoDB, you can invoke an AWS Lambda UDF to retrieve it using a SQL statement in Amazon Redshift.
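
As a quick illustration of the SQL flavor, the following is a minimal sketch of a SQL-based scalar UDF (the function name and the conversion logic are illustrative only, not part of this recipe). Note that SQL UDFs in Amazon Redshift refer to their arguments positionally as $1, $2, and so on:

    CREATE OR REPLACE FUNCTION f_fahrenheit_to_celsius (float)
    RETURNS float
    STABLE
    AS $$
        -- $1 refers to the first (and only) input argument
        SELECT (($1 - 32) * 5.0) / 9.0
    $$ LANGUAGE sql;
    -- Example usage: SELECT f_fahrenheit_to_celsius(212); returns 100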

Getting ready

To complete this recipe, you will need the following:

  • Access to the AWS console
  • Access to any SQL interface such as a SQL client or query editor
  • Access to create an AWS Lambda function
  • Access to create an Identity and Access Management (IAM) role that can invoke AWS Lambda and attach it to Amazon Redshift

How to do it…

In this recipe, we will start with a scalar Python-based UDF that parses XML input:

  1. Connect to Amazon Redshift using the SQL client, and copy and paste the following code to create an f_parse_xml function:
    CREATE OR REPLACE FUNCTION f_parse_xml
    (xml VARCHAR(MAX), input_rank int)
    RETURNS varchar(max)
    STABLE
    AS $$
        import xml.etree.ElementTree as ET
        root = ET.fromstring(xml)
        res = ''
        # input_rank arrives as an int, while the <rank> element text
        # is a string, so cast before comparing
        for country in root.findall('country'):
            rank = country.find('rank').text
            if rank == str(input_rank):
                res = country.get('name') + ':' + rank
                break
        return res
    $$ LANGUAGE plpythonu;

    Important note

    The preceding Python-based UDF takes the XML data as input and uses the xml.etree.ElementTree library to parse it and locate the element matching the input rank. See https://docs.python.org/3/library/xml.etree.elementtree.html for more options available with this XML library.

  2. Now, let's validate the f_parse_xml function using the following statement, which locates the name of the country with rank 2:
    select
    f_parse_xml('<data>
      <country name="Liechtenstein">
        <rank>2</rank>
        <year>2008</year>
        <gdppc>141100</gdppc>
        <neighbor name="Austria" direction="E"/>
        <neighbor name="Switzerland" direction="W"/>
      </country>
    </data>', 2) as col1;

    This is the expected output:

    col1
    Liechtenstein:2
  3. We will now create a second UDF, this time based on AWS Lambda. Navigate to the AWS Management Console, choose the AWS Lambda service, and click on Create function, as shown in the following screenshot:
    Figure 2.1 – Creating a Lambda function using the AWS Management Console

  4. On the Create function screen, enter rs_lambda under Function name, choose a Python 3.6 runtime, and click on Create function.
  5. Under the Function code textbox, copy and paste the following code and press the Deploy button:
    import json
    def lambda_handler(event, context):
        # Return the JSON response shape that Amazon Redshift expects from
        # a Lambda UDF: success flag, per-record results, error message,
        # and the number of records processed
        ret = dict()
        ret['success'] = True
        ret['results'] = ["bar"]
        ret['error_msg'] = "none"
        ret['num_records'] = 1
        return json.dumps(ret)

    In the preceding Python-based Lambda function, a hardcoded sample result is returned. This function can be extended to call any other AWS service; for example, you could invoke AWS Key Management Service (KMS) to encrypt the input data.
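
    In practice, Amazon Redshift batches the input rows of a query into the Lambda event payload, so a production handler should return one result per input row. The following is a minimal sketch, assuming the documented request format in which event['arguments'] holds one list of argument values per row (the upper-casing logic is purely illustrative):

    import json
    def lambda_handler(event, context):
        # Each entry in event['arguments'] is the argument list for one row
        rows = event.get('arguments', [])
        try:
            # Illustrative transformation: upper-case the first argument
            results = [str(args[0]).upper() for args in rows]
            return json.dumps({
                'success': True,
                'results': results,  # same count and order as the input rows
                'error_msg': '',
                'num_records': len(results)
            })
        except Exception as e:
            return json.dumps({'success': False, 'error_msg': str(e)})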

  6. Navigate to AWS IAM in the AWS Management Console and create a new role named RSInvokeLambda using the following policy statement, replacing [Your_AWS_Region] and [Your_AWS_Account_Number] with your own AWS Region and account number, and then attach the role to the Amazon Redshift cluster:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "lambda:InvokeFunction",
                "Resource": "arn:aws:lambda:[Your_AWS_Region]: [Your_AWS_Account_Number]:function:rs_lambda"
            }
        ]
    }
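    Note that for Amazon Redshift to assume this role, the role also needs a trust relationship with the Amazon Redshift service; a minimal trust policy looks like this:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": { "Service": "redshift.amazonaws.com" },
                "Action": "sts:AssumeRole"
            }
        ]
    }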
  7. Connect to Amazon Redshift using the SQL client, and copy and paste the following code to create an f_redshift_lambda external function that is linked to the rs_lambda AWS Lambda function:
    CREATE OR REPLACE EXTERNAL FUNCTION f_redshift_lambda (bar varchar)
    RETURNS varchar STABLE
    LAMBDA 'rs_lambda'
    IAM_ROLE 'arn:aws:iam::[Your_AWS_Account_Number]:role/RSInvokeLambda';
  8. You can validate the f_redshift_lambda function by using the following SQL statement:
    select f_redshift_lambda('input_str') as col1;

    This is the expected output:

    col1
    bar

Amazon Redshift is now able to invoke the AWS Lambda function using a SQL statement.
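
As with any scalar function, the external function can also be applied to table columns, as in the following sketch (the customers table and its customer_name column are hypothetical):

    select customer_id,
           f_redshift_lambda(customer_name) as lambda_result
    from customers;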

How it works…

Amazon Redshift allows you to create a scalar UDF using either a SQL SELECT clause or a Python program, in addition to the AWS Lambda UDFs illustrated in this recipe. Scalar UDFs are stored in Amazon Redshift and are available to any user who has been granted the required access. You can find a collection of ready-to-use UDFs that implement complex, reusable logic within a SQL statement at the following link: https://github.com/aws-samples/amazon-redshift-udfs.
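
For example, to let another user run the XML parser created earlier in this recipe, you would grant execute access on the function signature (the analyst_user user name is hypothetical):

    GRANT EXECUTE ON FUNCTION f_parse_xml(varchar, int) TO analyst_user;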
