// 16 Essential AWS Lambda Interview Questions
AWS Lambda is one of Amazon Web Services' enterprise-level compute offerings. It lets developers make their serverless applications more efficient by removing the need for an always-on server. Such a service dramatically reduces the cost and complexity of cloud deployments, benefiting both developers and enterprises.
In this article, you will find interview questions that cover important aspects of AWS Lambda. These questions will help sharpen your knowledge of the popular cloud service. Additionally, they will help you prepare for the right opportunities by showing you what enterprises look for in their cloud developers. So read on!
Lambda is a compute service provided by Amazon Web Services. It runs the code you provide when a certain event triggers it. The advantage of using AWS Lambda is that it handles all the high-level details for you, such as managing infrastructure resources, deploying code successfully, and scaling.
Responding to events in such a manner makes AWS Lambda unique, since other services like EC2 are not designed for this functionality. For example, with EC2 instances, you will have to set up some programs to identify events. You'd also have to manage failure conditions and resource allocation. With Lambda, this use case becomes easy to implement.
Setting up an AWS Lambda function requires you to specify important information such as the amount of memory your code will use and how long it is allowed to run. Lambda works with other AWS services and recognizes events triggered by sources such as DynamoDB, CodeCommit, and S3 buckets.
Note: Commonly asked AWS Lambda interview question.
AWS Lambda natively supports several popular programming languages, including Python, Node.js, C#, and Java. You can use these languages to write code that performs tasks as Lambda functions.
As a developer, you write your Lambda functions in one of these languages and package them for deployment to AWS Lambda. You can then monitor their execution and see whether you need to fine-tune them.
When your Lambda function is invoked, AWS Lambda calls its handler function. The handler can process the triggering event's data and call other functions or methods.
Here is an example handler function written in Node.js.
exports.myHandler = function(event, context, callback) {
    console.log("value = " + event.key);
    console.log("functionName = ", context.functionName);
    callback(null, "Demo tested successfully");
};
- The first line prints the event object's key attribute, showing data from the trigger event.
- The second line prints the name of the Lambda function being executed, taken from the context object.
- Lastly, the callback returns information back to AWS Lambda: an error (first argument) or the result of a successful execution (second argument).
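Before deploying, you can sanity-check a handler like this locally by calling it with a mock event and context. The sketch below redefines the same handler in a self-contained form; the mock values are illustrative, and in a real deployment AWS Lambda supplies the event and context itself.

```javascript
// The same handler as above, redefined here so the snippet is self-contained.
const myHandler = function (event, context, callback) {
  console.log("value = " + event.key);
  console.log("functionName = ", context.functionName);
  callback(null, "Demo tested successfully");
};

// Call it directly with a mock event and context (illustrative values).
let handlerResult;
myHandler({ key: "demo" }, { functionName: "myDemoFunction" }, (err, result) => {
  handlerResult = result;
});
console.log(handlerResult); // "Demo tested successfully"
```

Because the callback runs synchronously here, the result is available immediately; with real asynchronous work you would wait for the callback to fire.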
We can use the AWS CLI to trigger a Lambda function. Let us create a scenario where we start from scratch and set up a Lambda function.
First, create a folder called lambda and, inside it, a file with the following code.
console.log('Loading function');

exports.handler = function(event, context) {
    var date = new Date().toDateString();
    context.succeed("Hello " + event.username +
        "! Today's date is " + date);
};
We name this file index.js. The code simply returns a greeting containing the user's name, passed in as part of the event, along with the current date.
Next, we need to assign the Lambda function a minimal IAM role that AWS Lambda assumes when executing the function. We therefore create a trust policy called demoPolicy.json containing the following content.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
Let us create an IAM role through the create-role command in the code below.
aws iam create-role \
    --role-name basic-lambda-role \
    --assume-role-policy-document file://demoPolicy.json
Next, we have to create a deployment package to upload to AWS Lambda. We use the following command to zip the function file.
zip -r demoFunction.zip index.js
Next, we create our Lambda function through the create-function command.
aws lambda create-function \
    --region us-west-2 \
    --function-name demoFunction \
    --zip-file fileb://demoFunction.zip \
    --role arn:aws:iam::00123456789:role/basic-lambda-role \
    --handler index.handler \
    --runtime nodejs4.3 \
    --memory-size 128
Everything is set up now, and we can test the function by triggering it through the invoke command from the CLI. The output of the invocation is saved to the output.txt file.
aws lambda invoke \
    --invocation-type RequestResponse \
    --function-name demoFunction \
    --region us-west-2 \
    --log-type Tail \
    --payload '{"username":"YoYo"}' \
    output.txt
Note: Commonly asked AWS Lambda interview question.
Yes, you can invoke AWS Lambda functions asynchronously. Lambda puts the event in a queue and returns a successful response to you immediately on invocation. A separate process then reads events from the queue and sends them to your function.
To invoke a function asynchronously, set the invocation-type to Event in your invoke command, as shown in the example below.
aws lambda invoke \
    --function-name demoFunction \
    --invocation-type Event \
    --cli-binary-format raw-in-base64-out \
    --payload '{ "key": "value" }' response.json
If an event stays in the queue too long and exceeds its maximum age, Lambda discards it. You can therefore configure error handling to reduce the number of retries or to discard stale events more quickly.
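For example, you can cap the number of retries and the maximum event age with the update-function-event-invoke-config command (the function name and the values below are placeholders):

```
# Retry failed asynchronous invocations at most once, and discard
# events older than one hour (values are illustrative).
aws lambda update-function-event-invoke-config \
    --function-name demoFunction \
    --maximum-retry-attempts 1 \
    --maximum-event-age-in-seconds 3600
```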
You can configure your function's asynchronous working through the configuration API provided by AWS Lambda. For example, the update-function-event-invoke-config command updates the function configuration.
Now, let’s say that you want to send a record to an SQS queue if an event can’t be processed. You can do it in the following way.
aws lambda update-function-event-invoke-config --function-name error \
    --destination-config '{"OnFailure":{"Destination": "arn:aws:sqs:us-east-2:123456789012:destination"}}'
You can monitor the state of your function and stay updated about its execution by using Lambda's logging mechanisms. For example, in the Node.js runtime you can use calls like console.log() and console.error().
The logs appear both in the console and in the CLI. In the CLI, you can add the '--log-type' parameter to your 'invoke' command. The following code shows an example.
aws lambda invoke \
    --invocation-type RequestResponse \
    --function-name myDemoFunction \
    --log-type Tail \
    --payload '{"key1":"Demo","key2":"is","key3":"successful!"}' \
    demoOutput.txt
Here, we have specified which function we wish to invoke, the kind of invocation, and its payload. The logs from this command are written to AWS CloudWatch Logs.
Another way is to utilize information from the context object and the CloudWatch CLI. Then, you can use the get-log-events command and pass it the relevant information. This is shown in the example below.
aws logs get-log-events \
    --log-group-name "/aws/lambda/myFirstFunction" \
    --log-stream-name "2017/02/07/[$LATEST]1ae6ac9c77384794a3202802c683179a"
You can obtain the --log-group-name and --log-stream-name values from the context object's logGroupName and logStreamName properties.
The context object given to the handler function contains essential information about the function and its execution. For example, the function's name, how much time remains before your function times out, and the stream related to your function. It also comes with helpful methods which you can use within your function like context.succeed() or context.fail().
Some helpful properties and methods of the context object are as follows:
- context.functionVersion: This property returns the version of the function being executed.
- context.awsRequestId: This property returns the request ID associated with the current invocation.
- context.callbackWaitsForEmptyEventLoop: This property controls when the response is returned after callback() runs. Its default value is true, meaning Lambda waits for the event loop to be empty before returning to the caller. If you set it to false, the response is sent immediately, without waiting for remaining tasks in the event loop.
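These properties can be exercised locally with a mocked context object. The sketch below is illustrative: outside Lambda there is no real context, so the mock values here are our own stand-ins.

```javascript
// Hypothetical handler that reads a few properties from the context object.
function handler(event, context, callback) {
  callback(null, {
    version: context.functionVersion,
    requestId: context.awsRequestId,
    remainingMs: context.getRemainingTimeInMillis(),
  });
}

// Mock context for local testing (all values are made up).
const mockContext = {
  functionVersion: "$LATEST",
  awsRequestId: "abc-123",
  getRemainingTimeInMillis: () => 3000,
};

let contextInfo;
handler({}, mockContext, (err, info) => { contextInfo = info; });
console.log(contextInfo);
```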
AWS Lambda lets you create different versions of your Lambda functions. Unpublished changes live separately from your published versions, so you can test them individually. When ready, you can publish a new version to make it official.
You can create other versions of a Lambda function through the AWS Lambda API. For example, you can create a new version with the following command.
aws lambda publish-version --function-name demoFunction
You will receive a JSON response in return that contains the function version number and its Amazon Resource Name (ARN), like the example below.
{
    "FunctionName": "demoFunction",
    "FunctionArn": "arn:aws:lambda:us-east-2:123456789012:function:demoFunction:1",
    "Version": "1",
    "Role": "arn:aws:iam::123456789012:role/lambda-role",
    "Handler": "function.handler",
    "Runtime": "nodejs12.x",
    ...
}
The ARN is handy when we wish to reference a specific version of the function. The following example shows a qualified ARN for a Lambda function; once published, the code behind this version cannot be changed.
arn:aws:lambda:aws-region:acct-id:function:demoFunction:42
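To make the structure of a qualified ARN concrete, here is a small illustrative helper (not part of any AWS SDK) that splits one into its components. The version suffix after the function name is what makes the ARN qualified.

```javascript
// Illustrative only: parse a Lambda ARN of the form
// arn:aws:lambda:region:account-id:function:function-name[:version]
function parseLambdaArn(arn) {
  const parts = arn.split(":");
  return {
    region: parts[3],
    accountId: parts[4],
    functionName: parts[6],
    // An unqualified ARN has no version suffix and points at $LATEST.
    version: parts.length > 7 ? parts[7] : "$LATEST",
  };
}

const parsed = parseLambdaArn("arn:aws:lambda:aws-region:acct-id:function:demoFunction:42");
console.log(parsed.version); // "42"
```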
The AWS management console offers a GUI that allows you to test your Lambda functions easily. You can provide the function with relevant input once you have configured the right trigger event. Then, you will save this test, and AWS will invoke your Lambda function.
You need to select the Configure test event option from the Actions drop-down list. Then, you provide the input in JSON format. For example, the following shows what the input looks like for a Lambda function that multiplies two numbers.
{
    "num1": "1",
    "num2": "5",
    "operand": "multiply"
}
After the function is invoked with this input, the result is shown in the Execution result area.
To access AWS services from a Lambda function, you will need to grant your function a special IAM role called 'execution role.' For example, you can create an execution role so that your function uploads logs to AWS CloudWatch.
Some different AWS-managed policies with varying permissions are described below.
- AWSLambdaDynamoDBExecutionRole: Gives permission to read data from an Amazon DynamoDB stream and write CloudWatch Logs.
- AWSLambdaSQSQueueExecutionRole: Gives permission to read messages from an Amazon SQS queue and write CloudWatch Logs.
- AmazonS3ObjectLambdaExecutionRolePolicy: Grants permission to interact with Amazon S3 Object Lambda and write CloudWatch Logs.
You can use the IAM API to create and manage execution roles. For example, the following command creates a role called demo-lambda-ex through the create-role command, and you provide the policy inline.
aws iam create-role --role-name demo-lambda-ex --assume-role-policy-document '{"Version": "2012-10-17","Statement": [{ "Effect": "Allow", "Principal": {"Service": "lambda.amazonaws.com"}, "Action": "sts:AssumeRole"}]}'
Alternatively, you can provide the policy in a JSON file and pass the file as a parameter to the command.
The function should return the promise object it creates inside its definition. Currently, it does not, so the promise remains pending and its result is lost.
Once we return this promise object, we can also remove the async keyword from the start of the function's definition, since the handler already returns a promise.
Note: Commonly asked AWS Lambda interview question.
A YAML configuration file contains all the information about the services, functions, and resources you want to run. The file describes the exact configuration of all the resources you want to use. In the case of AWS Lambda, the serverless.yml file holds the configuration for the Serverless framework.
This framework provides you with a CLI that makes developing serverless functions easier. You can use the Serverless framework and create independent pieces of code (functions). The framework will organize and take care of their execution. Additionally, it will create resources to run the events you define and configure your function to catch these events.
With serverless.yml, AWS Lambda will know how you want to use the framework, which functions to call, which events to look out for, and much more. A serverless.yml for a demo project is given below for better clarity.
service: demo-service

provider:
  name: aws
  runtime: nodejs6.10
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "logs:CreateLogGroup"
        - "logs:CreateLogStream"
        - "logs:PutLogEvents"
      Resource: "*"

functions:
  demo:
    handler: handler.demo
    events:
      - schedule: rate(1 minute)
    environment:
      demoEnvVariable: "demo works"
The code is trying to update a specific DynamoDB table asynchronously. It first retrieves the table's name and the log stream from the context so it can log a message on successful completion. It then creates a map of the parameters that the update method needs, including the update expression and return values.
It passes the mapper to the update method and creates a promise around it. Then, the function waits for this promise to resolve successfully. Once that happens, it creates a response object that it uses to log its success and returns it to the caller.
You can organize your Lambda functions so that they can follow a complex workflow by creating appropriate step functions. First, you would build modular Lambda functions where each focuses on a certain kind of task. Then, you can create a step function that would invoke each Lambda function as per a given condition.
Step functions handle essential aspects such as coordinating different components with each other and retrying in case of failure. Under the hood, step functions use state machines to monitor the state of tasks. Then, they proceed to the next task (and its corresponding Lambda function) according to failure or success conditions.
The AWS management console offers a GUI for you to create your required state machine with the relevant parameters. Once done, it provides a visual preview of your state machine based on the names of states you have provided. An example state machine preview is shown below.

APEX and Claudia.js are among the most popular tools to package and deploy Lambda functions. APEX works for functions written in all major languages supported by AWS Lambda, whereas Claudia.js works specifically with Node.js.
When working with APEX, we first install the tool on our Linux distribution. Next, we create a directory for our Lambda function with the following commands.
# mkdir workdir && cd workdir
# apex init -r us-east-1
These commands will trigger APEX to handle all the main work, like creating the right IAM role for the function and logging output to CloudWatch.
APEX creates a project.json file with configuration information and a directory to hold your Lambda functions. Once you have placed your Lambda functions, you can use the following commands to deploy and invoke them.
# apex deploy
# apex invoke demoFunction
Working with Claudia.js is broadly similar. First, we have to give the tool the right privileges. Then, we create a working directory and a simple index.js with code like the one shown below.
# vi index.js

console.log('starting function')
exports.handle = function(e, ctx, cb) {
    console.log('processing event: %j', e)
    cb(null, { demoFunction: 'hello world' })
}
We deploy this function with the following command.
# claudia create --region us-east-1 --handler index.handle
We then invoke the function with the following command, and it will return a JSON response to us showing the success status and payload.
# claudia test-lambda
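The following answer refers to a state machine definition. A hypothetical Amazon States Language definition matching that description might look like this; the ARNs, the error name, and the state names other than 'CatchException' are placeholders.

```json
{
    "StartAt": "InvokeDemoFunction",
    "States": {
        "InvokeDemoFunction": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:demoFunction",
            "Retry": [
                {
                    "ErrorEquals": ["States.TaskFailed"],
                    "MaxAttempts": 5
                }
            ],
            "Catch": [
                {
                    "ErrorEquals": ["CustomError"],
                    "Next": "CatchException"
                }
            ],
            "End": true
        },
        "CatchException": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:CatchException",
            "End": true
        }
    }
}
```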
Note: Commonly asked AWS Lambda interview question.
The JSON above specifies which function AWS Lambda has to invoke and how the service should handle errors. The 'Retry' block specifies which kinds of errors should prompt a retry of the invocation, and that no more than five attempts should be made.
The 'Catch' block specifies that the state machine should listen for a specific kind of error. Once it occurs, the 'CatchException' function is called to handle the condition accordingly.