Create your first Serverless GraphQL API with AWS Lambda and MySQL Database

Introduction

Before Serverless computing, businesses that owned web applications had to own the physical hardware and software licenses required to run servers. This is a complex and costly arrangement that requires hiring specialized staff, long deployment times, and budget allocation for updating and upgrading infrastructure resources.

With Serverless computing, companies build and run applications without thinking about servers. Infrastructure management is handled by the cloud provider (e.g. Amazon, Microsoft, Google, IBM) so companies can focus solely on writing the code that serves their customers. Serverless services bring many benefits: faster deployment, better cost utilization, automatic scaling based on user demand, easier application development, and more. You can read more about the Serverless services provided by AWS on their website.

In this tutorial, we will use Amazon AWS services, and therefore all the code we write will be deployed to the AWS cloud.

We are going to build a Serverless GraphQL API application on AWS. The illustrations and discussions are thorough, with plenty of detail to help new learners find all the information they need.

The example problem used in this tutorial is originally taken from this article. We will build a simple application to read and create user records, including Twitter posts. It will not be a complete application, as the purpose of this tutorial is to demonstrate basic concepts and show you how to implement them.

The complete code of the example discussed in this tutorial can be downloaded from this repository.

This tutorial assumes a basic understanding of Node.js and how to install JavaScript packages, as well as basic knowledge of JavaScript programming and SQL.

Keywords

Serverless, AWS Lambda, MySQL, Amazon RDS, AWS API Gateway, AWS S3, AWS CloudFormation, AWS IAM, AWS Free Tier, AWS Region, GraphQL, GraphQL Playground, Node.js, YAML, Mutation, Query, Schema, Resolver, Deploy, graphql-yoga, UUID, serverless-offline, serverless-mysql, MySQL Workbench.


Learning Goals

Let’s set the learning goals of this tutorial to give it a clear scope, which in turn will help clarify what we should specifically learn. I came up with 18 learning objectives to direct the planning of my write-up:

The Roadmap

So, how will the topics in this tutorial be presented, and in what sequence? I thought it would be a good idea to briefly outline the roadmap used to direct the writing of this tutorial before we start.

I will start by shedding some light on some standard AWS terminology that we will use across the tutorial. Here, I am also including some links for further reading. Then, I will go through the initial setup needed to be able to start developing the application. This includes installing essential software and creating a free account on AWS.

Then, I will take you on a step-by-step journey to develop and deploy your application. In the first step, we are going to deploy a default Serverless Function on AWS using the Serverless Framework. With this step, I will introduce you to the Serverless Framework and show how to verify that you can deploy a service from your client to your AWS account with no errors.

Step 2 is twofold. We will start working with GraphQL to define our Schema, and at the same time I will suggest a file structure pattern for organizing the folders and files used by the GraphQL Server. In the third step, we will install the rest of the dependencies required by the application into our root directory.

In the fourth step, we will create our GraphQL Server with the Lambda Function. We will also create the local MySQL Database in order to run our application locally for the purpose of testing. In the last step, we will add the Amazon MySQL Database Instance resource to our Serverless Framework Template and deploy our application on AWS.

The last section is a bonus. I will show you how to connect your SQL client to the Amazon MySQL Database remotely so that you can access it directly from anywhere.

To make it more convenient, I am including the diagram below to summarize the whole deployment process that we will go through in this tutorial.

Serverless AWS Architecture Deployment

So, let’s get started…

AWS Terminology

I believe it would be a good idea to highlight the main terms related to AWS services before we start implementing our project. Hence, in this section I am listing some of the terms that will be used throughout the tutorial. I am mostly quoting the definitions from the AWS website itself and including the corresponding links for further reading, which I highly recommend.

  1. AWS Lambda: “a compute service that lets you run code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second”. The application that we are going to develop is actually an AWS Lambda Function.
  2. Amazon S3: Amazon Simple Storage Service provides “a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web”. “Buckets are the fundamental containers in Amazon S3 for data storage”. The scope of this tutorial does not include explicit creation and management of an AWS S3 Bucket. However, one is created indirectly by our deployment tool (the Serverless Framework) to store the uploaded Zip file containing our Function’s code prior to deploying it into the AWS Stack.
  3. AWS CloudFormation: “AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS”. We will not deal with this service directly, as the Serverless Framework will generate the AWS CloudFormation Template for us and upload it to AWS.
  4. AWS CloudFormation Stack: “a collection of AWS resources that you can manage as a single unit”. In the end, this is what will actually be created on AWS after deployment.
  5. AWS IAM: “AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources”. You need to create an AWS IAM account to be used by your Serverless tool to access your AWS account.
  6. Amazon API Gateway: “an AWS service for creating, publishing, maintaining, monitoring, and securing REST, HTTP, and WebSocket APIs at any scale. API developers can create APIs that access AWS or other web services, as well as data stored in the AWS Cloud”. In our application, we will create an API Gateway to receive HTTP requests from the user’s client and route them to the corresponding Lambda Function.
  7. Amazon RDS: “Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks”. Amazon RDS supports DB instances running different engine options, including MySQL, which is the database engine we are using for our application.
  8. AWS Region: “a physical location around the world where Amazon clusters data centers”. “Each AWS Region consists of multiple, isolated, and physically separate AZs within a geographic area”. We will use the default region.
  9. AWS Free Tier: a free offer from Amazon AWS that enables users to build and try things on their cloud at no cost. You need to have a valid credit card on your account, although nothing will be charged if you keep things under control. You can read about this offer on their website.

Technical knowledge about AWS services is essential to understand how things work during the deployment process. This is why I keep pointing to different online resources throughout the tutorial, as demonstrated by the many hyperlinks included in the text. Please don’t be overwhelmed by them; hopefully you will come to appreciate them with time!

Next, we will do the initial setup to prepare our development environment…

Initial Setup

Before we start coding, let’s prepare our environment by installing the required software and creating the AWS accounts needed to develop and deploy our application. Later, we will add additional JavaScript packages. The following are needed initially:

  • Source-Code Editor: I am using VS Code to develop my code. The advantages it provides are countless, and it is free and open source. It also comes with an integrated terminal, which we will use frequently throughout the tutorial to run specific commands. You are free to use any editor you are comfortable with, but in this tutorial I am assuming that you are using VS Code.
  • Node.js: This is a JavaScript runtime that we need to be able to write and run JavaScript applications on the server. It also provides the Node Package Manager (NPM), which enables us to install and manage the JavaScript packages needed for our application in an easy way. Any front-end application developer should be familiar with it. Download and install Node.js on your machine from their website.
  • Serverless Framework: enables us to develop, deploy, troubleshoot and secure serverless applications efficiently. Allocate some time to learn more about the Serverless Framework, starting with its official documentation. In this tutorial, we will cover some deployment basics using the Serverless Framework syntax. Install Serverless globally on your machine using the command npm install -g serverless.
  • AWS account: If you do not have an AWS account yet, go to the AWS website and create one. You need to provide valid credit card details to be able to create the account, although nothing will be charged (until it is out of your control 😆).
  • AWS IAM account: AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. For the first time, we need to configure Serverless with the AWS IAM credentials to grant access to the AWS account from the client (your machine). You can read about AWS IAM in the AWS documentation. An example command to configure the default Serverless profile with the AWS IAM credentials is serverless config credentials --provider aws --key 123 --secret 456. Replace the key and secret values with the real ones for the AWS IAM account you just created. More details can be found in the Serverless credentials documentation.

Once all the above are in place, we are ready to start developing and deploying our application. Let’s continue…

Step 1: Create the Default Serverless Service

In this step, we will create a new service in our working directory based on the aws-nodejs template. First, create your project directory and then use your terminal to run the command below from inside your empty directory.

sls create --template aws-nodejs --name serverless-aws-graphql-mysql

In the command above we are using the serverless sls command to generate scaffolding for a service with AWS as the provider and Node.js as the runtime. I picked the name “serverless-aws-graphql-mysql” for the service, but you can choose whatever name you want.

After running the command, we should have our scaffolding generated in the current working directory. Refer to the Serverless documentation for more details on creating new services from a Serverless Framework Template.

After executing the command, you should have three files created for you as demonstrated in the image below (Let’s just ignore the .npmignore file).

Generated files after executing the Serverless Template create command

What we are actually trying to do here is to create a Serverless Service. In the context of Serverless Framework, a service is like a project. It’s where you define your AWS Lambda Functions, the events that trigger them and any AWS infrastructure resources they require, all in a file called serverless.yml.

serverless.yml is a file that uses the YAML format, a human-friendly data serialization standard (read about it and get yourself familiar with the YAML syntax). When you create your initial service, a basic file is generated for you, which includes a definition of an AWS Lambda Function named “hello” that is ready to be deployed. You can read more about serverless.yml, including the complete list of its properties for the AWS provider, in the Serverless documentation.

The handler.js file contains your function code. The function definition in serverless.yml points to this file and its exported functions, as will be demonstrated later.
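
For reference, the generated handler.js exports a single async hello function that returns a canned response. It looks roughly like the sketch below (the exact message and formatting vary between Framework versions):

"use strict";

// Default handler generated by the aws-nodejs template (approximate).
module.exports.hello = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: "Go Serverless v1.0! Your function executed successfully!",
        input: event,
      },
      null,
      2
    ),
  };
};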

Now let’s deploy our service with the default “hello” function to AWS Lambda using the default settings in serverless.yml. This is only to confirm that we can deploy to our AWS account with no issues. Run the command below:

sls deploy
The output of a successful Serverless deploy command in the terminal

This is the simplest deployment usage possible. Here, Serverless deployed our default service to the defined provider (AWS) in the default stage (dev) and the default region (us-east-1). You can quickly verify the creation of the Lambda function by accessing the AWS Lambda Console, as demonstrated in the figure below.

Listing the “hello” lambda function in AWS Console after deploying the service from the Client

The sls deploy command deploys your entire service via CloudFormation. Run this command when you have made infrastructure changes (i.e., you edited serverless.yml). Use serverless deploy function -f myFunction when you have made code changes and you want to quickly upload your updated code to AWS Lambda or just change function configuration.

It is also good practice to list information about your deployments. You can either list all available deployments in the AWS S3 deployment bucket by running sls deploy list, or list the deployed functions by running sls deploy list functions. This information is also useful when rolling back a deployment or function via the serverless rollback command.

Using the deploy list Serverless command to verify available deployment files and functions

Now, as a test, let’s invoke the deployed “hello” function using the command below and verify that it returns the default response coded in the file handler.js:

sls invoke -f hello

Below is a snapshot of the received response from AWS Lambda in the terminal window, which shows that our invoke command was successful.

Response from AWS Lambda after invoking the function “hello”

Did you notice the .serverless folder created in your project directory after running the deploy command?

So, how does it all work?

Let’s stop at this point and try to understand what happened. A description of how deployment works is available in the Serverless documentation. Below are the activities that run when executing the deploy command:

  • An AWS CloudFormation template is created from your serverless.yml. Any IAM Roles, Functions, Events and Resources are added to the AWS CloudFormation template. In our case at this point, the template defines one Function and only one resource, namely the S3 Bucket that will store the Zip file of your Function code. Later in this tutorial, we will add more resources to be created as part of the AWS CloudFormation Stack.
  • A Zip file for your Function code is created to be uploaded into AWS.
  • The CloudFormation Stack is created on AWS with the new CloudFormation template.
  • The Zip file of your Function’s code and the AWS CloudFormation Template are uploaded to your AWS Code S3 Bucket.
  • The defined resources (The Stack) are created using the definitions in the CloudFormation Template.
  • The AWS Lambda Function is created from the uploaded Zip file.

Note that on subsequent deployments, Serverless fetches the hashes of all files from the previous deployment (if any) and compares them against the hashes of the local files. Serverless terminates the deployment process if all file hashes are the same (i.e., there are no updates). Also, each deployment publishes a new version of each Function in your service, which helps you roll back to any previous deployment version when needed.

The concept detailed above is referred to as “Infrastructure as Code”. What the Serverless Framework used here is AWS CloudFormation, which “gives you an easy way to model a collection of related AWS and third-party resources, provision them quickly and consistently, and manage them throughout their life cycles, by treating infrastructure as code”. You can read about it on the official AWS website.

AWS CloudFormation — How it Works?

Congratulations! You have just deployed and run your first Function on AWS and gained a basic understanding of the Serverless deployment process. Let’s proceed to Step 2…

Step 2: Define the GraphQL Schema

The main objective of this section is twofold. We will create our application’s GraphQL Schema, and at the same time I will suggest a file structure pattern for organizing the folders and files used by the GraphQL Server.

First, let’s create the empty folders and blank files in an organized manner. In your project, create the schema file in the root directory and name it schema.gql. You can choose any name you want, but I prefer this one because it is simple and self-explanatory.

Next, create the folders and the empty files inside them, which will collectively hold the API code related to sending and receiving data. What is suggested below is not a must, but it is always a good idea to organize your files in a well-defined, meaningful folder structure. The purpose of each folder and file will become clearer as we move forward.

Do the following:

  1. Create the Resolvers folder in the project root directory.
  2. Inside the Resolvers folder, create three sub-folders with the names: Common, Mutations, Queries.
  3. Inside the Common folder, create an empty Javascript file and name it mysql_common.js.
  4. Inside the Mutations folder, create an empty Javascript file named mysql_createUser.js.
  5. Inside the Queries folder, create an empty Javascript file named mysql_getUser.js.

The below image shows our project directory listing at this point.

Project Directory Listing after Step 2

Define the GraphQL Schema

For GraphQL applications, it all starts with defining the GraphQL Schema. So, let’s define the schema of our GraphQL application using the GraphQL type system syntax. You can refer to the GraphQL documentation to learn more about how to write schemas.

Add the first two items to the schema: the User and Post object types. Object types are the most basic components of a Schema; they define the kind of objects you can fetch from the GraphQL service and what fields each object has. Here, we are defining an object type that represents a user who can have multiple posts, and a second object type that represents a post. In the GraphQL Schema language, we can represent the two types as follows:

type User {
  UUID: String
  Name: String
  Posts: [Post]
}

type Post {
  UUID: String
  Text: String
}

The above definitions show what the User and Post objects look like. Later, we’ll work on storing and retrieving these objects to and from a database.

As can be seen, the User object has three fields: a unique identifier, the user’s Name, and the user’s posts stored in an array. The Post object has a unique identifier and the post text. Let’s continue by defining the Mutations and Queries for our API (for our example, we will have only one Mutation and one Query). Every GraphQL service has a Query type and may or may not have a Mutation type.

Now, before we define our Query and Mutation, let’s create two Input types for them. Although we could use the User and Post types directly in queries and mutations, it is considered good practice to create Input types instead, to enhance readability and simplicity. You can read more about Input types in the GraphQL documentation. Add two Input types, one for users and one for posts:

input UserInput {
  Name: String
  Posts: [PostInput]
}

input PostInput {
  Text: String
}

Queries define the data-fetching operations exposed by our GraphQL API. So, let’s define a Query type with one query, namely mysql_getUser. The query accepts the user’s UUID as a String (guess what the ! character means 😬) and returns the User object, which contains the UUID as a String, the user’s Name as a String, and an array containing all of the user’s previously created posts. Add the lines below to your schema.gql file.

type Query {
  mysql_getUser(uuid: String!): User
}

We also want to provide a way to create users in our application. Mutations represent the operations used by the GraphQL API to modify the data stored in the database, either by updating existing records or by adding new ones. For our purpose, we will define a Mutation type that has only the mysql_createUser Mutation. The mutation accepts a UserInput input and returns a User object:

type Mutation {
  mysql_createUser(input: UserInput!): User
}

Finally, define the schema object and link it to the Query and Mutation types we already defined by adding the below lines:

schema {
  query: Query
  mutation: Mutation
}

Now we have the full description of our GraphQL API! You can see the complete file in the accompanying repository.
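
If you want to catch typos in schema.gql early, you can parse it with the graphql package (a dependency pulled in by graphql-yoga). This is an optional sanity check of my own, not part of the tutorial’s required code:

// Optional: validate schema.gql before wiring it into the server.
const fs = require("fs");
const { buildSchema } = require("graphql");

const sdl = fs.readFileSync("./schema.gql", "utf-8");
buildSchema(sdl); // throws a descriptive error if the schema contains a syntax or type mistake
console.log("schema.gql is valid");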

Before you proceed, install the dependencies required by our application following the next step.

Step 3: Installing the Dependencies

Our application requires some dependencies to be installed. First, make sure that you did the initial setup described earlier, then run the commands below to install the JavaScript packages we need for our Node application:

  1. npm i graphql-yoga
    This command installs the GraphQL Server that we need to process all API requests. Although there are many GraphQL Servers that could be used, for the purpose of this tutorial I suggest graphql-yoga for its ease of setup, performance, and good developer experience, especially for beginners. Other platforms can be more useful for your future projects! After installation, notice that a folder named node_modules is created; it contains all the dependencies needed by graphql-yoga. Other dependencies will be added to this folder once the next packages are installed. You should not bother with this folder, as you will never have to interact with it directly; NPM manages everything for you.
  2. npm i serverless-mysql
    This command installs a Serverless-friendly wrapper around the popular Node.js MySQL driver. This module enables us to manage MySQL connections at serverless scale.
  3. npm i uuid
    This command installs the package that we will use to randomly generate UUIDs in our application. As described earlier, a UUID will be associated with each User and each Post.
  4. npm i serverless-offline --save-dev
    This command installs the plugin that emulates AWS Lambda and API Gateway on your local machine to speed up your development cycles. The --save-dev flag adds the package to the development dependencies of your project. After running the command, open your project’s serverless.yml file and add the following entry to the plugins section: serverless-offline. If there is no plugins section, you will need to add it to the file. Note that the plugins section for serverless-offline must be at the root level of serverless.yml. It should look something like this:
plugins:
- serverless-offline

You can check whether you have successfully installed the plugin by running serverless --verbose. The console should display Offline as one of the plugins now available in your Serverless project.

If the plugin does not appear, run npm install from your terminal to make sure every package listed in package.json is installed, then check again.

Okay, what is next? In the next step, we will write the code for the GraphQL Resolvers and test the application locally on our machine prior to deploying our application on AWS cloud. This will require installing a MySQL Server and creating a database on our machine. Let’s continue…

While developing your application, keep good design principles in mind. By preparing our application to be tested offline, we are supporting testability.

Step 4: Create the Local MySQL Database and run the Application Locally

Before we proceed with creating our Amazon RDS resource (i.e. Amazon MySQL Instance) on AWS, we will test the application locally using a local MySQL database. This is possible by using the serverless-offline plugin we previously added to our project.

First, let’s create our local database. Install MySQL on your machine using the installer available on the official MySQL website. I am using the MySQL server installed as part of a Windows web development environment; you can do the same or go with other options. Also, make sure to install MySQL Workbench, as we will use it to access our database for verification.

Open MySQL Workbench and create a new MySQL connection following the standard steps, then create a local MySQL database named graphql_db_local1 (or choose any meaningful name) using that connection. The default username in this case is root, with no password required to connect to the created database.

Not having a password in this case is okay since our database is local on our machine. However, for the AWS MySQL Instance, connecting to the database will be a different story as you will see in the next section.

We need to keep our configuration information, which includes passwords and keys, separate. Rather than hard-coding these secrets in the different code files, they can be read from configuration files. This is a recommended good practice, so let’s implement it.

Create the Config folder in the project root directory. Create two configuration files inside the folder: config.dev.yml that will contain the AWS cloud configurations and config.offline.yml that will contain the local configurations to connect to the local database. We will keep the config.dev.yml empty for now.

If you are using Git, remember to exclude the Config folder from being committed to GitHub by adding a /Config line to your .gitignore file.

Add the configurations below in the config.offline.yml file:

MYSQL_DB_NAME: graphql_db_local1
MYSQL_DB_USERNAME: root
MYSQL_DB_PASSWORD:
MYSQL_DB_HOST: 127.0.0.1
MYSQL_DB_PORT: 3306
NODE_ENV: dev
REGION: us-east-1

Open your serverless.yml file and add a new section called custom just before the provider section and add the line that reads from the configuration file as follows:

custom:
  #################################################################
  ##                  Cloud Deployment Config                    ##
  #################################################################
  ## run sls deploy --stage prod to select the config.prod.yml
  ## run sls deploy --stage dev to select the config.dev.yml
  #secrets: ${file(./Config/config.${opt:stage,self:provider.stage, 'dev'}.yml)}
  #################################################################
  ##                      Local Deployment                       ##
  #################################################################
  secrets: ${file(./Config/config.offline.yml)}

In the example above, we define a secrets variable that represents the configuration object read from the corresponding configuration file. Variables allow us to dynamically replace configuration values in serverless.yml. You can read more about variables in the Serverless documentation.

Later, when we deploy to AWS, we will have the option to deploy either to the production or the development (default) environment. For now, we comment out the line that reads from the cloud configuration file and let the secrets variable read from the local configuration file. I hope the concept of isolating configurations used here is clear to you.

Add the region and environment configurations inside the provider section in serverless.yml file as shown below:

region: ${self:custom.secrets.REGION}
environment:
  NODE_ENV: ${self:custom.secrets.NODE_ENV}
  #mysql
  MYSQL_DB_NAME: ${self:custom.secrets.MYSQL_DB_NAME}
  MYSQL_DB_USERNAME: ${self:custom.secrets.MYSQL_DB_USERNAME}
  MYSQL_DB_PASSWORD: ${self:custom.secrets.MYSQL_DB_PASSWORD}
  MYSQL_DB_HOST: ${self:custom.secrets.MYSQL_DB_HOST}
  MYSQL_DB_PORT: ${self:custom.secrets.MYSQL_DB_PORT}

Basic knowledge of SQL syntax is required to understand the SQL queries used in the following examples. Plenty of good resources can be found online.

mysql_common.js

Now let’s add the required code for our GraphQL Resolvers. First add the code below in the /Resolvers/Common/mysql_common.js file:

// Creates the two tables on first use if they do not already exist.
exports.init = async (client) => {
  await client.query(`
    CREATE TABLE IF NOT EXISTS users
    (
      id      INT UNSIGNED NOT NULL AUTO_INCREMENT,
      created DATETIME DEFAULT CURRENT_TIMESTAMP,
      uuid    CHAR(36) NOT NULL,
      name    VARCHAR(100) NOT NULL,
      PRIMARY KEY (id)
    );
  `);
  await client.query(`
    CREATE TABLE IF NOT EXISTS posts
    (
      id      INT UNSIGNED NOT NULL AUTO_INCREMENT,
      created DATETIME DEFAULT CURRENT_TIMESTAMP,
      uuid    CHAR(36) NOT NULL,
      text    VARCHAR(100) NOT NULL,
      user_id INT UNSIGNED NOT NULL, -- same type as users.id
      PRIMARY KEY (id)
    );
  `);
};

// Fetches a user (and the user's posts) by uuid; returns null if the user does not exist.
exports.getUser = async (client, uuid) => {
  var user = {};
  var userFromDb = await client.query(
    `select id, uuid, name from users where uuid = ?`,
    [uuid]
  );
  if (userFromDb.length == 0) {
    return null;
  }
  var postsFromDb = await client.query(
    `select uuid, text from posts where user_id = ?`,
    [userFromDb[0].id]
  );
  user.UUID = userFromDb[0].uuid;
  user.Name = userFromDb[0].name;
  if (postsFromDb.length > 0) {
    user.Posts = postsFromDb.map(function (x) {
      return { UUID: x.uuid, Text: x.text };
    });
  }
  return user;
};

This file provides two “export” functions:

  1. The init function simply creates our tables if they do not exist in the database. This happens only the first time we connect to our database. One could choose to create the tables manually using, say, MySQL Workbench. However, having the init function helps during the development phase, when the database may need to be initialized multiple times, which makes it a convenient technique.
  2. The getUser function will be used by both the Query and the Mutation resolvers, as we will see later. It accepts two arguments: client and uuid. The client argument holds the MySQL connection object needed to connect to our MySQL database, and the uuid argument holds the UUID of the user we want to fetch from the database. The function returns the user object, or null if the user does not exist. The user object has three fields: UUID, Name, and the Posts array. A short usage sketch follows this list.
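
Here is a minimal usage sketch of the two helpers, assuming it is run from the project root against the local database created earlier in this step (the uuid value is a placeholder):

// Minimal usage sketch of init and getUser against the local database.
const common = require("./Resolvers/Common/mysql_common");
const client = require("serverless-mysql")({
  config: {
    host: "127.0.0.1",
    database: "graphql_db_local1",
    user: "root",
    password: "",
  },
});

(async () => {
  await common.init(client); // creates the tables if they do not exist yet
  const user = await common.getUser(client, "00000000-0000-0000-0000-000000000000");
  console.log(user); // null for an unknown uuid, otherwise { UUID, Name, Posts }
  client.quit();
})();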

Explanations specific to the JavaScript language are left to the reader. Basic JavaScript tutorials are widely available online.

mysql_getUser.js

Add the code below to the empty mysql_getUser.js file residing in the Queries sub-folder, which represents a GraphQL Query:

var common = require("../Common/mysql_common");
const Client = require("serverless-mysql");

// Resolver for the mysql_getUser query.
exports.func = async (_, { uuid }) => {
  // Build the serverless-mysql connection from the environment variables
  // injected by serverless.yml.
  var client = Client({
    config: {
      host: process.env.MYSQL_DB_HOST,
      database: process.env.MYSQL_DB_NAME,
      user: process.env.MYSQL_DB_USERNAME,
      password: process.env.MYSQL_DB_PASSWORD,
    },
  });
  await common.init(client); // make sure the tables exist
  var resp = await common.getUser(client, uuid);
  client.quit(); // release the database connection
  return resp;
};

This file contains only one function, which accepts the UUID as an argument and forwards it to the getUser function defined in the Common folder to get the user object. Prior to calling getUser, the MySQL connection object, client, is created and passed as the first argument together with the UUID. The init function is also called before getUser to make sure the tables exist in the database before reading from them. One can decide to skip calling init later, once the application is deployed to production and all tables are created in the Amazon MySQL Database.

It is important to close the database link using the serverless-mysql quit method after each SQL query.

mysql_createUser.js

Add the code below to the empty mysql_createUser.js file residing in the Mutations sub-folder, which represents a GraphQL Mutation:

const { v4: uuidv4 } = require("uuid");
var common = require("../Common/mysql_common");
const Client = require("serverless-mysql");

// Resolver for the mysql_createUser mutation.
exports.func = async (_, obj) => {
  var client = Client({
    config: {
      host: process.env.MYSQL_DB_HOST,
      database: process.env.MYSQL_DB_NAME,
      user: process.env.MYSQL_DB_USERNAME,
      password: process.env.MYSQL_DB_PASSWORD,
    },
  });
  await common.init(client);

  // Insert the user first, then each of the posts sent in the request.
  var userUUID = uuidv4();
  let user = await client.query("INSERT INTO users (uuid, name) VALUES(?,?)", [userUUID, obj.input.Name]);
  for (let index = 0; index < obj.input.Posts.length; index++) {
    const element = obj.input.Posts[index];
    await client.query("INSERT INTO posts (uuid, text, user_id) VALUES(?, ?, ?)", [uuidv4(), element.Text, user.insertId]);
  }

  // Read the freshly created user back so the caller gets the full object.
  var resp = await common.getUser(client, userUUID);
  client.quit();
  return resp;
};

The file defines one function that can be called to create a new user in the database. The function returns the new user information as an object. This function should receive the name and the list of posts of the user from the GraphQL request.

Prior to inserting the information in the database, the function uses the uuid library to generate a unique identifier for the user as well as for each post object. Notice that the common getUser function is called to return the created user information object to the caller right after creating the user with the posts sent with the GraphQL HTTP request.

handler.js

Lastly, update the code in the handler.js file as per the below:

"use strict";const { GraphQLServerLambda } = require("graphql-yoga");
var fs = require("fs");
const typeDefs = fs.readFileSync("./schema.gql").toString("utf-8");const resolvers = {
Query: {
mysql_getUser: require("./resolver/Query/mysql_getUser").func,
},
Mutation: {
mysql_createUser: require("./resolver/Mutation/mysql_createUser").func,
},
};
const lambda = new GraphQLServerLambda({
typeDefs,
resolvers,
});
exports.server = lambda.graphqlHandler;
exports.playground = lambda.playgroundHandler;

As mentioned earlier, the handler.js file should contain your function code. The function definition in serverless.yml will point to the handler.js file and its “export” functions (the last two lines in the code above).

The handler file actually creates the GraphQL Server that will be used by our Lambda Function. It uses the defined Resolvers and Schema to instantiate the GraphQL Server for our application. In the code above, two functions are exported. The first is the actual GraphQL handler, the GraphQL API itself, while the second is the Playground handler, which provides the GraphQL IDE that we will use to experiment with and test our GraphQL API.

As you may have noticed, two objects were used to create the GraphQL Server: typeDefs, which represents the Schema itself, and resolvers. More parameters can be used, such as context and middlewares, but for the scope of this tutorial we don’t need them.
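
For illustration only, here is a hedged sketch of what passing a context value would look like; every resolver then receives it as its third argument. The stage key is a made-up example, and nothing in this tutorial relies on it:

// Not used in this tutorial: supplying a context object to the GraphQL Server.
const lambdaWithContext = new GraphQLServerLambda({
  typeDefs,
  resolvers,
  context: { stage: process.env.NODE_ENV }, // available to every resolver as its third argument
});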

Configuring the Serverless Functions & the API Gateway

At this point, our GraphQL API code is ready. However, we need to add the definitions of the two functions to our Serverless Framework Template. We also need to define the resource that routes HTTP requests to the corresponding function. Besides routing requests, the AWS API Gateway triggers our functions whenever an HTTP request is received.

Update the serverless.yml file by adding the functions section using the code below:

functions:
  graphql:
    handler: handler.server
    events:
      - http:
          path: /api
          method: post
          cors: true
  playground:
    handler: handler.playground
    events:
      - http:
          path: /api # a must!
          method: get
          cors: true

In the code above we are configuring two things. The first is the link to the GraphQL Server and Playground functions, which is straightforward. The second is the HTTP endpoints, which need a bit more explanation.

AWS provides ways to define events that trigger specific resources. One example is the event of receiving an HTTP request from a client. For this particular event, we need to create a web API with an HTTP endpoint for our Lambda Function by using Amazon API Gateway. This resource accepts the HTTP request and routes it to the corresponding Lambda Function. For the purpose of this tutorial, we will not secure our API and will keep it open to serve traffic over the internet, since I assume you will tear it down after completing the tutorial. You can read more about the AWS API Gateway service in its documentation.

The Serverless Framework provides a simple AWS API Gateway events syntax for creating HTTP endpoints as event sources for AWS Lambda Functions (check the events property in the code above). More details can be found in the Serverless documentation.

At this point, we are ready to deploy our API Function with all the resources needed so far.

Running the Application Locally

Now let’s see if everything works on our local machine. Make sure that your MySQL server is running and that serverless.yml is reading the config.offline.yml file. Also, use the MySQL Workbench client to list your local database and notice that it does not initially contain any tables. Remember that the init function we discussed before will create the two tables for us upon running the first GraphQL query, since these tables do not initially exist.

Listing our Local Database before our first GraphQL query

Now let’s turn on our Serverless Offline Server by running the command below:

sls offline
Successfully running the offline Serverless Server

If everything goes fine and no errors are reported by Serverless, you should see the output shown above.

After running the command successfully in the terminal, open your browser and enter the URL http://localhost:3000/dev/api in the address bar (this is an HTTP GET operation) to open the GraphQL Playground and start testing your GraphQL Mutation and Query resolvers.
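
The Playground is the easiest way to experiment, but you can also hit the same endpoint programmatically. Below is a small Node.js smoke test of my own that POSTs a GraphQL operation to the local endpoint; it assumes the default serverless-offline port (3000) and stage (dev):

// POST a GraphQL mutation to the local endpoint started by `sls offline`.
const http = require("http");

const body = JSON.stringify({
  query: 'mutation { mysql_createUser(input: { Name: "Test", Posts: [{ Text: "Hello" }] }) { UUID Name } }',
});

const req = http.request(
  "http://localhost:3000/dev/api",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Content-Length": Buffer.byteLength(body),
    },
  },
  (res) => {
    let data = "";
    res.on("data", (chunk) => (data += chunk));
    res.on("end", () => console.log(data)); // { "data": { "mysql_createUser": { ... } } }
  }
);

req.on("error", console.error);
req.write(body);
req.end();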

Run the below Mutation to create the first user and verify that the new records are added in the two tables:

mutation {
  mysql_createUser(
    input: {
      Name: "Shadi"
      Posts: [
        { Text: "Lorem ipsum dolor sit amet." }
        { Text: "Proin consequat mauris." }
      ]
    }
  ) {
    Name
    UUID
  }
}
Running our Mutation locally in GraphQL Playground
Local Database view after running the GraphQL Mutation

Now, run the below Query to get the information of the newly created user. Use a separate Tab for this. The UUID here is copied from the response received after successfully running the above Mutation.

query {
  mysql_getUser(uuid: "efcea128-aee3-4d6f-87fb-008a4fae4dcf") {
    Name
    UUID
  }
}
Running our Query locally in GraphQL Playground

Now that we have run our Lambda Function locally and verified that everything works on our machine, it is time to deploy our API application to AWS.

Let’s move on!

Step 5: Create the Amazon MySQL Database Instance and run the GraphQL Lambda API on the internet

In this section, we will create an Amazon MySQL Database Instance using the db.t2.micro class included in the AWS Free Tier, which is free for 12 months (at the time of writing). The Free Tier covers 750 hours per month of database usage, 20 GB of General Purpose (SSD) database storage, and 20 GB of storage for database backups and DB snapshots. For the purpose of this tutorial, these specifications don’t really matter, and the database instance should be removed after finishing this exercise.

You need to know that the 750 hours per month mentioned above include the time your database instance sits idle. I normally stop it from the AWS RDS Console if I need to pause my work but still want to keep the instance for some time. Amazon will turn it back on after seven days, so you may need to stop it again afterwards 😶!

In our offline exercise, we only needed a MySQL database to read and write data via the API. In contrast, here we will create an Amazon RDS DB instance, which is an isolated database environment running in the cloud. This environment includes your database along with other infrastructure resources. You can read about how to manually create an AWS MySQL Database Instance in the AWS documentation. In our case, all the environment setup will be done for us by AWS CloudFormation, so you don’t need to worry about those details for now.

So, let’s use the Serverless Framework to define our RDS Instance service in a separate file that our serverless.yml file will read.

Do the following:

1. Create a folder in the root directory and name it Resources. This folder will contain all the service YML files used by your API application.

2. Create a new file named mysql_RDS_Instance.yml inside the Resources folder. This file will contain the AWS CloudFormation syntax to create our MySQL resource under the AWS Free Tier package. Add the below lines in the file:

Type: AWS::RDS::DBInstance
Properties:
  DBInstanceIdentifier: ${self:custom.secrets.MYSQL_DB_IDENTIFIER}
  MasterUsername: ${self:custom.secrets.MYSQL_DB_USERNAME}
  MasterUserPassword: ${self:custom.secrets.MYSQL_DB_PASSWORD}
  AllocatedStorage: 20
  DBName: ${self:custom.secrets.MYSQL_DB_NAME}
  DBInstanceClass: db.t2.micro
  Engine: mysql
  EngineVersion: "8.0.20"
  PubliclyAccessible: true

This file will be parsed as if it is part of the serverless.yml file. We are separating files just to make the project directory more organized and more clear.

3. Update the config.dev.yml file with the below:

MYSQL_DB_IDENTIFIER: graphql-db
MYSQL_DB_NAME: graphqlDB
MYSQL_DB_USERNAME: master
MYSQL_DB_PASSWORD: pA$$w0rD321
MYSQL_DB_HOST:
  Fn::GetAtt: [MySqlRDSInstance, Endpoint.Address]
MYSQL_DB_PORT:
  Fn::GetAtt: [MySqlRDSInstance, Endpoint.Port]
NODE_ENV: dev
REGION: us-east-1

Notice that we added a new property named DBInstanceIdentifier to give a name/ID to our database instance. If it is not specified, that is okay; AWS will generate a random identifier for the database instance. In AWS, identifiers must begin with a letter, must contain only ASCII letters, digits, and hyphens, and must not end with a hyphen or contain two consecutive hyphens. Database names accept only alphanumeric characters.

4. Add the resources section to the serverless.yml file as shown below (read about Serverless resources in the documentation):

resources:
  Resources:
    MySqlRDSInstance: ${file(./Resources/mysql_RDS_Instance.yml)}

Also, add the MYSQL_DB_IDENTIFIER environment variable to the other variables we previously defined:

environment:
  NODE_ENV: ${self:custom.secrets.NODE_ENV}
  #MySQL
  MYSQL_DB_IDENTIFIER: ${self:custom.secrets.MYSQL_DB_IDENTIFIER}
  MYSQL_DB_NAME: ${self:custom.secrets.MYSQL_DB_NAME}
  MYSQL_DB_USERNAME: ${self:custom.secrets.MYSQL_DB_USERNAME}
  MYSQL_DB_PASSWORD: ${self:custom.secrets.MYSQL_DB_PASSWORD}
  MYSQL_DB_HOST: ${self:custom.secrets.MYSQL_DB_HOST}
  MYSQL_DB_PORT: ${self:custom.secrets.MYSQL_DB_PORT}

Make sure that you are reading the config.dev.yml configuration file from the serverless.yml file. In the offline exercise, we were reading from the file config.offline.yml.

Now, run the sls deploy command again and verify that the Amazon MySQL Database Instance is created on the AWS cloud.

Please note that it can take several minutes for the new database instance to become available, and only then will your serverless command finish executing. The new database instance appears in the list of database instances on the AWS RDS Console with a status of creating until it is ready for use. Only once the status changes to available can you connect to the database.

Successfully running the Serverless deploy command

After successfully running your deploy command, notice the endpoint URLs returned by AWS. These are the HTTP addresses we can use to query our GraphQL Lambda. Before you try to run any Mutations or Queries, go ahead and visit the AWS RDS Console to verify that the database instance was created with the configuration we set up in our mysql_RDS_Instance.yml file.

You can also get the endpoint URL of the GraphQL API directly from the AWS API Gateway Console.

The below screenshots show that the database instance graphql-db was created. I also included a screenshot showing the configuration of the database instance from the AWS RDS Console just to show the different settings applied.

Database instance creation on AWS after executing the Serverless deploy command
Verifying the configurations of the newly created Database Instance on AWS

It is worth mentioning that users who have access to the Amazon RDS Console will not be able to view the database password there, and this is justifiable! For security reasons, the AWS system is very reluctant to show secrets. If you lose the password, the only way to recover it is to reset it, either manually from the database instance console or via the alternative methods described in the AWS documentation. This is a proper precaution, as there is no way for anybody, even a privileged AWS administrator, to access your passwords. This is normal for cloud services, and we need to embrace it. On the other hand, since we have access to the AWS Lambda Console, we can reveal the password by selecting our Lambda Function, going to the "Configuration" section, and selecting the "Environment variables" tab. The password is stored with the application because the Function needs it to connect to the MySQL database.

Another point worth mentioning relates to managing secrets. As you may have noticed, our database password is hard-coded in our application’s configuration and, at the same time, hidden in the AWS cloud. This arrangement is not ideal in a real-world situation for two reasons. First, the password can be shared with the application code by mistake. Second, updating the password is a manual process that requires not only resetting it in the AWS console but also modifying it in every application that uses the database. Therefore, AWS provides a paid service called AWS Secrets Manager to help you “protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle”. To read more about this service, see the AWS documentation.
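
For illustration only, and not wired into this tutorial’s code, reading credentials at runtime with the AWS SDK would look roughly like the sketch below. The secret name is hypothetical, and the Lambda role would additionally need the secretsmanager:GetSecretValue permission:

// Sketch: fetching database credentials from AWS Secrets Manager (aws-sdk v2).
const AWS = require("aws-sdk");
const secretsManager = new AWS.SecretsManager();

async function getDbCredentials() {
  const result = await secretsManager
    .getSecretValue({ SecretId: "graphql-db-credentials" }) // hypothetical secret name
    .promise();
  return JSON.parse(result.SecretString); // e.g. { username: "...", password: "..." }
}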

Let’s check our Lambda Function in the AWS Console and verify that what we defined in the serverless.yml file is reflected. Check the "Environment variables" and "Triggers" in the "Configuration" section. Also, check the "Versions" section to track the deployment trail.

Lambda Function Environment Variables on AWS
Lambda Function Triggers on AWS
Lambda Function Deployed Versions

At this point, you have managed to create a MySQL Database Instance with Amazon RDS. Now let’s move on and visit our GraphQL Playground page on the AWS cloud using the endpoint URL returned by the sls deploy command.

Try out the Mutation you previously used in the offline exercise to create your first user and the associated posts. The fact that the user information is returned after executing the Mutation indicates that the user records were created in the database after creating the two tables as explained before. Also, run the Query used previously in the offline exercise (use the new UUID) to verify that it is working.

Running our Mutation on the cloud
Running our Query on the Cloud

If you are receiving the responses depicted in the two screenshots above then, congratulations 🥳! You have just implemented your first GraphQL API using AWS API Gateway, AWS Lambda, and an Amazon MySQL Database Instance.

Complete Project Directory Listing

Remember to remove your Serverless deployment once you’ve finished trying out the steps described in this tutorial so that you don’t end up paying for database resources you’re no longer using. You can remove the stack created in this tutorial by running sls remove in the root directory of the project.

At this point, we should conclude the tutorial. However, I would like to add an additional section to teach you how to remotely access the Amazon MySQL Database via a client. Let’s quickly try to do that!

Connect to the Amazon MySQL Database using an SQL Client

Once you verify that the database instance is created and the status on the AWS console changes to available, you can connect to your database using any standard SQL client. For us, we will continue using MySQL Workbench, which we already used in the previous offline exercise.

Create a new connection to connect to the newly created Amazon MySQL Database using the credentials we defined in the config.dev.yml file. The host name (basically the instance endpoint) and port are specified in the AWS console as demonstrated in the image below.

Creating a new connection on MySQL Workbench to connect to our Amazon MySQL Database

When using the Test Connection button, the confirmation window shown below should pop up.

Successful Connection Test

If your test was not successful the first time, refer to the AWS troubleshooting documentation to resolve it. Most probably, you will not be able to connect at first because your Amazon DB instance sits in a private subnet by default, and therefore it cannot be reached from your local machine. You can resolve this issue by switching to a public subnet.

Be careful! We configured our DB instance to be publicly accessible, and hence it will have an associated public address. Using a public subnet makes all the resources on the subnet (including other databases) accessible from the internet. This can compromise security requirements in a real professional scenario, in which AWS Site-to-Site VPN is the right alternative. You can read about this paid AWS service on the AWS website.

You can switch to a public subnet by adding new inbound rules to your VPC Security Group in the database console (see the AWS documentation to learn more). The image below shows the inbound rules for the VPC Security Group in my AWS Console.

VPC Security Group Added Inbound Rules to connect from Anywhere
Connecting to Amazon MySQL database via the Workbench Client

Congratulations! You can now connect to and manage your AWS MySQL Database from anywhere using your SQL client. Always remember that this approach is not recommended for production environments, where access security rules should be stricter, as explained above.

Conclusion

Developing and deploying a Serverless GraphQL API is a straightforward process, but it requires a certain knowledge base to understand how everything works together. In this tutorial, we went on a journey through the steps needed to successfully deploy a Serverless GraphQL API application on AWS. The level covered here is basic and should help anyone who wants to learn about the topic for the first time.

The next tutorial in the pipeline will build on the current one to add the security layer for the application to control and manage Authentication and Authorization. Please follow me to be notified once the new tutorial is released.

Lastly, I hope that you found some value in this tutorial and that it helped you develop a basic understanding of the process of deploying a Serverless GraphQL API with a MySQL database on AWS.

All the best!

I would appreciate your help in evaluating my writing and providing feedback using this form. This will help me adjust and improve my writing style to make it more engaging and interesting for readers.


I am a Web Development enthusiast, Educator, and Innovator. I hold a Master's degree in Computer Engineering and a PhD in Education.