Deployment to the Web via AWS EC2
In this section we’re going to move on to deploying the website we’ve been developing throughout this course onto a web server. Once deployed, your website will be publicly available on the Internet.
There are many ways to approach hosting a site. The simplest is to purchase ‘web hosting’ from a provider; there are many companies whose primary service is web hosting.
1. Understanding Web Hosting
Once we purchase hosting from a web hosting provider, we’re often provided with a Control Panel (CP): a Graphical User Interface (GUI) accessible within the web browser. Using the CP we can add domains and email accounts, upload our site’s source code for hosting (via File Manager, FTP or Git integration), set up Cron Jobs (i.e. scheduled tasks), add databases and so on. It provides everything we need to configure our host with an easy-to-use GUI, achievable in a few clicks. Two popular control panels for consumer hosting management are cPanel and Plesk.
When you purchase hosting from a web hosting provider you will need to choose a package; each package has its own quotas and limitations. Most hosting providers offer shared hosting as one of their more affordable packages. The types of hosting include:
- Shared hosting is where the provider has a dedicated server connected to the Internet, but hosts multiple clients on that single server (hence the term shared). Even though you’re sharing the server, your host will appear completely isolated from everyone else on it.
- Business hosting (depending on the provider) is a similar case to shared hosting, but they may limit the number of hosts they allow onto a server (e.g. 50 users maximum). Additionally, they may provide you with a dedicated IP address to make it appear to public visitors (and search engines) that you’re on a dedicated host, as your IP address is unique and not used by any other site.
- Virtual Private Server (VPS) hosting is where the provider partitions their dedicated server and you are effectively given full access to your partition, including limited server resources (e.g. 2 cores, 128MB of RAM, etc.). The provider may limit the number of VPSs they offer per dedicated server (e.g. 10 users maximum). You’d also be provided a dedicated IP address with VPS hosting, and you may optionally purchase additional IP addresses.
- Dedicated hosting is where you have a full server, with all of its resources reserved for your own site/application. No other user/host is using the resources on this server; it is dedicated completely to your use.
A dedicated server may seem like overkill for hosting a single site, and generally it is; however, depending on how much traffic your site receives and what it offers (e.g. multiple large downloadable files, API services which require computing power, frequent database reads/writes), you may require one. For example, websites such as Microsoft’s and Adobe’s run across multiple dedicated servers, and their downloads may be hosted on a CDN.
Whilst working for a national company with retail stores across the UK, I developed and maintained an application used by customers in every store (over 10k unique users daily). It performed read and write operations against a database every second and was integrated with the POS system. This application was hosted on:
- 3x Dedicated Servers – for the application.
- 2x Dedicated Servers – for the database.
- 1x Load Balancer – to distribute traffic amongst the application servers proportionately.
And even then, on promotional days such as ‘Black Friday’ the infrastructure was still unable to handle demand and required additional computing power. On regular days, however, the setup was adequate, with resources to spare.
2. Cloud Computing
With large-scale applications (such as the one described above, spanning multiple stores across the UK) that cannot meet demand, additional resources need to be provisioned for the server/infrastructure. Provisioning physical hardware and having it installed can take months; thankfully, with the introduction of cloud computing we can make changes within minutes, and sites/applications can automatically scale up with more powerful hardware when necessary and scale down (to save costs) when the resources are no longer required.
Websites such as Google, YouTube, Amazon and Netflix, amongst others, all utilise cloud computing. The most popular cloud service providers, from most to least popular, are:
- Amazon Web Services (AWS)
- Microsoft Azure
- Google Cloud Platform (GCP)
They all provide Infrastructure as a Service (IaaS), which means we don't need to manage our own physical servers or have them provisioned and set up in a rack in a data centre somewhere. We can log in to the cloud console, go through the setup wizard, and in a few minutes have our own server with a dedicated IP address. We no longer need to wait months for servers to be set up; in large organisations in the past I've waited years for Purchase Orders (POs) to be raised, servers to be racked and mounted, and the network to be mapped and configured behind a DNS. Cloud computing allows us to provision new hardware in a few minutes, and we can terminate instances whenever we want. There are no upfront costs; we only pay for the resources we use, as a monthly bill.
For our deployment we’ll be using AWS, and we’re going to spin up a server which runs Linux. Unlike purchasing a package from a hosting provider, there is no control-panel GUI for managing the server itself: although we can provision the hardware via the AWS console, we must configure it via the CLI (we’ll use Git Bash on Windows for this). This route is more complicated, but it gives us complete control of our own hardware.
Personally, however, I prefer to manage servers via the CLI rather than a GUI control panel, as it provides additional capabilities: using Redis to cache data in RAM, generating SSL/TLS certificates with OpenSSL/Certbot, using SSH for git cloning, and much more.
3. Spinning up an AWS EC2 Instance
Servers on AWS are referred to as EC2 instances; EC2 stands for Elastic Compute Cloud. EC2 instances can be provisioned with almost any operating system (OS), and we will need to configure ours to become a web server. AWS also offers a plethora of other services: databases, analytics tools, email, robotics, IoT, AI, satellites and much more, and the range is constantly expanding.
Please visit the AWS page and create a free account to get started with their services. Throughout this section we will only provision hardware which falls under the free tier (on a new account), so you will not be charged.
Once you log in to the AWS console, you’ll be greeted by a dashboard. On the top-right of the console’s header bar, ensure your region is set to “Europe (London) eu-west-2”; this will ensure that all hardware provisioned is in the London/eu-west-2 region. (AWS are always expanding their availability locations too.)
3.1. Provisioning an EC2 instance
- In the AWS console, you’ll see a “Services” drop down on the top-left header bar. Select this and under ‘Compute’ select ‘EC2’. On the new page, select the orange ‘Launch Instance’ button.
- Set the OS to “Amazon Linux 2 AMI (HVM), SSD Volume Type” (64-bit x86). This is a Linux distribution maintained by AWS, based on RHEL/CentOS.
- For the instance type, select ‘t2.micro’ then choose ‘Next’.
- We can ignore all the options on the ‘Configure Instance’ page, so just click ‘Next’.
- On the ‘Add Storage’ page we do not need to add additional storage devices; we’ll accept the default 8GB SSD. Again we can click ‘Next’.
- We can ignore the ‘Add tags’ page, so click ‘Next’ again.
- Now we need to configure the ‘Security Group’. Choose “Create a new security group” and give it an appropriate name and description (e.g. "PublicWebServer"). A rule for SSH is included by default; use the 'Add Rule' button to add further TCP rules for 'HTTP', 'HTTPS' and 'MYSQL/Aurora'.
- Finally hit “Review and Launch” and then “Launch.”
- You’ll be presented with a pop-up regarding a ‘Key pair’ which you will need to have in order to login to the server. Create a new key pair, set the “Key pair name” to your project name (e.g. "exertion", "learning" etc.), download the key-pair and then click “Launch instances.”
In a few minutes your server will be up and ready. If you now go into Services -> EC2 -> Instances (running) in the AWS console, you’ll see your instance. Clicking it will present additional information about your server in the bottom window pane, including an IP address and a public DNS name you can use to reach your server. The DNS name will look something along the lines of “ec2-18-134-59-252.eu-west-2.compute.amazonaws.com”.
Visiting your server in your browser will not return a response yet, as it is not configured to be a web server; that is the next task.
3.2. SSH into your Server
To remotely log in to your server (which is located in London) and configure the instance, you are going to need the command line and SSH. SSH is natively available on unix-based operating systems and also in Git Bash. (You may also use PuTTY on Windows to establish SSH connections, however we won’t be covering that tool.)
To SSH into the server, first open Git Bash on your computer and `cd` into the directory where you saved your ‘Key pair’ .pem file (which you downloaded after launching your instance in the AWS console). We need to change the permissions of the .pem key before we can actually make use of it, so run the following command:
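```bash
# restrict the key so only the owner can read it; ssh refuses keys with looser permissions
chmod 400 key_file.pem
```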
Replace `key_file.pem` with the name of your key pair file. The above command changes the permission level of the key to 400, which is a numeric notation for a unix permission level. There are many other unix permission levels, such as 777, 755, 644 and 444. For a better understanding of this, please see the "Notation of traditional Unix permissions" section under "File-system permissions" on Wikipedia.
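As a quick sanity check, you can list the file to confirm the new permissions (using the example key name from earlier; the metadata shown in the comment is illustrative):

```bash
ls -l exertion.pem
# -r-------- 1 user user 1704 Jan  1 12:00 exertion.pem
# "r--" for the owner and no access for group/others is exactly what 400 denotes
```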
We can now use our key to log in to our EC2 server, using the following command:
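```bash
# ec2-user is the default login user on Amazon Linux 2
ssh -i <key_file.pem> ec2-user@<ip_address>
```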
In the above command, replace `<ip_address>` with your EC2 instance’s IP address and `<key_file.pem>` with the name of your key file. In both cases also remove the angle brackets, so it should look something like:
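```bash
# using the example key name and the IP from the DNS name shown earlier
ssh -i exertion.pem ec2-user@18.134.59.252
```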
On your first login to the server you may be asked to approve/authorise the connection (a host-key fingerprint prompt); type `yes` and press Enter. You will now be logged into your EC2 server, and you must use unix commands to manage the instance, which are effectively the same commands we’ve been using locally to develop with.
The first point of action after logging in is to run the following command:
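```bash
# switch to the root (super)user account
sudo su
```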
This will switch your user account to a “superuser”; in other words, you’ll be provided full root access to edit core files on the server.
When you want to log out of the superuser, enter the following into the CLI:
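```bash
exit
```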
The above will exit you out of the superuser account and back into your normal account. Run the command again and you will be logged out of the EC2 instance and back onto your local machine’s terminal.
3.3. Setting up your Web Server
Now at this stage I’d like you to configure your web server, and I’d like you to perform your own research on what you’ll need to install and how to configure the server. The reason is that I could give you a list of all the commands needed to make it a web server, but you’d gain only minimal experience of the setup process if I gave you all the instructions.
To set you in the right direction, you’re going to need to install:
- PHP (+ PHP modules)
- Apache
- MySQL/MariaDB Server
- Git CLI (for cloning your git repo to the server)
You will find the following resources useful, “Tutorial: Install a LAMP web server on Amazon Linux 2” and “Tutorial: Configure SSL/TLS on Amazon Linux 2”.
To install packages we can use the command `yum install <package_name>` on RedHat (RHEL), CentOS and Amazon Linux 2 (which is based on RHEL/CentOS). Other Linux distributions use different package managers: Debian and Ubuntu offer `apt-get`, while Fedora and newer RHEL releases use `dnf`, the successor to `yum`. Personally I prefer to work with RHEL/CentOS.
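To give a flavour of what the AWS tutorial linked above walks through, the installation on Amazon Linux 2 boils down to something along these lines (the extras topic and package versions change over time, so treat this as a sketch and follow the tutorial for the current names):

```bash
sudo yum update -y
# the LAMP extras topic named in the AWS tutorial at the time of writing
sudo amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2
sudo yum install -y httpd mariadb-server git
# start apache and mariadb now, and have them start on every boot
sudo systemctl enable --now httpd mariadb
```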
3.4. Configuring Virtual Hosts
By default apache will serve any content stored in `/var/www/html` for your website. You can opt to use this; however, in production we should really set up our own Virtual Hosts (or vhosts), which define the domains, TLS certificates, the document root and where any error logs should be stored.
Any changes to your vhosts will require you to restart apache (not the server). Depending on your server setup, you should be able to restart apache by running the following command:
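```bash
# on Amazon Linux 2, the apache service is named httpd
sudo systemctl restart httpd
```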
The command may vary depending on the way you set up apache on your server.
You should store all your vhosts in the `/etc/httpd/conf.d` directory, and you should create a new vhost for each domain you host. For example, for a site called “AutoParts” you may appropriately create a vhost file called “autoparts.conf” and the contents of the file may look something like:
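```apache
# /etc/httpd/conf.d/autoparts.conf - a minimal illustrative vhost
<VirtualHost *:80>
    ServerName autoparts.com
    ServerAlias www.autoparts.com
    DocumentRoot /var/www/autoparts
    ErrorLog /var/log/httpd/autoparts-error.log
    CustomLog /var/log/httpd/autoparts-access.log combined
</VirtualHost>
```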
The above vhost serves the domain name “autoparts.com” and loads content saved in the “/var/www/autoparts” directory.
3.5. Deploying your Site
Once you have provisioned the server and made it able to serve content over the web, I’d now like you to finally deploy your site onto your server. You can do this by cloning your git repository directly onto the server, as sketched below. I’d also like you to make use of the database we set up on your server and connect it to your developed website.
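A minimal sketch of the clone step, reusing the example vhost from earlier (the repository URL is a placeholder for your own):

```bash
# clone the site into the vhost's document root (placeholder repo URL)
sudo git clone https://github.com/<your-user>/<your-repo>.git /var/www/autoparts
# on Amazon Linux 2, apache runs as the "apache" user, so hand it ownership
sudo chown -R apache:apache /var/www/autoparts
```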
You can also remotely connect to your MySQL/MariaDB server using a database client such as HeidiSQL, as we opened the MySQL port in the AWS security group. Your MySQL installation may still have the default credentials, which is insecure; as we are only experimenting with AWS purely for education, it is worth terminating your instance once you're done (or you may change your security group/config to secure your server if you want to keep it online).