Introduction

Welcome| 00:04 | Hi! I am Jon Peck and welcome to Up
and Running with Amazon Web Services.
| | 00:08 | In this course, we'll explore how Amazon Web
Services can be used to build applications.
| | 00:14 | I'll start with an explanation of what cloud
computing is, then go over the many products
| | 00:18 | and services within AWS, including Amazon
EC2 and the Simple Notification Service.
| | 00:25 | Throughout the course we'll assemble the services necessary
to power a photo watermarking application in the cloud.
| | 00:32 | We'll cover foundational and application
platform services along with many other products that
| | 00:36 | will help you get rid of the overhead of
setting up and managing cloud services that are at
| | 00:40 | the heart of many applications.
| | 00:42 | Let's get started!
| What you should know| 00:00 | The first chapter of this course describes
what cloud services are, then goes into a survey
| | 00:04 | of what the various
Amazon Web Services can do.
| | 00:08 | For a broader overview of what cloud computing is,
I recommend watching Cloud Computing First
| | 00:13 | Look with David Rivers, here in the
lynda.com online training library.
| | 00:19 | The second and third chapters get a bit deeper
with practical demonstrations of how to set
| | 00:23 | up a web application
using Amazon Web Services.
| | 00:26 | This course doesn't teach programming or
system administration, but if you have a general
| | 00:30 | understanding of some of the high-level
concepts, it'll be especially helpful.
| | 00:34 | I'll be demonstrating some commands on a Linux
server, but as the emphasis is on the ecosystem
| | 00:39 | of Amazon Web Services and not on systems
administration, I'll describe what the commands
| | 00:43 | are doing but not describe
in depth how they work.
| | 00:47 | If you need a more in-depth tutorial on web
server administration, I suggest Up and Running
| | 00:52 | with Linux for PHP Developers here in
the lynda.com online training library.
| | 00:57 | For the second and third chapters, you're going to need
an SSH client for remotely connecting to servers.
| | 01:03 | For Mac and Linux users, SSH is already
installed and available through the terminal.
| | 01:07 | If you are using Windows, the free program PuTTY
can be used to connect, which is available
| | 01:11 | from the official website at greenend.org.uk.
| | 01:16 | Now something to keep in mind is that Amazon
Web Services is a commercial service that
| | 01:20 | costs money to use. You're going to need a
credit card to sign up and they're going to
| | 01:24 | verify your contact information.
| | 01:27 | With that said, there's a free level for new
customers that has a reasonable threshold
| | 01:30 | that is good for this kind of experimentation.
| | 01:34 | This course is designed to consume resources
within that free level, but ultimately you
| | 01:38 | are the one who's responsible for
managing services within your own AWS account.
| Using the exercise files and assembling an image watermarking application| 00:00 | In this course, I'll be describing and demoing
how Amazon Web Services can be used to provide
| | 00:05 | services within an image
watermarking application.
| | 00:09 | As this is not a programming course, I have
written the entire application ahead of time
| | 00:13 | in order to demonstrate the
interaction of these services.
| | 00:17 | You're going to need to do a little bit of configuration
in a couple of files, which I'll walk through
| | 00:21 | step-by-step as I demonstrate
the corresponding service.
| | 00:24 | Don't worry. You'll get a chance to get your hands
dirty when focusing on the AWS management interface.
| | 00:30 | The image watermarking application is very
basic, but it's functional. From a high level
| | 00:35 | the workflow with the application is:
| | 00:37 | the user uploads an image to
the hosted virtual server.
| | 00:41 | The program will validate the uploaded file
to ensure that it's an image, and if it's not,
| | 00:45 | send a notification.
| | 00:47 | If it is valid, I'm going to store the image
using Amazon Web Services Storage and save
| | 00:52 | a record of the image metadata, such as
the height and width, using a database.
| | 00:57 | After the image has been uploaded, we'll
process the image which will add a watermark. This
| | 01:02 | includes updating the stored image that has
the watermark and updating the database to
| | 01:06 | indicate that the image has been watermarked.
| | 01:09 | Finally, we'll show all the watermarked images
that were listed as watermarked in the database.
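That five-step workflow can be sketched in PHP, the language the course application is written in. This is a hypothetical, simplified sketch, not the actual exercise-file code: plain arrays stand in for the AWS storage and database services, and every function name here is an invented placeholder.

```php
<?php
// Hypothetical sketch of the watermarking workflow. Plain PHP arrays stand
// in for the AWS pieces: the real course application uses S3 for storage,
// a database for metadata, and a notification service for alerts.

$storage  = [];  // stands in for hosted object storage
$database = [];  // stands in for the metadata database

// Step 1: validate the upload; only images are accepted.
function is_valid_image(string $filename): bool {
    $ext = strtolower(pathinfo($filename, PATHINFO_EXTENSION));
    return in_array($ext, ['jpg', 'jpeg', 'png', 'gif'], true);
}

// Step 2: store the image and record its metadata.
function store_image(array &$storage, array &$database, string $name, string $bytes): void {
    $storage[$name]  = $bytes;
    $database[$name] = ['width' => 640, 'height' => 480, 'watermarked' => false];
}

// Step 3: watermark the stored image and update the database record.
function add_watermark(array &$storage, array &$database, string $name): void {
    $storage[$name] .= ' [watermarked]';
    $database[$name]['watermarked'] = true;
}

// Step 4: list everything the database marks as watermarked.
function list_watermarked(array $database): array {
    return array_keys(array_filter($database, fn($meta) => $meta['watermarked']));
}

if (!is_valid_image('notes.txt')) {
    echo "notification: invalid upload\n";  // the real app notifies an admin
}
store_image($storage, $database, 'photo.jpg', 'rawbytes');
add_watermark($storage, $database, 'photo.jpg');
echo implode(', ', list_watermarked($database)), "\n";  // prints photo.jpg
```

The real application swaps each array out for a service call, which is exactly the modularity the course demonstrates.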
| | 01:15 | The exercise files for this course include
the source code of the image watermarking
| | 01:18 | application, which I wrote in PHP.
| | 01:21 | There are also two files for testing: an image
that will validate and a text
| | 01:26 | file that obviously won't.
| | 01:28 | Without further ado, let's
introduce Amazon Web Services.
1. Introduction to Amazon Web Services (AWS)

Cloud computing: Beyond the buzzwords| 00:00 | In this chapter I'm going to introduce cloud
computing in a practical way, including key
| | 00:05 | terms, and decipher some of the acronyms in order
to describe what exactly Amazon Web Services is.
| | 00:11 | I'll then go into a high-level survey of the
three major tiers of Amazon Web Services,
which are foundation, application platform,
and management and administration.
| | 00:21 | I'll go through options for sourcing
applications, then give a brief overview of the history
| | 00:25 | of Amazon Web Services to give
context about how it's grown.
| | 00:29 | Finally, I'll combine the physical with the
virtual and discuss why location is important.
| | 00:36 | Cloud computing is one of those things that
has been mentioned in shiny literature, repeated
| | 00:39 | at conferences, and used as a
universal solution to all your problems.
| | 00:44 | Is your system too slow? Cloud computing.
| | 00:46 | Is coffee not strong enough? Cloud computing.
| | 00:50 | Before I go any further, I'm going to define
exactly what cloud computing is and describe
| | 00:55 | how it can be used.
| | 00:57 | Cloud computing refers to the use of a service
built from hardware and software resources that's
| | 01:02 | delivered over a network, which
in most cases is the Internet.
| | 01:06 | The hardware resources are the physical
servers and infrastructure, which includes storage,
| | 01:11 | cooling and networking.
| | 01:12 | The software resources provide the services
themselves, which are the consumable product
| | 01:17 | that comes in many different
forms that I'll discuss in a moment.
| | 01:21 | Cloud computing typically has
a number of characteristics.
| | 01:25 | First is the delegation of
physical and management overhead.
| | 01:29 | With cloud computing you can select a service
to use. You don't have to purchase, set up, or
| | 01:33 | maintain the servers and infrastructure. You have
outsourced this to the cloud service provider
| | 01:37 | who has already done this ahead of time.
| | 01:39 | Now cloud computing is both modular and
compartmentalized, which allows for a system to be built out
| | 01:45 | of smaller distinct and
interchangeable components that work together.
| | 01:49 | As a result, these services are highly elastic,
which allows for dynamic allocation of resources;
| | 01:55 | growing and shrinking as needed.
| | 01:57 | An example of elasticity is providing more
databases during peak demand, then removing
| | 02:02 | them as traffic dies down.
| | 02:04 | Additionally, the modularity provides a resilient
infrastructure in case of an individual component failure.
| | 02:11 | In those situations a replacement is
immediately available, which reduces overall downtime.
| | 02:16 | Finally, between the elasticity and the
resilience, there is an illusion of an infinite supply
| | 02:22 | of resources available.
| | 02:24 | There are, of course, practical limitations, but
for all intents and purposes, there is no limit.
| | 02:29 | Keeping these characteristics in mind, cloud
computing can easily be compared to an electrical
| | 02:33 | utility, where the service, which is
electricity, is delivered via a network, the power grid,
| | 02:39 | and the responsibilities are delegated to the
utility, including infrastructure, electricity
| | 02:44 | production and maintenance.
| | 02:46 | Now there are many different types of cloud
computing service models, each of which
| | 02:49 | has been designated with a different
acronym, which can also be used as a buzzword.
| | 02:53 | I'll focus on the four primary service models
that have been recognized by the International
Telecommunications Union, each of which is
found in some form within Amazon Web Services.
| | 03:03 | To help remember these
four, I use the word SNIP.
| | 03:07 | The first letter S is for Software as a Service
or SaaS, which refers to application software
| | 03:13 | installed and operated by cloud
providers and used by clients.
| | 03:18 | The providers manage the infrastructure and
platform and the clients just use the software.
| | 03:22 | For example, both the Google apps and
iCloud from Apple are Software as a Service.
| | 03:28 | The next letter, N, is for Network as a Service,
or NaaS, where network and transport capabilities
| | 03:34 | are provided, traditionally as a
virtual private network and bandwidth.
| | 03:38 | This was introduced as a distinct service
model in 2012, but it's generally not needed by
| | 03:43 | basic cloud users.
| | 03:45 | The third letter I is for Infrastructure as
a Service or IaaS and it's the most basic
| | 03:51 | model where a computer itself is the service.
| | 03:55 | IaaS typically provides a hosted virtualized
machine where a single physical machine emulates
| | 04:00 | multiple computing environments that
behave like individual computers.
| | 04:04 | In short, it means you get to use an entire
computer's resources without needing a physical
| | 04:09 | computer sitting right there on
your desk or in a rack somewhere.
| | 04:12 | A VPS or Virtual Private Server
is a common example of IaaS.
| | 04:17 | And finally P, for Platform as a Service or
PaaS, which is a complete framework that can
| | 04:23 | be used to host
applications written by clients.
| | 04:27 | The platform is typically a web server
solution stack, including the operating system, web,
| | 04:32 | and database server and programming language
execution environments where you can now run
| | 04:37 | something that has been
written in a particular language.
| | 04:39 | And Infrastructure as a Service can be
configured and packaged as a platform.
| | 04:44 | An example of this is a PHP application server.
| | 04:49 | To review: by remotely delivering services
and hiding the complexity and management of
| | 04:54 | the hardware and software resources, cloud
computing can be used to reduce costs and
| | 04:58 | overhead. This allows clients to focus on
developing core products and services, rather
| | 05:04 | than dealing with managing resources.
| | 05:06 | Now something to be aware of when you're
outsourcing the hosting and maintenance of your stuff,
| | 05:11 | you're entrusting the cloud service provider
with your user data, software, and so forth.
| | 05:17 | With that said, cloud service providers, such as
Amazon, have become a ubiquitous and trusted
| | 05:21 | mechanism for securely delivering quality services.
| | 05:24 | So be sure to evaluate your own security and
privacy needs before making a decision about
| | 05:29 | whether or not cloud services are appropriate.
| | 05:32 | As a former systems administrator, I can
distinctly feel the emotional chill of the early-morning
| | 05:37 | notification of a failed server that needed
maintenance, which, depending on the severity
| | 05:42 | of the problem, could mean a drive
out to the datacenter in the middle of the night.
| | 05:46 | Being able to delegate that kind of responsibility
to a service and being able to automatically
| | 05:51 | deploy a replacement, that's the kind of
thing that makes cloud computing desirable.
| | 05:55 | I have gone over these high-level concepts,
buzzwords, and definitions of just what
| | 06:00 | cloud computing is in order to give you a
foundation of understanding that will help
| | 06:04 | demystify Amazon Web Services.
| What is AWS?| 00:00 | Amazon Web Services is a cloud computing
platform that consists of a collection of web services
| | 00:05 | that have been provided by Amazon.com.
| | 00:09 | There are several dozen distinct services
available in various stages of release. Some
| | 00:13 | are fully released with service level agreements,
some of them are in open beta and others are
| | 00:18 | announced, but are not fully
available to the general public.
| | 00:21 | In this course, I'll focus on services that
have been released and are available for use.
| | 00:26 | Throughout the Amazon Web Services Home pages
there are different charts and lists of services
| | 00:31 | and some of them actually kind of conflict
with one another and aren't completely aligned.
| | 00:35 | I prefer this particular chart found
within the AWS documentation, because it's clear
| | 00:40 | and easy to read.
| | 00:42 | At first glance there's a lot to absorb, so
I'll walk through each product here at a
| | 00:45 | practical high level and
highlight important services.
| | 00:49 | I'll go into greater detail about a number
of these services and demonstrate some of
| | 00:53 | the basic ones as part of the image watermarking
application I'll be assembling later in the course.
| | 00:59 | Starting at the bottom, the proverbial backbone of all
Amazon Web Services is the Global Infrastructure,
| | 01:04 | which refers to the worldwide
geographical distribution of all their systems.
| | 01:10 | There are several hundred Availability Zones,
each of which can practically be thought of as a datacenter.
| | 01:15 | It's a little bit more complex than that, but
I'll give greater detail in a moment.
| | 01:20 | Amazon Web Services has nine distinct
geographic regions where their servers are hosted.
| | 01:25 | Northern Virginia, which is the default, Oregon,
California, Ireland, Singapore, Tokyo, Sydney,
São Paulo and AWS
GovCloud for the US government.
| | 01:37 | In addition to those regions, there are
roughly 40 edge locations, which are places where
| | 01:41 | small and large objects are served from
as part of their content delivery network.
| | 01:46 | In general, it's best to locate services close
to where the primary users are, in order to
| | 01:51 | optimize performance and availability. I'll discuss
locations in greater depth in an upcoming segment.
| Exploring the foundation services| 00:00 | The next tier contains Foundation Services.
| | 00:03 | These services are in four categories:
Compute, Storage, Database and Networking.
| | 00:10 | There are two Compute Services.
| | 00:12 | The first, Amazon Elastic Compute Cloud, or
EC2, provides what are basically virtualized computers, also
| | 00:18 | known as Virtual Private Servers.
| | 00:21 | EC2 is a textbook example of an
Infrastructure as a Service.
| | 00:25 | I'll be using EC2 to host
the watermarking application.
| | 00:28 | An optional companion service called Auto
Scaling allows EC2 instances to be dynamically
| | 00:34 | added or removed in response to
monitored resource utilization.
| | 00:38 | This allows the amount of resources that your
application uses to be scaled up or down based on demand.
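To make the idea concrete, here's a toy PHP sketch of the kind of decision Auto Scaling automates for you. The metric, thresholds, and function name are invented for illustration; the real service is configured with scaling policies, not code like this.

```php
<?php
// Toy scaling decision in the spirit of Auto Scaling: compare a monitored
// metric against thresholds and decide whether to add or remove instances.
// The thresholds and the metric are made up for illustration.

function scaling_decision(float $avgCpuPercent, int $instances): int {
    if ($avgCpuPercent > 80.0) {
        return $instances + 1;           // scale out under heavy load
    }
    if ($avgCpuPercent < 20.0 && $instances > 1) {
        return $instances - 1;           // scale in when mostly idle
    }
    return $instances;                   // otherwise hold steady
}

echo scaling_decision(90.0, 2), "\n";    // prints 3
echo scaling_decision(10.0, 3), "\n";    // prints 2
echo scaling_decision(50.0, 2), "\n";    // prints 2
```

Auto Scaling evaluates rules like these continuously against CloudWatch metrics, so you never write the loop yourself.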
| | 00:45 | The next category, Storage, has a number of
services that leverage Amazon's distributed network.
| | 00:51 | First is the Amazon
Simple Storage Service or S3.
| | 00:54 | In short, it can be used to store and serve
any data in any format in just about any size,
| | 01:01 | anywhere from just 1 byte to 5 TB.
| | 01:04 | S3 is typically used for images, stylesheets,
non-executable files, and other types of static content.
| | 01:11 | It can also be used for
archives and file storage.
| | 01:14 | The Simple Storage Service
does not need EC2 to function.
| | 01:18 | I'm going to be using S3 to store
images in the watermarking application.
| | 01:23 | In contrast, the Amazon Elastic Block Store,
or EBS, provides persistent storage for EC2
| | 01:29 | instances which allows application storage
to be separated from the virtual machine.
| | 01:34 | This is useful if an EC2 instance
experiences a failure and goes down.
| | 01:39 | The Elastic Block Store can be moved to
a different instance to resume service.
| | 01:43 | EBS is typically used for
databases and file systems.
| | 01:47 | Finally, the AWS Storage Gateway is a
service that connects local file servers, such as
| | 01:53 | a Network Attached Storage, Direct Attached
Storage or Storage Area Network
| | 01:56 | to store encrypted files using the S3 service.
| | 02:00 | This provides a secured mechanism for
scalable off-site storage and backups.
| | 02:05 | Next, let's check out the Database Category.
| | 02:09 | The first service, Relational Database Service
is a scalable database featuring automatic
| | 02:14 | patches and backups.
| | 02:16 | It can be used as a compatible replacement for
relational databases like MySQL, Oracle or
| | 02:21 | Microsoft SQL Server.
| | 02:23 | If there's no need for a relational database,
AWS has the DynamoDB service, a scalable NoSQL
| | 02:30 | solution that Amazon claims is the
fastest-growing new service in AWS history.
| | 02:35 | NoSQL is not a relational database format,
which means it doesn't support table joins.
| | 02:41 | The advantage is that it's very, very
fast, distributed, and highly redundant.
| | 02:46 | Use cases for a DynamoDB include
user messages and image metadata.
| | 02:51 | Similar to DynamoDB, the Amazon SimpleDB
service is also a managed NoSQL service.
| | 02:57 | But it's scaled back and more
appropriate for smaller datasets.
| | 03:01 | I'll use SimpleDB as the database
for the watermarking application.
| | 03:05 | The last service listed by Amazon in the
Database category is Amazon ElastiCache which provides
| | 03:10 | scalable in-memory caching, as
opposed to disk-based caching.
| | 03:14 | It's protocol-compliant with memcached,
meaning it can basically be dropped in place.
| | 03:19 | One would use Amazon ElastiCache for
things like caching database lookups.
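That caching pattern, often called cache-aside, can be sketched in a few lines of PHP. A plain array stands in for the memcached-compatible client that would talk to an ElastiCache node, and the helper names are hypothetical.

```php
<?php
// Cache-aside sketch: a PHP array stands in for the memcached client that
// would talk to an ElastiCache node. All names here are illustrative only.

$cache   = [];           // stand-in for an in-memory cache
$queries = 0;            // counts how often the "database" is actually hit

// Pretend database lookup; in a real application this would be a
// relational or NoSQL query.
function db_lookup(string $key, int &$queries): string {
    $queries++;
    return "value-for-$key";
}

// Check the cache first; only fall through to the database on a miss.
function cached_lookup(string $key, array &$cache, int &$queries): string {
    if (!array_key_exists($key, $cache)) {
        $cache[$key] = db_lookup($key, $queries);
    }
    return $cache[$key];
}

cached_lookup('user:42', $cache, $queries);  // miss: hits the database
cached_lookup('user:42', $cache, $queries);  // hit: served from cache
echo $queries, "\n";                         // prints 1
```

With ElastiCache, the array becomes a memcached client pointed at the cache cluster, and the rest of the pattern is unchanged.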
| | 03:25 | The final category in the Foundation
Services is Networking, which provides a number of
| | 03:29 | low-level utilities.
| | 03:31 | The first, Amazon Virtual Private Cloud, or
VPC, allows AWS services to be launched in
| | 03:37 | a virtual network, similar to
managing a private data center.
| | 03:41 | The VPC supports both private and public subnets,
which is useful for setting up services private
| | 03:46 | to an organization or just
protecting a public web server.
| | 03:49 | A hardware VPN service is also available.
| | 03:53 | The next service, Elastic Load Balancing, works in
conjunction with the monitoring service CloudWatch
| | 03:58 | to distribute traffic across EC2 server
instances based on configurable metrics.
| | 04:04 | I'll get into CloudWatch shortly.
| | 04:06 | Elastic Load Balancing can also provide fault
tolerance, directing traffic away from failed servers.
| | 04:11 | I won't get into much greater detail about
the following advanced network services in
| | 04:15 | this course, but it's useful
to know that they're there.
| | 04:19 | Amazon Route 53 is a scalable Domain Name
System web service, providing control
| | 04:24 | of domains and subdomains.
| | 04:26 | Route 53 is heavily distributed across the
global infrastructure network to maximize
| | 04:30 | geographical proximity to
end users and lower latency.
| | 04:34 | Finally, there's AWS Direct Connect, which
provides the opportunity to get a dedicated network
| | 04:39 | connection to AWS within some datacenters,
which is useful for moving very large amounts
| | 04:44 | of data very quickly.
| | 04:46 | That's the end of the foundational services.
| | 04:47 | I understand that that's a
lot to absorb in one sitting,
| | 04:50 | so I will be discussing and demonstrating a
number of them individually in upcoming chapters.
| Reviewing the components within application platform services| 00:00 | The middle tier contains the Application
Platform Services which perform specific functions
| | 00:05 | as part of a greater application.
| | 00:07 | The first category is Content Distribution,
which has just one service, Amazon CloudFront.
| | 00:13 | Amazon CloudFront is a Content Delivery Network
for files of any size ranging from tiny
| | 00:18 | files, like stylesheets and images, to large files
like installers or large media, like movies or audio.
| | 00:24 | Unlike S3, CloudFront serves content from
geographically distributed edge locations,
| | 00:28 | delivering content from locations closer
to users, which increases performance.
| | 00:34 | CloudFront supports both origin-pull and push
mechanisms, meaning it can serve content found
| | 00:39 | on an existing web server or files
can be uploaded to it to serve.
| | 00:43 | CloudFront supports both static, unchanging
content, like logo images, and dynamic content
| | 00:48 | driven by a database that changes at
regular intervals, like news or a blog.
| | 00:53 | The next category, Messaging
contains three acronym-filled services.
| | 00:58 | The Amazon Simple Notification Service or
SNS can be used to push messages via a number
| | 01:04 | of protocols, including HTTP, email and SMS.
| | 01:08 | However, the service requires
verification from the recipient.
| | 01:12 | So it's good for event-driven internal notifications
like warnings, service notifications and status updates.
| | 01:18 | Due to the verification step, it's more
cumbersome to use to send messages to servers.
| | 01:23 | So there is another service
that is tailored for that.
| | 01:26 | I'm going to use SNS to send admin
alerts in the watermarking application.
| | 01:31 | The Amazon Simple Queue Service, or SQS, provides
a mechanism for automating workflow messages
| | 01:37 | between computers.
| | 01:38 | These messages are simple and small.
| | 01:41 | There's a maximum size for each message of 64 KB.
| | 01:44 | The messages are sent, received,
and deleted in batches of 10.
| | 01:48 | I'm going to use SQS to manage
the image watermarking workflow.
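Those limits can be illustrated with a small PHP sketch that prepares batches the way a client might before calling the SQS SendMessageBatch operation. The helper names are invented for illustration, and no real SQS call is made.

```php
<?php
// Sketch of the SQS batching rules mentioned above: at most 10 messages
// per batch, and each message body at most 64 KB. No real SQS call here.

const MAX_MESSAGE_BYTES = 64 * 1024;
const MAX_BATCH_SIZE    = 10;

// Drop oversized messages before batching, as SQS would reject them.
function validate_messages(array $messages): array {
    return array_values(array_filter(
        $messages,
        fn($m) => strlen($m) <= MAX_MESSAGE_BYTES
    ));
}

// Split valid messages into groups of at most 10 for batch sending.
function make_batches(array $messages): array {
    return array_chunk(validate_messages($messages), MAX_BATCH_SIZE);
}

// 25 small messages plus one oversized one.
$messages   = array_fill(0, 25, 'watermark image 123');
$messages[] = str_repeat('x', MAX_MESSAGE_BYTES + 1);

$batches = make_batches($messages);
echo count($batches), "\n";    // prints 3 (batches of 10, 10, and 5)
```

The point is that SQS messages are deliberately small and cheap; anything bigger belongs in S3, with the queue carrying only a reference to it.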
| | 01:52 | Neither SQS nor SNS is good for messaging groups of
users, which brings me to the last messaging service.
| | 01:59 | The Amazon Simple Email Service, or SES, provides
bulk transactional emails such as notifications
| | 02:06 | to users upon events.
| | 02:07 | It can also be used for newsletters
and other sorts of large mailings.
| | 02:12 | SES also provides support for DomainKeys
Identified Mail in association with the domain
| | 02:17 | to improve deliverability and to fight spam.
| | 02:20 | Email can be sent via SMTP for an easy
replacement solution or using an application programming
| | 02:26 | interface or API.
| | 02:28 | The third category, Search, has one service,
Amazon CloudSearch, which, depending on the
| | 02:34 | chart that you look at,
may appear unavailable, but it is.
| | 02:38 | Amazon CloudSearch is a stand-alone
autoscaling fully managed search platform.
| | 02:42 | Boasting near real-time indexing, CloudSearch
is intended to be easier to implement,
| | 02:47 | maintain, and scale than stand-alone
solutions like Apache Solr.
| | 02:52 | Need to do parallel processing? The
Distributed Computing category has a couple of powerful
| | 02:57 | solutions, which I'll
cover at a high level now.
| | 02:59 | In general, though, these are more advanced
topics that I won't get into in this course.
| | 03:03 | Amazon Elastic MapReduce, or EMR, is a hosted
version of the open-source Apache Hadoop framework for
| | 03:09 | data-intensive applications on clustered hardware
running on EC2 and S3. While the name MapReduce
| | 03:16 | can evoke geospatial
analysis and actual mapping,
| | 03:19 | it's really something completely different.
| | 03:21 | In short, Elastic refers to the ability to
scale up and down, and mapping breaks data
| | 03:26 | into smaller chunks.
| | 03:28 | The chunks are processed in parallel then
recombined, or reduced, into a final product
| | 03:34 | that can be downloaded.
| | 03:37 | Elastic MapReduce is designed for working with
huge datasets spanning gigabytes or terabytes.
| | 03:42 | AWS provides options for Workflow Management as well,
starting with the Amazon Simple Workflow Service or SWF.
| | 03:50 | Despite the name, it's not that simple.
| | 03:53 | Acting as a coordination hub for applications,
SWF maintains an application state between
| | 03:58 | its pieces and components.
| | 04:00 | Each workflow execution is tracked, its
progress is logged, and tasks are assigned
| | 04:05 | and dispatched to a particular host.
| | 04:07 | SWF is useful for complex applications with
multiple steps in a distributed workflow.
| | 04:13 | The final components of the Application Platform
Services are the Libraries & Software Development Kits.
| | 04:19 | These interact with Amazon Web
Services, but aren't services themselves.
| | 04:23 | AWS provides Libraries & SDKs
for Java, PHP, Python, Ruby and .NET.
| | 04:31 | While this is not a programming course, I
will be demonstrating small amounts of code.
| | 04:36 | I've selected PHP for examples, because of
its broad user base and readability,
| | 04:40 | and I'll provide all the code in
order to focus on Amazon Web Services.
| Exploring tools for management and administration| 00:00 | The highest tier, Management & Administration, is less
about services and more about controlling services.
| | 00:07 | The first category, Web Interface
contains just one item, the Management Console.
| | 00:13 | The Management Console is the primary
consolidated web interface for managing AWS services.
| | 00:19 | I'll be using this interface
extensively throughout this course.
| | 00:23 | In addition to the Web Interface, there's
also a native mobile application called AWS
| | 00:27 | Console for Android and a series of stand-alone
Command Line Tools that provide alternative
| | 00:31 | and sometimes more direct
management than the Web Interface.
| | 00:35 | The next category, Identity & Access,
contains a hybrid of services and features.
| | 00:40 | I won't be demonstrating items in this category
as Identity Management is a topic unto itself
| | 00:45 | and Billing is self-explanatory.
| | 00:47 | I'll start with the AWS Identity and Access
Management, or IAM, which, I have to admit, is a clever name.
| | 00:55 | IAM provides an identity management and access
control system for managing users and groups.
| | 01:00 | Using identities, access to AWS resources
is granted and denied through permissions.
| | 01:07 | Identity Federation is actually a subset of
IAM, allowing identities from outside sources,
| | 01:12 | such as a corporate directory, to be used to control
access without the need to duplicate identities.
| | 01:18 | Consolidated Billing is less of a
service and more like a feature.
| | 01:21 | It allows multiple AWS accounts
to be billed to a central account.
| | 01:25 | This is useful for organizations that have multiple
individuals or departments with their own accounts.
| | 01:30 | Deployment & Automation provides mechanisms for
managing and scaling groups of services in bulk.
| | 01:36 | In particular, the AWS Elastic Beanstalk is a
scaling Platform as a Service that uses AWS services.
| | 01:43 | Instead of setting up a configuration manually,
I can just grab an off-the-shelf solution
| | 01:47 | stack, and Beanstalk handles provisioning, load
balancing, autoscaling and health monitoring.
| | 01:52 | Supporting applications written in .NET, PHP,
Python, Ruby and other languages, Beanstalk
| | 01:59 | packages and configures existing solutions neatly
while still providing access to tweak settings.
| | 02:04 | If the Beanstalk solutions aren't custom enough,
the AWS CloudFormation system allows custom
| | 02:09 | templates of AWS service configurations for custom
solution stacks using a scriptable interface.
| | 02:15 | The final management component is
Monitoring using Amazon CloudWatch.
| | 02:21 | CloudWatch is one of the few systems that
doesn't require configuration out of the box,
| | 02:24 | as it monitors AWS resources automatically, including
utilization, performance and operational health.
| | 02:32 | CloudWatch is not limited to AWS resources.
| | 02:34 | In particular, custom metrics can be
measured and reacted to, using Put API requests.
| | 02:40 | Individual threshold alarms can be set based on
particular metrics, which can send a notification
| | 02:44 | when something goes particularly wrong.
| | 02:48 | With all the recorded metrics, visual
representation with graphs and statistics is included to
| | 02:52 | allow humans like myself
to visualize the data.
| | 02:55 | Finally, Auto Scaling leverages the CloudWatch
monitoring to allow controlled scaling
| | 03:00 | of EC2 instances based on
whatever conditions are desired.
| Exploring options for sourcing applications and AWS history| 00:00 | Up to this point, all the services and systems
in AWS have been created by Amazon with the
exception of some of the open-source and
proprietary server software on the back end.
| | 00:09 | These combine to provide a mechanism for
hosting and deploying custom applications.
| | 00:13 | The top-tier of Your Applications actually
has a double meaning beyond applications
| | 00:17 | that you write and deploy.
| | 00:19 | The AWS Marketplace offers a wide variety
of commercially packaged software, with
| | 00:24 | solutions ranging from free to thousands
of dollars a year, but often pennies per hour.
| | 00:30 | The software is delivered in two formats.
| | 00:32 | The first is Amazon Machine Images, for
immediate deployment with billing through Amazon Web
| | 00:36 | Services; the second is Software as a Service, where
the seller performs the deployment and hosting
| | 00:42 | of the software, and then bills
you and collects payment directly.
| | 00:46 | I'm not going to demonstrate any software
from the AWS marketplace, but it's good to
| | 00:50 | know that it's an available option.
| | 00:53 | This survey of Amazon Web Services touched on
the majority of their services and systems,
| | 00:57 | but there wasn't always
such a wide array of options.
| | 01:00 | To give some perspective on where Amazon Web
Services started and how they've evolved,
| | 01:04 | I've put together a simple timeline of the
selection of the major events and service launches.
| | 01:09 | Amazon Web Services launched in 2002 with
the initial free version allowing third-party
| | 01:14 | sites to search and display items from Amazon.com
and to put items into shopping carts.
| | 01:19 | Not a lot of
functionality, but it served a need.
| | 01:22 | In October 2004, the services expanded with
Alexa Web Information Service for web crawling
| | 01:27 | information, and built on the Amazon.com
integration, including product information,
| | 01:33 | images and reviews.
| | 01:35 | March of 2006 brought Amazon Simple Storage
Service, also known as S3, which leverages the
| | 01:41 | same infrastructure used by Amazon.com today.
| | 01:44 | In August, Amazon launched Elastic Compute Cloud,
or EC2, their virtual machine rental service.
| | 01:50 | In December 2008, Amazon released SimpleDB, a
distributed and redundant NoSQL database system.
| | 01:58 | Expanding on that offering in October of 2009,
Amazon released Relational Database Service
| | 02:03 | as a replacement for MySQL databases.
| | 02:06 | In January of 2011, the Amazon Simple Email
Service was released facilitating bulk emailing.
| | 02:12 | What was the cumulative impact of all this?
| | 02:14 | In April 2012, a report by DeepField showed
that one third of all Internet traffic accesses
| | 02:20 | at least one facet of
Amazon.com's web services.
| | 02:24 | That is hugely significant.
| | 02:26 | Over its history, Amazon has expanded the
suite of services available, not only to promote
| | 02:30 | its own brands, but to provide tools that
developers and system administrators can leverage
| | 02:35 | to build their own applications,
independent of Amazon.com storefronts.
| | 02:40 | As AWS expanded its services, the supporting
infrastructure grew accordingly, which has
| | 02:44 | an impact on how users can deploy services.
| Exploring physical locations within the global infrastructure| 00:00 | Amazon Web Services has its own naming
system that it uses to describe its services and
| | 00:05 | the locations are no exception.
| | 00:07 | I'm going to go over a few key terms and
give context about how they work together.
| | 00:12 | These terms primarily apply in the context
of Amazon Elastic Compute Cloud servers, but
| | 00:16 | are not exclusive to EC2.
| | 00:18 | I'll start with the
smallest component then move out.
| | 00:21 | The first term is instance.
| | 00:24 | An instance is a single EC2 server.
| | 00:26 | Remember, EC2 servers are virtual private
servers, which means several EC2 instances
| | 00:31 | may reside on the same physical server.
| | 00:35 | Instances reside in availability
zones, sometimes abbreviated as AZ.
| | 00:40 | Availability zones are in distinct physical
locations which may have multiple datacenters.
| | 00:45 | Two different users within the same availability
zone may actually be using resources across
| | 00:49 | multiple datacenters.
| | 00:51 | With that said, the important thing is that
the availability zone contains instances and
| | 00:56 | that they should be
treated as the same location.
| | 00:59 | Moving back a little bit, a region
is a large distinct geographic area.
| | 01:05 | Currently, there are 9 regions
available across most continents.
| | 01:09 | Each region contains several availability
zones which are tightly networked together,
| | 01:14 | but still engineered to
isolate failure from one another.
| | 01:17 | The advantage of this is speed and interconnectivity,
but the disadvantage is that despite their
| | 01:21 | isolation, an event in an availability zone can
affect other availability zones in the region.
| | 01:27 | To mitigate this, regions are isolated
from one another to a certain extent.
| | 01:32 | Regions are still connected to
each other, but not as directly.
| | 01:35 | This provides fault tolerance, improved
stability, and serves to contain issues within a region.
| | 01:41 | When selecting a region, one of
the primary factors is proximity.
| | 01:44 | It is best to choose a region that is
closer to both you and your users.
| | 01:48 | Now any user can use any region, but the further
away it is, the greater the number of connections,
| | 01:53 | hops, and so forth, that the data has to travel
across, and each step introduces some delay.
| | 01:59 | The less work it takes to send data
back and forth, the greater the speed.
| | 02:03 | Like talking to someone in a room,
it's easier if they are next to you.
| | 02:07 | Additionally, sometimes there are regional
requirements, such as in Europe, where there are
| | 02:11 | regulations about where
servers can be physically located.
| | 02:14 | While Amazon Web Services has a very good
track record for stability and availability,
| | 02:18 | problems do occur for a multitude of reasons.
| | 02:22 | Over the past couple of years, a number of high-profile
events have occurred that affected entire regions.
| | 02:27 | The following are exceptions, but I'm
mentioning them in the context of why region separation
| | 02:31 | and location is important, as these events
affected single regions, but not the entire network.
| | 02:37 | In April 2011, a malfunction of Elastic
Block Store in a single availability zone
| | 02:42 | ended up affecting the entire region,
including the Relational Database Service.
| | 02:47 | In June of 2012, a major electrical storm
affected data centers in the Eastern US knocking
| | 02:52 | out almost the entire region.
| | 02:54 | In October, a bug caused Elastic Block Store to
get stuck becoming unable to process requests,
| | 02:59 | which had a rippling effect
across multiple servers.
| | 03:02 | Then in December, a data state error caused
an Elastic Load Balancing Service event.
| | 03:07 | In each of these cases, the issues impacted
large numbers of customers across multiple
availability zones, but were still
contained within a particular region.
| | 03:16 | While these events are isolated and AWS has been
architected for high availability, problems do happen.
| | 03:22 | So what should be done to mitigate them while
dealing with multiple instances? Well, keeping
| | 03:26 | all instances in a single availability zone
will result in a significant impact if that
| | 03:32 | availability zone failed.
| | 03:33 | Instead, by diversifying the locations of
instances across multiple availability zones,
| | 03:38 | the system will become more fault-tolerant.
| | 03:41 | On the other hand, the configuration would
potentially be more complex, introducing additional
| | 03:46 | latency or slowness when communicating
between availability zones that wasn't perceptible
| | 03:50 | within a single AZ.
| | 03:52 | Additionally, some services are
unavailable across availability zones.
| | 03:57 | Research the capabilities of the services
and also how much work you want to do before
| | 04:01 | determining how best to scale up.
| | 04:04 | In larger architectures, it's possible to
create high-availability systems that can
| | 04:07 | operate across regions.
| | 04:10 | There are a number of things to consider in
these circumstances, including synchronization
| | 04:14 | which isn't always possible across regions,
as with the Relational Database Service.
| | 04:19 | So other database solutions like MySQL
Replication should be used instead.
| | 04:22 | With greater geographic distance comes
higher latency, or slowness, as well.
| | 04:28 | Each region is geographically distinct, so
be aware of the regional data regulations
| | 04:32 | as it may be legal to store some kinds of
customer data in one region, but not another.
| | 04:37 | These are large-scale architectural
issues that have known and working solutions.
| | 04:41 | So keep these considerations in mind
if the need to scale like this arises.
| | 04:46 | To review some key points about location,
keeping close proximity improves speed.
| | 04:51 | Diversifying multiple instances across
multiple availability zones reduces risk overall.
| | 04:57 | Mistakes do happen.
| | 04:58 | Be it a bug, operator error, or
someone digging through a fiber backbone.
| | 05:02 | No application or system is perfect.
| | 05:05 | Natural events can and will occur that will
impact infrastructure in unforeseen ways.
| | 05:10 | Therefore, even though it might seem impossible,
always plan for a black swan event: a major
| | 05:15 | surprise event that is
obvious in hindsight.
| | 05:18 | For example, should you keep backups in the
same availability zone? It seems ironic to
plan for something that can't be foreseen, but
there are enough possibilities that can be
| | 05:26 | anticipated safely.
| | 05:28 | Throughout this chapter, I've been providing
the context that you'll need to understand
| | 05:31 | Amazon Web Services.
| | 05:33 | First, I discussed what cloud computing was,
why it is desirable, and how it can be used.
| | 05:38 | Then, I described at a high level,
what Amazon Web Services really is.
| | 05:43 | I explored the foundation services, including
Elastic Compute Cloud and database options.
| | 05:48 | Next, I surveyed components within application
platform services, such as the messaging and
| | 05:53 | workflow solutions.
| | 05:55 | No system would be complete without tools
for managing and administering the services,
| | 05:59 | including Identity Management
and the Management Console.
| | 06:02 | I introduced options for commercially sourced
applications, then reviewed the history of
| | 06:06 | Amazon Web Services to give
perspective on how it has evolved.
| | 06:09 | And finally, I described some of the key terms
and relations of the AWS Global Infrastructure.
| | 06:16 | With this context of both what comprises
Amazon Web Services and how it's structured, I can
| | 06:20 | now sign up and start building the
foundation for the watermarking application.
2. Instantiating and Configuring an EC2 Server
Signing up for AWS| 00:00 | Now that you have a high-level overview of Amazon
Web Services, it's time to put it to good use.
| | 00:05 | In this chapter, I'm going to walk through
signing up for Amazon Web Services, then explore
| | 00:10 | how to launch and manage an Amazon Elastic
Compute Cloud server which will host
| | 00:14 | the watermarking application.
| | 00:16 | I'll demonstrate how to remotely connect to the
EC2 server, then how to set up Amazon Linux,
| | 00:21 | including Apache installation.
| | 00:24 | To use Amazon Web Services, you'll
need to sign up for an account.
| | 00:27 | This costs no money, but does require a valid
credit card for authorization and takes a
| | 00:31 | couple of minutes.
| | 00:33 | Navigate to aws.amazon.com.
| | 00:37 | If you already have an account, click on My
Account/Console and go to AWS Management Console
| | 00:42 | and skip to the next segment.
| | 00:44 | Otherwise, click on the Sign Up button.
| | 00:48 | Sign in with your existing Amazon.com account if
you have one or create a new one if necessary.
| | 00:59 | Next, payment information
needs to be provided.
| | 01:02 | There is no fee to sign up and you won't be
billed unless you use non-free services, but
| | 01:06 | they want to make sure there's a card on file
in case you do anything that has a charge.
| | 01:11 | Select your credit card type, enter the
card name, number and expiration date.
| | 01:17 | Specify your billing
address and click continue.
| | 01:20 | If you submit an address that the system
doesn't exactly recognize, it'll prompt you to use
| | 01:25 | a suggested address in place.
| | 01:29 | When you're ready, click Continue.
| | 01:33 | The third step is identity verification. An
automated system will call you and ask you
to enter or speak a four-digit PIN.
| | 01:40 | Enter your phone number
then click Call Me Now.
| | 01:46 | The phone call came in seconds.
| | 01:47 | Follow the brief instructions and
speak or type the four-digit PIN.
| | 01:52 | When complete, the page
updates, then click Continue.
| | 01:58 | The final step is confirmation, where they test
an authorization of $1, which is not a charge.
| | 02:04 | Amazon will send an email when confirmed.
| | 02:08 | I received the email in one or two minutes.
| | 02:10 | When ready, navigate back to aws.amazon.com.
| | 02:15 | In the upper right-hand corner, go to My
Account/Console and click AWS Management Console.
| Key Amazon EC2 terminology| 00:00 | At this point you probably can't wait to get
started using Amazon Web Services and I agree.
| | 00:06 | So far it's all been
theoretical, but necessary.
| | 00:09 | I'm going to instantiate an EC2 server, but
before I do I'm going to cut through some of
| | 00:13 | the marketing speak, so the
terms and concepts make sense.
| | 00:17 | Earlier, I described an
instance as a single EC2 server.
| | 00:21 | However, it's not quite as simple as that, as
different instances have different capabilities,
| | 00:26 | which I'll get into in a moment.
| | 00:29 | There is no charge to create or destroy an
instance, which is great for experimentation.
| | 00:34 | Amazon charges by the
hour with no partial hours.
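Since there are no partial hours, runtime is effectively rounded up to the next whole hour before billing. A quick sketch of that arithmetic in the shell (the hourly rate here is a made-up example, not an actual AWS price):

```shell
# Billing sketch: hours are billed whole, so round runtime up.
RATE_CENTS=2                        # hypothetical rate in cents per hour
MINUTES=132                         # 2 hours 12 minutes of runtime
HOURS=$(( (MINUTES + 59) / 60 ))    # round up to the next whole hour
COST=$(( HOURS * RATE_CENTS ))
echo "billed hours: $HOURS, cost: $COST cents"   # prints: billed hours: 3, cost: 6 cents
```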
| | 00:38 | New customers are eligible for the free tier for a
year on the micro, the smallest type of instance.
| | 00:44 | Instance types have predictable computing
capacity and features, such as number of cores, disk
| | 00:49 | size, and so forth.
| | 00:52 | Examples of instance types include micro
and high memory extra large instances.
| | 00:57 | The instance types are further grouped into families
with high-level names like standard and high CPU.
| | 01:04 | The standard group has so many instance types
that Amazon has broken them out into generations,
| | 01:09 | with first-generation containing low cost
with good performance solutions, and second
| | 01:13 | generation with higher performance and cost.
| | 01:17 | Now there is a lot of marketing labeling,
and comparing the different instance types
| | 01:20 | can be a bit awkward.
| | 01:22 | To compensate, Amazon uses an arbitrary
measurement known as a compute unit, as a standardized
| | 01:27 | measurement of CPU capacity.
| | 01:30 | While they haven't released their exact
measurement specifics, they claim it's been normalized
| | 01:34 | across a variety of comparable
hardware in order to make it meaningful.
| | 01:39 | One compute unit is equivalent to a 1.0-1.2
GHz 2007 Opteron or Xeon processor.
| | 01:46 | The nice thing about this measurement is
that it simplifies decisions. A bigger compute
| | 01:51 | unit is faster.
| | 01:52 | With this context, I'm ready to
launch an EC2 server instance.
| Launching an EC2 instance| 00:00 | If you're not at the AWS Management
Console, navigate to console.aws.amazon.com.
| | 00:08 | As previously mentioned, the Management Console
provides a web interface to manage Amazon Web Services.
| | 00:13 | Supporting browsers on the Desktop, Tablet,
and Mobile, this is the heart of operations.
| | 00:18 | I'm going to start by launching an EC2
instance to host my application by clicking on EC2.
| | 00:24 | By default, the EC2 dashboard starts in the US
East region as indicated in both the Service
| | 00:29 | Health panel and the upper right-
hand corner of the toolbar.
| | 00:33 | The circular arrow allows me to refresh status.
I'm going to create a virtual server by clicking
| | 00:39 | on Launch Instance.
| | 00:43 | This opens a window giving me three options.
| | 00:46 | The first, Classic Wizard gives me finite
control over how the instance should be configured.
| | 00:51 | I'm going to demonstrate the Classic Wizard in order
to provide a thorough explanation of what and why.
| | 00:57 | The second option, Quick Launch Wizard,
simplifies the amount of upfront configuration.
| | 01:02 | As you become more acclimated to AWS,
this may be a more viable option.
| | 01:06 | And the final option, the AWS Marketplace, provides
a number of out-of-the-box commercial solutions.
| | 01:12 | For now, select the Classic
Wizard and click Continue.
| | 01:16 | The first step of the process allows us the
selection of Amazon Machine Images or AMI.
| | 01:21 | These provide the base operating system and
server software already configured and will
| | 01:25 | be the basis of the server.
| | 01:27 | There are four tabs; Quick Start which provides
several dozen disk images directly from Amazon,
My AMIs contains images that I've
created. Right now I don't have any.
| | 01:39 | Community AMIs contains contributed
disk images; use them at your own risk.
| | 01:44 | This interface usually takes a little bit
of time to load and doesn't offer a lot of
| | 01:48 | detail about the images.
| | 01:51 | I can filter the Community AMIs by searching
for keyword like Drupal, so I'll just type
| | 01:57 | Drupal, press Enter.
| | 02:00 | The images can also be filtered at a high
level for things like Amazon provided images,
| | 02:05 | 32-bit images, and so forth.
| | 02:08 | The final tab is the AWS Marketplace again.
Going back to Quick Start, I'm going to use
| | 02:14 | the default selection, which is the 64-bit
Amazon Linux distribution, Amazon Linux AMI.
It's preinstalled with the AWS API tools, and it's
lightweight and is both supported and maintained.
| | 02:27 | Also, this image supports the free tier, which is
great for experimentation. Click Select to continue.
| | 02:34 | The next step, instance details provides
instance configuration. I can choose the number of
| | 02:40 | instances, which I'll keep at 1.
| | 02:41 | I also have an option to select an
instance type, which I discussed earlier.
| | 02:46 | For now I'm going to stick with the free tier; the
micro instance. This is fine for experimentation.
| | 02:53 | There's also an option for Elastic Block Store
Optimized instances, which does a lot of the
| | 02:56 | heavy lifting upfront in terms of configuration.
It's not supported for every instance type,
| | 03:01 | which also excludes micro.
| | 03:04 | There are two options about how
the instances can be launched.
| | 03:07 | The default option is the regular
hourly charges with no commitments.
| | 03:12 | The second option, request spot instances,
allows instances to be created and built at
| | 03:16 | a market rate based on supply and demand.
| | 03:19 | This can save money while
performing large computational tasks.
| | 03:22 | However, as I'm just hosting a simple
application, I won't need this as it is a bit overkill,
| | 03:27 | so I'll switch back to launch instances.
| | 03:30 | In this box I have two options; the first
EC2 is the default, which is a public facing
| | 03:35 | server with no special
networking configuration.
| | 03:38 | I can choose an Availability Zone manually
or just take whatever it gives me. This is
| | 03:44 | useful if I'd like to distribute
instances across availability zones.
| | 03:48 | The second option, VPC, stands
for Virtual Private Cloud.
| | 03:53 | It shows regardless of whether I've
configured a private or public subnet.
| | 03:56 | I haven't configured anything so at this time
I can't click on it. I have no availability zone
| | 04:02 | preference, so I'll just click Continue.
| | 04:05 | The Advanced Instance Options gives me an
opportunity to tweak the configuration.
| | 04:10 | I can specify a particular Kernel and RAM Disk
providing very granular control over security
| | 04:15 | fixes and updates and tuning for specialty
applications. This can safely be left as default.
| | 04:22 | Monitoring is an up-sell. CloudWatch monitoring
is available by default, but detailed monitoring
can be added at an additional charge.
| | 04:30 | User Data refers to scripts that are
executed as root user on the first boot.
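As a sketch of what a User Data payload might look like, here's a minimal first-boot script written to a local file for review; its contents (a simple yum update) are my own illustrative assumption, not something this course prescribes:

```shell
# Write a sample first-boot script to a local file for review.
# When pasted into the User Data field, it runs once as root on first boot.
cat > userdata.sh <<'EOF'
#!/bin/bash
# hypothetical first-boot task: apply all pending package updates
yum -y update
EOF
chmod +x userdata.sh
```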
| | 04:35 | Termination Protection prevents destruction
of the instance from the Console or API.
| | 04:39 | Shutdown Behavior allows the instance to be
stopped, the equivalent of turning it off,
| | 04:43 | and terminated, which is when the
instance is completely removed.
| | 04:47 | Finally, the IAM Role that can be assigned
to the instance. I haven't created any, so
| | 04:51 | there's no need to change it.
| | 04:52 | In fact, I don't need to make any
changes, so just click Continue.
| | 04:57 | Storage Device Configuration allows the addition of
volumes, such as Elastic Block Stored volumes, and so forth.
| | 05:03 | If I click Edit, I can change the size of
the root volume, toggle whether the volume
| | 05:08 | is deleted upon termination, and so forth.
Clicking on EBS Volumes I can either create
| | 05:14 | or map a public EBS volume.
| | 05:17 | Snapshot provides a gigantic list of public
snapshots, which frankly is not very usable.
| | 05:24 | The Volume Size, Device, and so
forth can be specified as well.
| | 05:28 | For now, the default configuration is fine and
no changes are needed, so just click Continue.
| | 05:33 | This next window allows me to add metadata
to tag the instance. By default there is a
| | 05:38 | name that can be associated with the instance,
and I can add up to 10 unique keys with optional
| | 05:43 | values. I'll tag it with a Name, Watermark
and Continue. So I'll click here and just
| | 05:48 | type watermark, and click Continue.
| | 05:53 | The public and private key pairs provide secured
communication to both Windows and Linux servers.
| | 05:58 | As this is a Linux server, the provided key
pair will allow me to SSH into the instance.
| | 06:03 | I don't have to create a key pair every time
I create an instance, as they can be reused,
| | 06:08 | but since this is the first time, it's required.
I'll name it ec2private, then click Create
| | 06:16 | & Download your Key Pair, which is going
to transfer a file named ec2private.pem.
| | 06:24 | The next step is to configure the firewall.
| | 06:25 | A security group is just a name for a set of
firewall rules. Let's create a new security
| | 06:31 | group based on the role of the server,
which is just regular web server.
| | 06:34 | I'm not going to configure
SSL, to keep it simple.
| | 06:38 | I'll name the group Web server (no SSL)
and describe it as HTTP and SSH.
| | 06:47 | There is an existing rule for SSH already
written in the classless inter-domain routing
syntax, also known as CIDR.
| | 06:56 | This rule allows traffic to and
from any IP with no subnet mask.
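To make the CIDR notation concrete: the /N suffix says how many leading bits of the address must match, so 0.0.0.0/0 matches every IPv4 address. Here's a small local illustration of that matching logic in the shell (my own sketch for explanation, not anything AWS provides):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Succeed if IP $1 falls inside CIDR block $2 (e.g. 203.0.113.0/24).
in_cidr() {
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  if [ "$bits" -eq 0 ]; then return 0; fi   # /0 matches everything
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

in_cidr 203.0.113.7 203.0.113.0/24 && echo "in the /24"
in_cidr 8.8.8.8 0.0.0.0/0 && echo "any address matches /0"
```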
| | 07:00 | I'm going to create a new rule for HTTP,
under Create a new rule, select HTTP.
| | 07:07 | I have the option to limit traffic with the
CIDR syntax; as this is web traffic, however,
| | 07:12 | I'm just going to let anybody
access it, click Add Rule.
| | 07:16 | The TCP list is now updated with the rule on
port 80; I can now both connect to the instance
| | 07:21 | for remote administration and for accessing content
served on port 80, like web pages, click Continue.
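For reference, the same security group can be scripted. This dry-run sketch only echoes the modern AWS CLI equivalents of the console steps; the `aws` tool and its credentials are assumptions here, so swap `echo` for real execution once it's installed and configured:

```shell
# Dry run: print the CLI commands that mirror the wizard's firewall setup.
run() { echo "+ $*"; }   # replace 'echo' with actual execution when ready

run aws ec2 create-security-group \
    --group-name "Web server (no SSL)" --description "HTTP and SSH"
run aws ec2 authorize-security-group-ingress \
    --group-name "Web server (no SSL)" --protocol tcp --port 80 --cidr 0.0.0.0/0
```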
| | 07:29 | The final step, Review, allows me to
review all the preferences that I have set.
| | 07:33 | Notice that Monitoring says Disabled.
| | 07:37 | This actually means advanced monitoring; remember
it's an up-sell. Scroll down to review everything.
| | 07:43 | Looks good to me, so I'm going to
click Launch to create this instance.
| | 07:49 | As soon as I clicked Launch the
usage hours started counting.
| | 07:52 | The instance takes a couple minutes to launch.
| | 07:54 | The pop-up describes a couple of things I can
do in the meantime, such as creating additional
| | 07:58 | status check alarms for additional cost, or
creating EBS volumes, also for an additional charge.
| | 08:04 | For now, just close the window.
| Managing EC2 instances from the console| 00:00 | Let's take a look at the
newly created instance.
| | 00:03 | Click on the Instances
link on the left-hand menu.
| | 00:06 | The Instances page within the EC2 dashboard
shows that the instance named watermark type
| | 00:11 | t1.micro is currently running.
| | 00:13 | If I click on the row, more comprehensive
information will be shown at the bottom.
| | 00:19 | By default, it's moved to the bottom of the
screen, but there are three icons on the right
| | 00:23 | that allow me to resize
the instance information.
| | 00:25 | I'll click the one that's
farthest to the right.
| | 00:29 | The description is verbose and confirms that the instance
has been created with the configuration I've specified.
| | 00:34 | Let's click the tab labeled Status Checks.
| | 00:37 | There are two basic status checks: System
reachability and Instance reachability.
| | 00:42 | System reachability checks the AWS
infrastructure and Instance reachability checks to see if
the instance's operating
system is accepting traffic.
| | 00:49 | I have the option to add an
alarm, but I won't do that now.
| | 00:52 | Let's click on Monitoring.
| | 00:54 | By default, CloudWatch Basic measures 10
default items, once every five minutes;
| | 01:00 | average CPU Utilization,
Disk Reads, and so forth.
| | 01:04 | Times are displayed in
Coordinated Universal Time (UTC).
| | 01:07 | I also have the option to
set a different Time Range.
| | 01:10 | The default is to show
activity within the last hour.
| | 01:14 | The final button, Tags, shows the Key Name with a
Value watermark that I set during the wizard.
| | 01:20 | I'm going to click on the row with the
watermark instance, nothing happens.
| | 01:26 | Just below the toolbar, there are two
buttons, Launch Instance and Actions.
| | 01:31 | Launch Instance will
bring up the wizard again.
| | 01:34 | Next to it, Actions allows me to
perform actions onto selected instances.
| | 01:38 | Make sure that the row with
watermark is checked and click Actions.
| | 01:43 | Over a dozen Actions will be shown in three groups:
Instance Management, Actions, and CloudWatch Monitoring.
| | 01:50 | The first group, Instance Management provides
configuration, information, and other options.
| | 01:55 | The first action within Instance Management
is Connect which will provide instructions
on how to remotely connect to the instance with an SSH
client, and a link to a browser-based Java SSH client.
| | 02:06 | I'll demonstrate how to connect shortly.
| | 02:08 | The next action is get system log
which displays the boot messages.
| | 02:13 | This is useful if the instance isn't booting and
additional context is needed for troubleshooting.
| | 02:18 | Create Image allows the manual
creation of a duplicatable disk image of the instance.
| | 02:22 | The disk image will be stored in the Elastic Block
Store in the proprietary Amazon Machine Image format.
| | 02:28 | Remember, before you start experimenting,
that EBS storage comes at an additional cost.
| | 02:33 | Images created this way make a good quick
and dirty backup as images in EBS can't be
| | 02:37 | directly downloaded.
| | 02:39 | They can be moved to S3 and downloaded though.
| | 02:42 | Add and Edit tags are the same key value pairs
that are found in the wizard earlier such as name.
| | 02:48 | Launch more like this opens the launch instance
wizard with the same options for the current
instance, which is useful for making a
similar, but not exact, copy of the instance.
| | 02:56 | Change termination protection, as seen in the
wizard, prevents the API and console from
| | 03:01 | terminating instances.
| | 03:02 | This is good for mission
critical and always on instances.
| | 03:06 | View/change user data provides an
opportunity to change the user data.
| | 03:10 | The same stuff that was in the wizard, where
you can set the values and/or a start up script.
| | 03:14 | And finally, Change shutdown behavior.
| | 03:17 | There are two options, Stop which is like
turning the server off and Terminate which
| | 03:22 | is destructive and will delete the instance.
| | 03:25 | The next group of Actions, generically
labeled as Actions, is pretty straightforward.
| | 03:29 | Terminate, deletes an instance.
| | 03:32 | Reboot, sends a reboot command
to reboot the virtual server.
| | 03:36 | And Stop/start do about what you expect.
| | 03:38 | Remember, if Stop behavior is terminate,
the instance will be deleted when stopped.
| | 03:43 | The final group of Actions, CloudWatch
Monitoring actions, are basically shortcuts.
| | 03:48 | I can enable and disable detailed
monitoring directly from this interface.
| | 03:53 | Detailed monitoring adds seven metrics at a
one-minute frequency for a couple of bucks a month.
| | 03:58 | The metrics include CPU utilization, network
in/out, disk read/write in bytes, and disk
| | 04:03 | read/write in ops.
| | 04:05 | Finally, Add and Edit alarms which will send
notifications when metrics hit levels that
| | 04:10 | you've defined.
| | 04:11 | I'll describe the
notification in a bit more detail later.
| Remotely connecting to an EC2 instance| 00:00 | Now I'll remotely connect to the instance.
| | 00:03 | Make sure the watermark
instance is highlighted.
| | 00:05 | Click Actions and go to Connect.
| | 00:07 | There are two ways to connect.
| | 00:10 | The first is with an SSH
client which I recommend.
| | 00:14 | This is available across all platforms; for
Mac and Linux, an SSH client should already
| | 00:19 | be installed and available from the terminal.
| | 00:21 | I'll demonstrate this first.
| | 00:24 | For Windows, the free PuTTY client is
recommended and I'll demonstrate that after we connect
| | 00:28 | from Mac and Linux.
| | 00:30 | If a terminal client is unavailable then the
browser-based Java SSH client can be used.
| | 00:35 | But I don't recommend it due to
compatibility issues and will not demonstrate it.
| | 00:40 | Click on Connect with a standalone
SSH Client for the instructions.
| | 00:44 | I'll demonstrate first with the Mac terminal
which will be very similar to the Linux terminal.
| | 00:48 | I'm going to do a one-time configuration that will
make it easier to remotely connect to AWS servers.
| | 00:54 | Open up a terminal, verify that the .ssh configuration
directory exists by making the directory using -p.
| | 01:02 | This won't harm anything if it's
already there: mkdir -p ~/.ssh.
| | 01:11 | Next, I'm going to move the downloaded
private key file to the .ssh folder.
| | 01:16 | On the Mac, it downloaded to the Downloads
directory in my Home folder. It may be a slightly
| | 01:20 | different path for you.
| | 01:22 | So mv ~/Downloads/ and then the name of
the file which is ec2private.pem ~/.ssh.
| | 01:36 | Change the permissions on the file to only
allow yourself to read the file, chmod 400
| | 01:44 | ~/.ssh and then the name of
the file, ec2private.pem.
| | 01:49 | Finally, edit the SSH configuration.
| | 01:52 | I'm going to use the Nano editor but,
there's no requirement for you to use it.
| | 01:57 | nano -w ~/.ssh/config. I'm going to insert
a line that'll match any Amazon AWS server.
| | 02:06 | So Host *amazonaws.com.
| | 02:11 | Then, I'll specify the identity
file path to the private key file.
| | 02:16 | Space, space, IdentityFile ~/.ssh/ec2private.pem and then
finally on a new line, specify the default user.
| | 02:31 | So user will be similar to the
instructions shown on the Amazon page, ec2-user.
| | 02:39 | Press Ctrl+X to exit and Y to save.
| | 02:42 | Press Enter and now
we're back in the terminal.
| | 02:45 | Now you can SSH to the server without having to
specify the location of the private key file.
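The manual nano edits above can also be scripted. This sketch appends the same three lines to the SSH config and locks down the key file permissions; the key filename mirrors this segment, so adjust it if yours differs:

```shell
# Append a host entry so any *.amazonaws.com host uses the EC2 key and user.
SSH_DIR="$HOME/.ssh"
mkdir -p "$SSH_DIR"
cat >> "$SSH_DIR/config" <<'EOF'
Host *amazonaws.com
  IdentityFile ~/.ssh/ec2private.pem
  User ec2-user
EOF
chmod 600 "$SSH_DIR/config"

# If the key file is present, restrict it to owner-read only, as ssh requires.
if [ -f "$SSH_DIR/ec2private.pem" ]; then
  chmod 400 "$SSH_DIR/ec2private.pem"
fi
```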
| | 02:50 | Take a look at the public DNS
part of the connect instructions.
| | 02:54 | This is the hostname of the server and will
be different than what is shown on my screen.
| | 02:58 | So I'm going to select and copy
that and switch back to the terminal.
| | 03:05 | Type ssh and the hostname of the server
which I'm just going to paste, press Enter.
| | 03:14 | As this is the first time connecting to the
host, I'll see a one time message asking if
| | 03:17 | I want to connect to the unknown server.
| | 03:20 | Type yes and press Enter.
| | 03:24 | This will add the server to
known hosts and I'm connected.
| | 03:27 | If you're connected as well,
skip to the next segment.
| | 03:32 | For Windows users, the
configuration is going to be different.
| | 03:35 | If you don't already have a full PuTTY suite
of software, go to the Home page and select
| | 03:40 | the PuTTY installer.
| | 03:43 | PuTTY doesn't support the .pem key file directly,
but it can import it, which I'll demonstrate.
| | 03:49 | Start PuTTYgen, go to All Programs > PuTTY
> PuTTYgen, then click Load. Where it says
| | 03:58 | PuTTY Private Key Files switch to All Files.
| | 04:02 | Then navigate to where you
downloaded the .pem file and click Open.
| | 04:06 | So I put it in Downloads.
| | 04:08 | You'll get a notice
indicating success and click OK.
| | 04:13 | Click Save private key and say Yes
to the passphrase question.
| | 04:18 | Make sure that the filename is the same as the
key pair, which was ec2private, and click Save.
| | 04:26 | Now that the key has been converted,
I can configure PuTTY to connect.
| | 04:31 | Take a look at the public DNS
part of the connect instructions.
| | 04:33 | This is the Host Name of the server and will
be different than what is shown on my screen.
| | 04:37 | So I'm going to copy that then open PuTTY.
| | 04:42 | First, paste the Host Name then on the left
under Category, click Connection > SSH > Auth.
| | 04:54 | At the bottom, for Private key file for
authentication, click Browse and go to the Downloads directory
| | 05:01 | where we have the private key.
| | 05:03 | Then click on Session, make sure that Host
Name is in there, and then we're going to
| | 05:11 | actually make a small change to that.
| | 05:14 | So move to the beginning of that line and
type a username which will be ec2-user@ and
| | 05:24 | finally under Saved Sessions,
type watermark and click Save.
| | 05:30 | Double-click on watermark, it will ask you
a question because this is the first time
| | 05:35 | that you've connected to the host, say Yes.
| | 05:40 | We've now remotely connected from Windows.
| | Collapse this transcript |
| Setting up Amazon Linux and Apache web server| 00:00 | Now that I've remotely connected to the EC2
server, I'm going to set up Amazon Linux.
| | 00:05 | Amazon Linux uses Yellowdog Updater,
Modified, known as YUM, for package management.
| | 00:10 | This is the same RPM-compatible system that is used
by Red Hat Enterprise Linux, Fedora, and CentOS.
| | 00:17 | Then I'm going to
demonstrate how to update the server.
| | 00:19 | Finally, I'll install Apache, the open-source
web server, along with PHP for script interpretation
| | 00:24 | and other required libraries.
| | 00:27 | First, I'm going to update the server using
the YUM Package Manager. This is a simple
| | 00:31 | one-liner; sudo yum update.
| | 00:37 | YUM will ask if the updates are okay, say yes.
| | 00:43 | A summary will be shown at the end.
| | 00:45 | Next I'm going to install some required packages
for a minimal web server that has PHP support.
| | 00:50 | I'm also going to install ImageMagick
which is required for the watermarking.
| | 00:55 | Note, as I'm typing this, both the spelling and the
capitalization of ImageMagick. So we're going to type
| | 00:59 | sudo yum install gcc make httpd php-common php-cli
php-pear php-devel git and then ImageMagick-devel.
| | 01:26 | YUM will display all the dependencies
and ask if I want to proceed, say yes.
| | 01:33 | Next, I'll update the PECL PHP package manager.
Don't worry about memorizing all of this. This is
| | 01:39 | just a one-time setup; sudo
pecl channel-update pecl.php.net.
| | 01:48 | In order to allow the PHP Package Managers
to make changes to the PHP Configuration,
| | 01:53 | I'll need to specify the
location of said configuration.
| | 01:56 | So sudo pear config-set php_ini /etc/php.ini.
| | 02:05 | Now I'm going to do the same for PECL; sudo pecl
config-set php_ini /etc/php.ini. Then I'll
| | 02:16 | install ImageMagick for PHP. This will take
a minute or two; sudo pecl install imagick.
| | 02:29 | Just press Enter for the prefix.
| | 02:33 | For the purposes of demonstration and troubleshooting, I'm
going to configure PHP to display errors to the screen.
| | 02:38 | In a production environment
this would not be appropriate.
| | 02:42 | I'll start by installing Xdebug, which will
display additional debugging information in
| | 02:45 | case there's a problem; sudo pecl install
xdebug. Then I'll need to make a small change to
| | 02:55 | the PHP configuration itself to display errors;
sudo nano -w /etc/php.ini. First make a configuration
| | 03:06 | change on the very first line of the file.
| | 03:09 | Instead of extension=, replace it with
zend_extension=/usr/lib64/php/modules/xdebug.so.
| | 03:21 | Press Ctrl+W and
search for error_reporting =, change it to E_ALL
| | 03:36 | | (pipe) E_STRICT, which is the development value
shown above, then look for display_errors, space,
| | 03:48 | equals, set to Off, and change that to On;
exit by pressing Ctrl+X, then Y to save.
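Taken together, the php.ini edits just described amount to these three settings (the xdebug.so path is the one used on Amazon Linux here; verify the module path on your own system):

```
zend_extension=/usr/lib64/php/modules/xdebug.so
error_reporting = E_ALL | E_STRICT
display_errors = On
```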
| | 03:58 | Now that the dependencies are installed, I can
set up the Apache web server to start automatically,
| | 04:02 | this is a one-time configuration so don't
worry about remembering this; sudo chkconfig
| | 04:09 | httpd on, then start the
service, sudo service httpd start.
| | 04:22 | Switch back to the web browser, copy the DNS from
the connect page, open a new tab, and paste it in.
| | 04:32 | You should see an Amazon Linux AMI test page,
if not, check the firewall settings using the
| | 04:37 | Management Console.
| | 04:39 | At this point I've completely prepared the
instance to start serving an application.
| | 04:44 | Throughout this chapter I provided an introduction
to the heart of Amazon Web Services, the Amazon
| | 04:49 | EC2 Cloud Servers.
| | 04:51 | I started by signing up for Amazon Web
Services which was a multi-step process.
| | 04:56 | Next, I defined key Amazon EC2 terminology in
order to provide a foundation of understanding.
| | 05:03 | With that knowledge, I launched an EC2
instance with Amazon Linux, describing each step of
| | 05:08 | the process as I went along. I demonstrated how
to manage EC2 instances from the Amazon
| | 05:13 | Web Services Console.
| | 05:15 | No sense in having a server that you can't
manage, so I remotely connected to an EC2
| | 05:19 | instance via SSH with instructions
for Mac, Linux, and Windows users.
| | 05:24 | Finally, I walked through how to set up
Amazon Linux and the Apache Web server.
| | 05:29 | Prepared with a fully functioning server in
the cloud, I can start assembling the watermarking
| | 05:33 | application using Amazon Web Services.
| | Collapse this transcript |
|
|
3. Building a Cloud Application with Platform ServicesConfiguring the software developer kit with AWS credentials| 00:00 | In this chapter, I'm going to assemble a number
of common AWS services needed to build a simple
| | 00:05 | image watermarking application.
| | 00:08 | First, I'll need to get specific credentials in
order to configure the Software Development
| | 00:11 | Kit that interacts with AWS.
| | 00:14 | Then I'll demonstrate how to
store objects in Amazon S3.
| | 00:17 | Database records will be stored in
Amazon SimpleDB for persistence.
| | 00:20 | The state of the application will be managed
in Amazon Simple Queue Service, then I'll
| | 00:26 | send an email to myself upon completion
with the Amazon Simple Notification Service.
| | 00:30 | I'll configure application
monitoring with Amazon CloudWatch.
| | 00:34 | Finally, I'll put it all together
and demonstrate how everything works.
| | 00:39 | Now that the web server is up and running,
there is one final component that's needed;
| | 00:43 | the development tools
necessary to communicate with AWS.
| | 00:47 | As mentioned before, these are in the form
of software development kits and libraries.
| | 00:51 | I'll demonstrate the Amazon Web Services
Software Development Kit for PHP version 1.
| | 00:57 | Version 2 is out but at the time of this writing
it's so new that most services aren't supported.
| | 01:02 | Check out the SDK documentation for
an overview of what's available.
| | 01:05 | I'll install the SDK using
the PEAR package manager.
| | 01:10 | If you haven't already done so
remotely connect to the EC2 server.
| | 01:14 | First make sure that the list of available
software is up-to-date; sudo pear update-channels.
| | 01:21 | Then I'm going to add the official channel for
Amazon Web Services; sudo pear channel-discover
| | 01:31 | pear.amazonwebservices.com.
| | 01:36 | Finally, install the SDK version 1.5.17.
There may be a newer version available, so yours
| | 01:42 | may be different; sudo
pear install aws/sdk-1.5.17.
| | 01:51 | The final step is to configure the SDK with
some credentials, but you're going to have
| | 01:55 | to go back to the browser, go to the Management
Console, click on your name, and go to Security
| | 02:01 | Credentials. Scrolling down, it's
going to show you an access key.
| | 02:06 | I'm going to need both the Access Key ID and
the Secret Access Key. We're going to need
| | 02:10 | both of those in just a moment.
| | 02:13 | First copy the Access Key ID. I'm going to
open up a text document and just
| | 02:17 | paste it in, then click Show for the
Secret Access Key, and paste it in as well.
| | 02:27 | Now that I have the necessary
credentials I can configure the SDK.
| | 02:32 | Switch back to the Terminal, and then I'm
going to need to figure out where the SDK
| | 02:36 | was installed in order to configure it.
| | 02:38 | I'll use the pear command to list
files, pear list-files aws/sdk.
| | 02:45 | Based on this list, I can see where the SDK
was installed, so I'll change directory to
| | 02:48 | that folder, cd /usr/share/pear/AWSSDKforPHP/.
I'm going to copy the sample configuration
| | 02:59 | to use it as a base; sudo cp config-sample.inc.php
config.inc.php. Then I'll edit the configuration;
| | 03:12 | sudo nano -w config.inc.php.
| | 03:19 | I'm going to search for 'key', and instead
of 'development-key' I'm going to take that
| | 03:26 | out and replace it with the very first one.
| | 03:33 | And for this Secret Key, I'll copy and paste
that second one, and replace the Secret Key
| | 03:39 | as well, press Ctrl+X to exit and Y to save.
| | 03:45 | As this isn't a PHP development course, I'm not
going to demonstrate how to program the application.
| | 03:49 | All the source code has been fully provided,
both as free exercise files, which require
| | 03:54 | extracting and copying files between my
workstation and the EC2 server, and on a public source
| | 03:59 | code repository, which makes
the installation very simple.
| | 04:02 | I recommend reading the source code even if
you don't program in PHP; as I walk through
| | 04:07 | each step, I've included documentation
regarding how and why I'm doing everything.
| | 04:12 | In the console, change directory to
the temporary files; cd /var/tmp.
| | 04:19 | Next use git to clone the public repository;
git clone git://github.com/lyndadotcom/uarwaws.git,
| | 04:34 | where uarwaws stands for Up and Running
with Amazon Web Services.
| | 04:38 | Now that the files are on this server, I'll
need to move them to the web root, and I'll
| | 04:42 | need elevated privileges for this step; sudo
mv uarwaws/* /var/www/html/. Go back to the
| | 04:55 | browser and reload the test page.
| | 05:00 | Some warnings will be shown about the missing
configurations, which is normal, and I'll get
| | 05:04 | to them in this chapter.
| | 05:06 | ImageMagick should indicate that it's
installed, and the instance DNS name and
| | 05:10 | instance type should also be displayed.
| | 05:13 | I've configured the SDK
with my AWS credentials.
| | 05:17 | If troubleshooting is needed, check the
configuration by editing config.inc.php and verify that
| | 05:23 | the correct keys are there,
both the key and the secret.
| | 05:26 | Now that I've verified that the server is ready
to go, I'm going to look at the next service
| | 05:30 | that I'm going to use in this
application, Amazon S3 for storage.
| | Collapse this transcript |
| Storing objects in Amazon S3| 00:00 | Amazon Simple Storage Service, or S3 for short,
is a bit more in-depth than the name implies.
| | 00:06 | I'll go over some of the key concepts in order
to better understand the storage service.
| | 00:10 | An S3 object is a computer file,
ranging in size from nothing to 5 terabytes.
| | 00:16 | There's no limit to the number
of objects that can be stored.
| | 00:19 | Objects each have a unique key as an identifier,
which is a Unicode string less than 1024 bytes long.
| | 00:25 | Each object has a value, which is the
content, what's actually going to be stored.
| | 00:30 | In addition to a value, objects also have
metadata, which I'll describe in just a moment.
| | 00:35 | Objects can also be optionally versioned,
meaning that I can have one unifying record
| | 00:39 | with different iterations of a file.
| | 00:41 | This doesn't save money, as each version is
treated like an object for billing purposes,
| | 00:45 | but it does simplify organization.
| | 00:47 | I'm going to use versioning in
the watermarking application.
| | 00:51 | Each object has metadata associated with it.
| | 00:53 | This isn't every piece of metadata
available, but these are the most useful.
| | 00:57 | Starting off with the Date, which is when the
object was created in S3; the Last-Modified
| | 01:02 | date, which is when it was modified in S3; Content-
Length, which is the size of the object in bytes;
| | 01:09 | Content-MD5, which is a base64 checksum of
the object value, encoded as a 128-bit
| | 01:16 | MD5; x-amz-version-id, which is a version
ID that was assigned by Amazon if versioning
| | 01:22 | is being used; and finally
the x-amz-storage-class.
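The Content-MD5 metadata just described can be reproduced locally to sanity-check an upload. Here is a minimal sketch in Python (illustrative only; the course application itself is written in PHP):

```python
import base64
import hashlib

def content_md5(value: bytes) -> str:
    """Base64-encoded 128-bit MD5 digest of an object's value,
    the same form S3 uses for the Content-MD5 header."""
    digest = hashlib.md5(value).digest()
    return base64.b64encode(digest).decode("ascii")

print(content_md5(b"hello world"))
```

Comparing the locally computed value against the Content-MD5 that S3 reports confirms the object arrived intact.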
| | 01:26 | There are three S3 storage classes for objects
which affect price, redundancy, and how quickly
| | 01:30 | the objects can be retrieved.
| | 01:33 | Standard is the default, with practically complete
durability, meaning your data is protected against
| | 01:37 | bad events, and it has the
highest retrieval speed.
| | 01:41 | The second class, Reduced redundancy storage,
is a cheaper solution, but it's not backed
| | 01:45 | up as often and not always the fastest.
| | 01:48 | And last but not least, there is Glacier.
This is a slow but very cheap solution.
| | 01:53 | The name Glacier refers to slow-moving, but persistent.
| | 01:56 | This is good for big archives where you
can wait hours for the transfer.
| | 02:00 | Glacier files are different, though, as
Glacier is treated as a separate service from S3.
| | 02:05 | Objects need an organizational structure which
brings me to S3 buckets, which are top-level
| | 02:10 | containers for storing objects.
| | 02:12 | Unlike EC2 servers, which go in a particular
availability zone, a bucket goes into a region.
| | 02:17 | The same rule of thumb applies;
keep the content close to the users.
| | 02:21 | S3 buckets have security and access control.
They're restricted by default, but can be made public.
| | 02:28 | Each AWS account can have up to a
hundred buckets at any given time.
| | 02:31 | Buckets can have versioning
turned on for their objects.
| | 02:35 | Versioning can be switched on, but
can never be turned completely off, only suspended.
| | 02:38 | Buckets cannot be transferred between accounts,
but once they're emptied, they can be deleted
| | 02:42 | and the name is reused.
| | 02:43 | Why is name reuse important? Bucket names are
unique across all of S3, so if a bucket
| | 02:49 | name is too generic, there's a good
possibility somebody else is using it.
| | 02:53 | Bucket names need to be at least three
characters long and no longer than 63 characters. They
| | 02:58 | are alphanumeric, meaning a through z and 0
through 9, and can also include a dash and a period.
| | 03:04 | However, they must start and
end with a letter or a number.
| | 03:08 | An additional restriction applies: they
cannot be formatted like an IP address.
| | 03:13 | That's a lot to absorb, so here's an example:
place.to.put.my-files is okay, but
| | 03:20 | .whereisthe$20youoweme doesn't work
because it starts with a period and has
| | 03:24 | a dollar sign in the middle of it.
| | 03:26 | With that said, in US regions there are more
relaxed naming restrictions. The names can be
| | 03:31 | up to 255 characters in length.
| | 03:33 | There is also a larger set of
characters that can be used.
| | 03:36 | Uppercase and underscores are also allowed.
| | 03:40 | With that said, this can lead to compatibility issues,
so just because you can, doesn't mean you should.
| | 03:45 | The best practice is to use the regular naming
convention even if you're using a US region.
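The standard naming rules above can be captured in a short validator. Here is a sketch in Python (my own helper, not part of AWS or the course code; it checks only the regular, non-US-relaxed rules):

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Standard S3 bucket naming rules: 3-63 characters; lowercase
    letters, digits, dashes, and periods; must start and end with a
    letter or number; must not be formatted like an IP address."""
    if not 3 <= len(name) <= 63:
        return False
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name):
        return False
    # The additional restriction: no IP-address lookalikes
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False
    return True

print(is_valid_bucket_name("place.to.put.my-files"))   # okay
print(is_valid_bucket_name("whereisthe$20youoweme"))   # dollar sign: rejected
```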
| | 03:50 | With this context, I'm going to create a bucket
to put files in for the watermarking application.
| | 03:56 | Going back to the EC2 Management console,
I'm going to click Services in the header,
| | 04:01 | and then go to Storage and S3.
| | 04:04 | Next, click Create Bucket.
| | 04:07 | For the Bucket Name, call it something
unique such as username.test.watermark.
| | 04:13 | I'm going to use jpeck.test.watermark.
| | 04:17 | The default region is US
Standard which covers the entire US.
| | 04:20 | I have the option to create detailed
access logs to a particular bucket.
| | 04:25 | I've no need to do that,
so I'll just click Create.
| | 04:29 | There are three tabs at the top: None, which
just shows the bucket list; Properties, which
| | 04:36 | allows the adjustment of a number of options; and
Transfers, which is a live list of bucket traffic.
| | 04:42 | Back on the Properties tab, open Permissions.
| | 04:46 | By default, as the owner, I have permission to list,
upload and delete, view and edit permissions.
| | 04:52 | However, I want everyone to view the files
through the public application, so I'm going
| | 04:56 | to click Add more permissions.
| | 04:58 | The select list under Grantee has a number
of options. Everyone is most appropriate.
| | 05:04 | If I were using IAM, there
would be more granular controls.
| | 05:08 | I want everybody to be able to view
files, but not list, edit or delete.
| | 05:12 | There is another link
here for Add bucket policy.
| | 05:16 | Bucket policies define
access to Amazon S3 resources.
| | 05:18 | They are written in JSON and offer much more
granular control, such as when you're dealing
| | 05:23 | with multiple accounts.
| | 05:25 | The last button here is
Add CORS Configuration.
| | 05:28 | CORS Configuration, or Cross Origin Resource
Sharing, allows restrictions on domains for
| | 05:33 | interacting with content.
| | 05:35 | This is useful if I want to restrict the
domains that images can be embedded on for example.
| | 05:40 | Click Save, then close Permissions
and open Static Website Hosting.
| | 05:48 | S3 allows completely static websites to be
hosted, meaning no server-side languages like
| | 05:53 | PHP or Ruby can be used.
| | 05:56 | This is great for a lightweight hosting
option for a simple website like a brochure that
| | 06:00 | has client-side interactivity or maybe some embedded
forms from other domains, such as a Google form.
| | 06:07 | Close Static Website Hosting.
| | 06:10 | Logging is the same option that was shown during
creation, so we can just skip that for now.
| | 06:18 | The next option is Notifications.
| | 06:21 | Notifications send messages when a Reduced
Redundancy Storage Object has been lost.
| | 06:26 | I'm not covering the Storage Class,
so it can be safely ignored.
| | 06:30 | Closing Notifications and going to Lifecycle.
| | 06:34 | Lifecycle allows for archiving to Amazon
Glacier on an object or sets of objects.
| | 06:39 | Lifecycle can't be used with versioning.
| | 06:41 | I'm not covering Glacier, so
this can also be safely ignored.
| | 06:46 | Close Lifecycle and open Tags.
| | 06:49 | Tags allow for clearer line items for billing
which is useful if you are determining what
| | 06:53 | is costing you money.
| | 06:56 | Closing Tags and going to Requester Pays.
| | 07:00 | Requester Pays is kind of what it sounds like:
whoever is downloading the file pays for
| | 07:04 | the charges of the request and file transfer.
| | 07:07 | No anonymous access is allowed in
this mode because of the billing.
| | 07:11 | Finally Versioning, which I do want for the
watermarking application. Remember turning
| | 07:16 | Versioning on is a one-way operation for a
given bucket. It can't ever be turned off.
| | 07:22 | Click Enable Versioning.
| | 07:24 | It'll ask you, are you sure? And you say, yes.
| | 07:27 | There's a new option here,
Enabled and Suspended.
| | 07:31 | Suspended versioning means that the version
identifier for any objects added after versioning
| | 07:35 | is suspended will be null.
| | 07:37 | This is similar, but not quite the same
as the Bucket without Versioning at all.
| | 07:41 | But it can be used as a
placeholder if you can't delete the bucket.
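The enabled-versus-suspended behavior can be pictured with a toy model (a sketch of the semantics only, not the real S3 API): while versioning is enabled, each write is stamped with a new version ID; while suspended, writes get a null version ID.

```python
import itertools

class ToyVersionedBucket:
    """Toy sketch of S3 versioning states; not the real API."""
    def __init__(self):
        self._counter = itertools.count(1)
        self.state = "Enabled"   # once on, only Enabled or Suspended

    def suspend(self):
        self.state = "Suspended"

    def put_version_id(self):
        # Enabled writes get real IDs; suspended writes get null
        if self.state == "Enabled":
            return "v{}".format(next(self._counter))
        return None

bucket = ToyVersionedBucket()
enabled_id = bucket.put_version_id()    # a real version id
bucket.suspend()
suspended_id = bucket.put_version_id()  # null
```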
| | 07:45 | On the left, click the Bucket Name again.
| | 07:47 | This will bring up a file view.
| | 07:49 | Currently there aren't any files here.
| | 07:51 | Files can be uploaded via the Actions menu; files
can also be organized into folders within
| | 07:56 | the bucket, but to keep things
simple I'll just ignore that for now.
| | 08:00 | Now that the Bucket is ready, I'm going to
configure the watermarking application.
| | 08:04 | Switching to the Terminal with a remote connection,
reconnecting if necessary, enter the following
| | 08:08 | command: sudo nano -w /var/www/html/config.inc.php.
Look for the definition of the bucket name
| | 08:20 | and update it with the unique
Bucket name created earlier.
| | 08:23 | I'm going to use jpeck.test.watermark.
| | 08:28 | Your bucket name is going to be different.
| | 08:30 | Leave the rest of values alone for now.
| | 08:32 | I'll be configuring it piece
-by-piece as I go along.
| | 08:35 | Press Ctrl+X to Exit and then Y to Save.
| | 08:38 | Now that there's a place to put them, I'm
going to create the mechanism for uploading
| | 08:42 | images and keeping track
of them in a data store.
| | Collapse this transcript |
| Using Amazon SimpleDB to store records| 00:00 | The image watermarking application has very
modest needs when it comes to data persistence.
| | 00:05 | As S3 already keeps track of when a file is
created, updated, and what version it's on,
| | 00:10 | there are only four things that
I need to store in a database.
| | 00:13 | The file name of the image, whether the
watermark has been applied and the height and width
| | 00:18 | of the image for proper HTML sizing.
| | 00:21 | Everything can be stored as flat data, so
there is no need for a relational database.
| | 00:25 | Amazon SimpleDB is a non-
relational database store.
| | 00:29 | Records are stored as key-value pairs, not
as tables. The structures are optimized for
| | 00:33 | retrieving and updating data, which
in turn makes it easy to scale.
| | 00:38 | Non-relational data stores are referred
to as NoSQL, also known as not only SQL.
| | 00:43 | There are a number of reasons why it makes sense for me
to use Amazon SimpleDB for this kind of application.
| | 00:49 | The data is automatically indexed as it's
added and the hardware is provisioned, the
| | 00:53 | data is replicated, and the
performance has already been tuned for me.
| | 00:56 | This means that there's very low management
overhead, which allows me to focus on developing
| | 01:00 | products, and it's fast out of
the box which is very nice.
| | 01:03 | SimpleDB's pricing structure is unique, in
that when you exceed the free tier, the charge
| | 01:09 | is per query based on the amount of work done.
| | 01:12 | No queries means no charge, so it's good for keeping
costs down in a passive or low-volume application.
| | 01:18 | Amazon SimpleDB, like many of the other
Amazon services, has its own vocabulary.
| | 01:22 | A domain is a collection of data items with
the same structure; each data item has a unique
| | 01:28 | identifier, its name, which can be
thought of as a primary key.
| | 01:33 | Items can have up to 256 distinct attributes,
which can be kind of conceptualized as column
| | 01:39 | headers in a spreadsheet.
| | 01:41 | Attributes store values. To get perspective
into the relationship, items have attributes
| | 01:46 | and each attribute can have a value.
| | 01:49 | Values can be thought of as
individual cells in a spreadsheet.
| | 01:52 | All the values are stored in the same format
which is a UTF-8 string which is basically text.
| | 01:58 | It makes a lot of sense for me to demonstrate
the Amazon SimpleDB interface, but there is
| | 02:02 | a bit of a problem.
| | 02:04 | It's not in the AWS Management Console.
| | 02:07 | There is an official JavaScript scratchpad
which allows queries to be run and data
| | 02:10 | to be traversed, but it's not supported, and as of
this writing it is broken in browsers like Chrome.
| | 02:17 | This doesn't mean that SimpleDB is abandoned.
It is still a good service, it just doesn't
| | 02:20 | come with a user interface.
| | 02:22 | So I'll use an alternative. There are two good and
free ones: SdbNavigator, which is an extension
| | 02:27 | for Chrome, is well
maintained and performs well.
| | 02:30 | I'll be demonstrating with that.
| | 02:32 | If you use Firefox, SdbTool, which
is a Firefox extension, is also available.
| | 02:37 | I'm going to switch to a Chrome browser and
navigate to https://chrome.google.com/web
| | 02:45 | store/category/extensions.
| | 02:51 | I'm going to search for SdbNavigator, click
ADD TO CHROME, click Add and it's been added.
| | 03:04 | A new icon will be added in the upper
right. Click on it to open the interface.
| | 03:09 | Enter in the Access Key and the Secret Key,
then select the Region that the EC2 instance
| | 03:23 | is in, which will be US-
East and click Connect.
| | 03:29 | By default, there are no domains to store data
in, so let's create one, click Add domain.
| | 03:35 | It will ask for a name.
| | 03:36 | I'm going to use it to store information about
watermarked images, so just call it watermarkedimages.
| | 03:44 | When I select the domain and run
the default query it will be empty.
| | 03:47 | There is only one property, itemName.
| | 03:50 | This is the default and the
primary key for the record.
| | 03:53 | Additional attributes which are known as
properties here can be manually added in this interface.
| | 03:57 | However, that's kind of a pain.
| | 03:59 | As an alternative, the PutAttributes API call,
which adds or updates a record, can just arbitrarily
| | 04:05 | create a new attribute on the fly.
| | 04:07 | This attribute is created for the entire domain,
and then the attribute is added to the item
| | 04:11 | with a specified value.
| | 04:13 | New attributes are not
automatically added to any existing items.
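The behavior just described, attributes created on the fly but never backfilled onto existing items, can be sketched with a toy model in Python (my own illustration, not the SimpleDB API):

```python
class ToyDomain:
    """Toy sketch of a SimpleDB domain: items are attribute->value
    maps, and a new attribute never appears on existing items."""
    def __init__(self, name):
        self.name = name
        self.items = {}

    def put_attributes(self, item_name, attributes):
        # Creates the item if needed; sets only the given attributes
        self.items.setdefault(item_name, {}).update(attributes)

    def get_attribute(self, item_name, attribute):
        return self.items.get(item_name, {}).get(attribute)

domain = ToyDomain("watermarkedimages")
domain.put_attributes("first", {"watermark": "n"})
domain.put_attributes("last", {"height": "100"})
# "first" never gains a height, and "last" never gains a watermark
```

This mirrors what the SdbNavigator demonstration in this segment shows: one item has a watermark value but no height, the other a height but no watermark.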
| | 04:17 | Back in the browser, I'm going
to demonstrate this behavior.
| | 04:20 | Notice that I can't currently add any records
until I add an attribute. Click Add property
| | 04:25 | and name it watermark.
| | 04:29 | Now I can click Add record.
| | 04:32 | For the name I'll call it first and for the
value of watermark I'll say n, click Update
| | 04:39 | to save the item.
| | 04:41 | Click Add property again,
and this time call it height.
| | 04:45 | Notice that for the test
item, the height is not set.
| | 04:49 | Click Add record and we'll call
it last and with a height of 100.
| | 04:57 | This time the watermark attribute has no value.
| | 05:00 | Delete both these records by clicking on the
check box next to the item name and clicking
| | 05:04 | Delete record, say Yes to the confirmation.
| | 05:08 | I now have a place to store information about
individual images so we'll add it to the configuration
| | 05:14 | of the watermarking application.
| | 05:16 | Switching to the terminal with a remote connection
and reconnecting if necessary enter the following
| | 05:20 | command; sudo nano -w /var/www/html/config.inc.php.
Navigate to the SimpleDB domain setting and
| | 05:33 | type watermarkedimages. Press Ctrl+X to
exit and then Y to save.
| | 05:40 | The next thing I'm going to need is a mechanism that
can build a queue of images that need watermarking.
| | Collapse this transcript |
| Managing workflow with the Simple Queue Service| 00:00 | In a Cloud application, functionality is often
distributed between many different components.
| | 00:05 | Managing communication between these components
can be difficult without a centralized mechanism.
| | 00:10 | In the watermarking application, I've
purposely decoupled the act of uploading images and
| | 00:15 | the act of placing a watermark on the image to
simulate multiple servers with individual roles.
| | 00:20 | To manage the communication between the components, I
am going to use the Amazon Simple Queue Service.
| | 00:25 | The Simple Queue Service, or SQS, is a
distributed queue system that stores messages between
| | 00:30 | decoupled components that don't have or
don't need a direct communication mechanism.
| | 00:35 | There are three distinct roles in this kind of
system: Producer, which generates messages,
| | 00:40 | in the watermarking application this will
occur when the file is uploaded; the queue,
| | 00:45 | which is the simple queue service itself,
it's a temporary repository for the messages,
| | 00:49 | and finally the consumer, that
reads and deletes messages.
| | 00:53 | In the watermarking application, this will
be the component that actually places the
| | 00:56 | watermark on the image.
| | 00:58 | To illustrate the process,
here's the watermarking workflow.
| | 01:01 | First, a file is uploaded
that needs watermarking.
| | 01:06 | A message is then sent to the queue, then
the watermarker gets the message from the
| | 01:12 | queue, making it invisible to other
consumers and starts working on watermarking.
| | 01:18 | Finally, when processing is complete, the
message is deleted and the queue is clear.
| | 01:23 | There are a number of reasons to use a queue.
| | 01:26 | Primarily, it's a buffer which helps resolve
issues when a producer is producing faster
| | 01:30 | than the consumer can handle.
| | 01:33 | Another scenario is when there's intermittent
access to a consumer, such as a network disruption,
| | 01:37 | time limited availability, or a system failure.
| | 01:40 | In all of these instances, the producer
doesn't know or care what the consumer is doing. It
| | 01:44 | just prepares the messages.
| | 01:46 | When the consumer is ready, it just
takes the next few messages from the queue.
| | 01:50 | This starts a countdown timer where the
messages are invisible to any other consumer.
| | 01:54 | This prevents multiple consumers
from claiming the same message.
| | 01:58 | Upon task completion, the consumer deletes
the message completely, indicating success.
| | 02:02 | If the consumer is unable to complete the task,
the message times out and is available again
| | 02:07 | for another consumer to process.
| | 02:08 | There is a 10-message limit for
producing and reading messages per call.
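The produce, claim, time-out, delete cycle described above can be sketched as a toy queue in Python (an illustration of the semantics only; the real application talks to SQS through the AWS SDK for PHP):

```python
import time

class ToyQueue:
    """Toy sketch of SQS visibility-timeout semantics; not the real API."""
    def __init__(self, visibility_timeout=30):
        self.visibility_timeout = visibility_timeout
        self._messages = {}   # id -> [body, invisible_until]
        self._next_id = 0

    def send(self, body):
        # Producer: add a message, immediately visible
        self._next_id += 1
        self._messages[self._next_id] = [body, 0.0]
        return self._next_id

    def receive(self, now=None):
        # Consumer: claim the first visible message and hide it from
        # other consumers until the visibility timeout expires
        now = time.monotonic() if now is None else now
        for mid, entry in self._messages.items():
            if entry[1] <= now:
                entry[1] = now + self.visibility_timeout
                return mid, entry[0]
        return None

    def delete(self, mid):
        # Deleting the message signals that processing succeeded
        self._messages.pop(mid, None)

queue = ToyQueue(visibility_timeout=30)
mid = queue.send("test.jpg")
print(queue.receive(now=0))    # claimed; invisible to others for 30s
print(queue.receive(now=10))   # still invisible, nothing to claim
print(queue.receive(now=31))   # timed out, claimed a second time
queue.delete(mid)
```

Letting the claim time out and seeing the message become claimable again is exactly what the Receive Count of 2 in the console demonstration indicates.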
| | 02:13 | I am going to demonstrate how to create a queue
which will be used for the watermarking application.
| | 02:18 | Open the browser and navigate to the
Management Console, if you're not already there.
| | 02:22 | If you are in the menu, go to
Services > App Services and SQS.
| | 02:29 | There are no queues so I am
going to create a new one.
| | 02:32 | The queue name must be a combination of up to 80
letters and numbers, underscores and hyphens.
| | 02:37 | Let's call the queue images-
to-watermark with hyphens.
| | 02:44 | There's a number of queue attributes, the
visibility timeout which is the amount of
| | 02:47 | time that a message is not visible to other
consumers, the message retention period which
| | 02:51 | deletes unclaimed messages, maximum message
size in kilobytes, the delivery delay in seconds
| | 02:57 | and the receive message wait time, which allows consumers to
wait before closing the connection if there
| | 03:01 | aren't any messages to claim immediately.
| | 03:04 | No changes are needed, so
just click Create Queue.
| | 03:08 | Once the queue is created, details about the queue
configuration are shown at the bottom of the screen.
| | 03:13 | There's also a tab for permissions which by
default only allows the queue owner who is
| | 03:16 | me, to access it.
| | 03:18 | I am going to send a
message through the queue.
| | 03:21 | At the top, click Queue
Actions and then Send a Message.
| | 03:26 | For the text, I'll enter a
fake file name test.jpg.
| | 03:32 | Click Send Message.
| | 03:35 | Notice that there is a unique identifier for the
message and an MD5 hash of the body for consistency checks.
| | 03:41 | Click Close then click the Refresh button.
| | 03:44 | There is now a message in the queue.
| | 03:47 | To read it, click Queue Actions
again, then View/Delete Messages.
| | 03:51 | Click Start Polling for Messages to
start reading from the front of the queue.
| | 03:55 | The message has been shown.
| | 03:57 | Note that there is a
countdown timer at the bottom.
| | 03:59 | This is the visibility of the message.
| | 04:01 | I am going to let it time out which will
allow the message to be claimed by others.
| | 04:08 | Click Start Polling for Messages again.
| | 04:11 | This time the Receive Count is set to 2.
| | 04:13 | This indicates that the message was not
deleted and has been claimed a second time.
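The claim, timeout, and reclaim cycle just demonstrated can be simulated with a toy in-memory queue. This is not the SQS API, just a sketch of the visibility-timeout and receive-count behavior:

```python
import time

class TinyQueue:
    """Toy in-memory queue illustrating visibility timeout and receive count.
    Not the SQS API -- just a simulation of the behavior shown in the console."""

    def __init__(self, visibility_timeout: float):
        self.visibility_timeout = visibility_timeout
        self.messages = []  # dicts: body, invisible_until, receive_count

    def send(self, body):
        self.messages.append({"body": body, "invisible_until": 0.0, "receive_count": 0})

    def receive(self):
        now = time.monotonic()
        for msg in self.messages:
            if msg["invisible_until"] <= now:
                # Claiming a message hides it from other consumers for a while.
                msg["invisible_until"] = now + self.visibility_timeout
                msg["receive_count"] += 1
                return msg
        return None

    def delete(self, msg):
        self.messages.remove(msg)

q = TinyQueue(visibility_timeout=0.1)
q.send("test.jpg")
first = q.receive()
print(first["receive_count"])   # 1
print(q.receive())              # None: message is invisible to other consumers
time.sleep(0.15)                # let the visibility timeout lapse
second = q.receive()
print(second["receive_count"])  # 2, as in the console demo
q.delete(second)
print(q.receive())              # None: message deleted
```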
| | 04:18 | This time, delete the message by clicking
the check box under Delete and then delete
| | 04:23 | one message. Am I sure? Yes.
| | 04:27 | Now that the queue has been created, I am
going to switch to the terminal with the remote
| | 04:30 | connection, reconnecting if necessary, and update the
configuration: sudo nano -w /var/www/html/config.inc.php.
| | 04:44 | For the queue name, enter the same name I gave the
queue, images-to-watermark, then save and exit.
| | 04:55 | Now that I have a mechanism for communicating,
I'd like to send a message to myself when
| | 04:58 | processing is complete.
| Pushing notifications with the Simple Notification Service| 00:00 | The Amazon Simple Notification Service, or SNS for
short, is a web service for sending notifications.
| | 00:07 | SNS allows push notifications to be sent,
meaning that whatever is getting the notifications
| | 00:12 | doesn't need to check SNS for them.
| | 00:15 | Notifications can be sent to a number of
recipients, including http for servers, email and SMS
| | 00:21 | text messages to phones.
| | 00:24 | Amazon SNS is intended for small notifications
and the features of the service reflect that.
| | 00:29 | Starting with the maximum message size of
64 kilobytes, the restrictions are structured
| | 00:35 | to be optimal for simple
text and tiny structures.
| | 00:38 | Each subscriber must confirm the subscription
to notifications which can be a bit cumbersome,
| | 00:42 | especially as this includes servers which need
a mechanism for confirming the subscription
| | 00:47 | in addition to being able
to receive a notification.
| | 00:50 | The notifications can be sent in one of two
formats: plain text for emails and SMS, and JSON
| | 00:55 | encoding, which is optimal
for server communication.
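With the JSON format, a single publish can carry a different body per protocol. A minimal sketch of that message structure (the text and IP address are made-up example values; with a live client this string would be published with the message structure set to JSON):

```python
import json

# A JSON-structured SNS message: a "default" entry plus optional
# per-protocol overrides, letting one publish carry a long email body
# and a short SMS summary. All text below is example content.
message = {
    "default": "There was a bad upload.",
    "email": "There was a bad upload from IP 203.0.113.10. "
             "The file was rejected because it is not an image.",
    "sms": "Bad upload rejected.",
}

payload = json.dumps(message)
print(len(payload) <= 64 * 1024)  # True: within the 64 KB limit mentioned above
```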
| | 00:58 | SNS is good for internal status notifications
and triggered events, such as a payment problem
| | 01:04 | being reported to an
administrator or server triggers.
| | 01:08 | It's not good for mass emailing users due
to subscription confirmations, formats and
| | 01:12 | other limitations.
| | 01:14 | The Simple Notification Service has its own
set of terminology, but in comparison to other
| | 01:19 | services, it's fairly small.
| | 01:20 | A topic is the name of a group of subscribers.
| | 01:24 | Topics are typically named for the subject in
the notifications or some sort of underlying
| | 01:28 | event type like server failures.
| | 01:31 | As mentioned before, clients
can subscribe to a topic.
| | 01:35 | A topic owner, the account that created
the topic, can directly subscribe clients.
| | 01:40 | Regardless of how they subscribe, the
client must always confirm the subscription.
| | 01:45 | Amazon SNS subscriptions are interesting as
they really highlight how these messages are
| | 01:50 | not intended for end-users.
| | 01:52 | Subscribers to SNS notifications need
to specify the protocol and endpoint.
| | 01:57 | The protocol can be things like HTTP, email,
SMS and others, while the endpoint is the
| | 02:02 | recipient address; this could be
an email address, a URL, or an SMS number.
| | 02:08 | When the subscription request is received, a
confirmation is sent for explicit opt-in
| | 02:12 | by replying or clicking a link.
| | 02:15 | If unconfirmed, no notifications will be sent and an
unconfirmed subscriber will be removed in three days.
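The confirmation rule can be sketched as a simple check. The three-day window comes from the description above; the function name and dates are illustrative:

```python
from datetime import datetime, timedelta, timezone

# An unconfirmed subscription is dropped after three days (per the rule above).
CONFIRMATION_WINDOW = timedelta(days=3)

def is_expired(requested_at: datetime, now: datetime, confirmed: bool) -> bool:
    """An unconfirmed subscription expires once the window has passed."""
    return not confirmed and (now - requested_at) > CONFIRMATION_WINDOW

requested = datetime(2013, 6, 1, tzinfo=timezone.utc)
print(is_expired(requested, requested + timedelta(days=1), confirmed=False))  # False
print(is_expired(requested, requested + timedelta(days=4), confirmed=False))  # True
print(is_expired(requested, requested + timedelta(days=4), confirmed=True))   # False
```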
| | 02:22 | Notifications are not archived so they can't be
reviewed later, so take that into consideration
| | 02:27 | when using the service.
| | 02:28 | There are a number of different steps in the
SNS workflow, so I'll go over them at a high level.
| | 02:33 | First, the topic is created.
| | 02:36 | A recipient subscribes and
confirms their subscription.
| | 02:40 | Notifications are then published which
trigger SNS to deliver the notifications.
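The four-step workflow just listed can be sketched with a toy topic. This is not the SNS API, just the shape of the flow; the phone-number endpoint is hypothetical:

```python
class TinyTopic:
    """Toy pub/sub topic mirroring the workflow above: create, subscribe,
    confirm, publish. Not the SNS API -- just a sketch of the flow."""

    def __init__(self, name):
        self.name = name
        self.subscribers = []  # dicts: endpoint, confirmed, inbox

    def subscribe(self, endpoint):
        sub = {"endpoint": endpoint, "confirmed": False, "inbox": []}
        self.subscribers.append(sub)
        return sub

    def confirm(self, sub):
        sub["confirmed"] = True

    def publish(self, message):
        # Only confirmed subscribers receive the notification.
        for sub in self.subscribers:
            if sub["confirmed"]:
                sub["inbox"].append(message)

topic = TinyTopic("watermark-bad-upload")
phone = topic.subscribe("+15550100")       # hypothetical SMS endpoint
topic.publish("dropped: unconfirmed")      # nothing delivered yet
topic.confirm(phone)                       # "reply yes" in the real flow
topic.publish("There was a bad upload")
print(phone["inbox"])                      # ['There was a bad upload']
```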
| | 02:44 | Now that I've provided some context into how
Amazon SNS works, let's configure it for use
| | 02:49 | with the watermarking application.
| | 02:52 | Switching to the browser, go
to the Management Console.
| | 02:55 | From the menu, go to Services > App Services >
SNS. As this is the first time that I have
| | 03:02 | used this system, there are no topics created.
| | 03:05 | I can see the number of SNS topics that I
manage along with the number of subscriptions
| | 03:09 | in the upper right-hand corner.
| | 03:10 | I am going to create a new topic
for the watermarking application.
| | 03:14 | So click on Create New Topic.
| | 03:17 | Topic names can be up to 256 alphanumeric characters,
meaning A through Z and 0 through 9, with
| | 03:23 | hyphens and underscores also allowed.
| | 03:25 | I am going to use SNS to notify
myself when a bad file is uploaded.
| | 03:28 | So I'll give it a logical
name, watermark-bad-upload.
| | 03:35 | For the Display Name, I'll give it a descriptive
short name, WM for watermark, BAD for problem
| | 03:42 | and UL for upload.
| | 03:44 | Remember, these notifications aren't for
end-users, so brevity is just fine.
| | 03:49 | When ready, click Create Topic.
| | 03:53 | Basic information about
the topic is now shown.
| | 03:57 | Topic Amazon Resource Name, or ARN, is a
unique identifier that is used programmatically.
| | 04:02 | Topic owner, region and
display name are self-explanatory.
| | 04:06 | The list of subscribers which is
currently empty is shown below.
| | 04:11 | Clicking on All Topic Actions,
a number of options are shown.
| | 04:16 | Publish sends messages which
I'll demonstrate shortly.
| | 04:20 | Topic Policy gives control over who
can publish and subscribe to the topic.
| | 04:24 | The topic delivery policy controls the number
of retries, seconds of delay and rate limiting.
| | 04:31 | It's good to have an audience for
notifications even if it's an audience of one, so click
| | 04:35 | Create New Subscription.
| | 04:38 | This form has two options, the first, protocol,
determines how messages will be delivered.
| | 04:43 | I want to send myself a text message whenever
a non-image file is uploaded, so I'll select
| | 04:47 | SMS as the protocol.
| | 04:50 | The endpoint indicates where the messages
are going, which will be my producer's cell phone
| | 04:54 | number. Click Subscribe.
| | 04:59 | A text message will be sent from
Amazon to confirm the subscription.
| | 05:03 | Follow the directions which are to reply
with a text saying yes and the display name.
| | 05:07 | So it will be yes WMBADUL.
| | 05:12 | Once confirmed, close the message,
click Refresh on the AWS page.
| | 05:17 | An explicit subscription ID is shown along
with the protocol, endpoint and subscriber.
| | 05:23 | To test the system, click Publish to Topic.
| | 05:29 | The first input is the subject which
should be left off for SMS messages.
| | 05:33 | If it's included, it will be
shown as part of the message.
| | 05:36 | The larger box for message is self-explanatory,
but the option beneath allows different messages
| | 05:41 | per protocol, such as a longer more detailed
message via email and a simple summary as text.
| | 05:47 | Leave the subject blank and for the message,
I'll type "There was a bad upload from IP..."
| | 05:57 | Click Publish Message; it will show a
confirmation, and just click Close.
| | 06:02 | In a moment, a message will be sent.
| | 06:06 | SMS messages will show the display
name followed by the message itself.
| | 06:10 | Now that I have the topic, I can
perform the final piece of configuration.
| | 06:13 | I am going to switch to the terminal with a
remote connection and reconnect if necessary.
| | 06:18 | Then I'll update the configuration: sudo nano
-w /var/www/html/config.inc.php, and for the
| | 06:30 | SNS_TOPIC we'll do watermark-
bad-upload, exit and save.
| | 06:39 | The notification will now be published if
someone uploads a non-image file to the watermarker.
| | 06:44 | That's not practical in the long
run, but it's good for testing.
| | 06:47 | I have all the services necessary for the
watermarker, so it's time to put it all together.
| Putting it all together| 00:00 | Throughout this chapter, I've been assembling a
suite of services in order to build an image
| | 00:04 | watermarking application.
| | 00:06 | Now that they've been fully
configured, I'll demonstrate the result.
| | 00:10 | In the browser, navigate to
the DNS name of the EC2 server.
| | 00:15 | Make sure you reload the page.
| | 00:17 | All four configuration checks
should now be shown in green.
| | 00:20 | If not, go back to the corresponding
segment and follow the directions at the end.
| | 00:24 | In the menu at the top, click Show.
| | 00:27 | This will show all of the watermarked images
that have been both uploaded and processed.
| | 00:31 | Right now, we don't have
anything to show, so click Upload.
| | 00:35 | This is a very simple
interface for uploading an image.
| | 00:38 | Click Choose File, or Browse depending on the
browser, and select an image to be watermarked.
| | 00:43 | So I'm going to go into my Downloads folder,
where I have the two exercise files for the
| | 00:47 | course, which are Beach.jpg and NotAnImage.text.
| | 00:51 | I'm going to select Beach.jpg
and click Upload Image.
| | 00:58 | Upon completion, there should be four success
messages: Uploaded image to Amazon S3, Item
| | 01:04 | added to Amazon SimpleDB, Filename added to
SQS queue for processing and Uploaded file
| | 01:09 | metric added to CloudWatch.
| | 01:11 | In a different tab, go to the Management
Console then click on S3, click on your bucket and
| | 01:20 | the newly uploaded file should be shown.
| | 01:23 | Go over to SdbNavigator and click Run query.
| | 01:29 | An item with a watermark value of n along
with a height and width will be shown.
| | 01:34 | Returning to the Management Console, go to SQS; the
images-to-watermark queue has a message available.
| | 01:42 | There are a lot of individual components to
keep track of, and AWS provides facilities
| | 01:46 | to keep track of how
each service is operating.
| | 01:49 | Amazon CloudWatch automatically monitors AWS
resources out of the box meaning no additional
| | 01:54 | configuration is required.
| | 01:55 | It can also record custom metrics, which I've
added to the watermarking application to demonstrate.
| | 02:01 | It does take a couple of minutes before
the metrics are reflected in the interface.
| | 02:05 | So if a metric isn't immediately
visible, wait a few minutes and try again.
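For context, a custom-metric datapoint like the application's UploadedFiles metric has roughly this shape when handed to CloudWatch's PutMetricData call. The namespace and values here are my assumptions, not taken from the course code:

```python
# Shape of a custom-metric datapoint like the application's UploadedFiles
# metric. The metric name matches the one shown in the course; the
# namespace and value are illustrative assumptions.
metric_data = [
    {
        "MetricName": "UploadedFiles",
        "Value": 1.0,            # one upload just happened
        "Unit": "Count",
    }
]

# With a live client this list would be sent to CloudWatch, e.g.
# boto3.client("cloudwatch").put_metric_data(
#     Namespace="watermark", MetricData=metric_data)
print(metric_data[0]["MetricName"])
```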
| | 02:10 | Go to CloudWatch which is found in Services >
Deployment & Management > CloudWatch.
| | 02:18 | There are a number of metrics available out
of the box and the application also logs a
| | 02:21 | couple of metrics on its own.
| | 02:23 | Click View Metrics; EBS, EC2,
SNS and SQS metrics are shown.
| | 02:37 | To see details, click on
NumberOfMessagesSent under the SQS queue.
| | 02:43 | I can see that there is activity.
| | 02:45 | I can also change the Time Range by clicking on
Zoom to only see what's happened in the last hour.
| | 02:51 | I can filter the metrics by using the search
bar at the top which I'm going to do now.
| | 02:55 | Type watermark and click Search. Clicking on
the row for UploadedFiles, I can see that
| | 03:04 | I uploaded an image.
| | 03:06 | Switching back to the watermarking application,
click Process. It should show the name of
| | 03:15 | the image being processed and our message
receipt handle, then a series of success
| | 03:19 | messages: image downloaded from S3, watermarked
image uploaded to S3, message deleted from
| | 03:26 | SQS, item updated in Amazon SimpleDB, and the
Processed file metric added to CloudWatch.
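The Process step just described boils down to a fixed sequence of service calls. A sketch of that sequence (every name below is a hypothetical stand-in, not the course's actual PHP code):

```python
def process(filename, receipt_handle):
    """Hypothetical sketch of the app's Process step: each entry stands in
    for one real SDK call, in the order the success messages appear."""
    steps = []
    # 1. Fetch the original from S3 (GetObject).
    steps.append("image downloaded from S3")
    # 2. Apply the watermark locally, then push the result back (PutObject).
    steps.append("watermarked image uploaded to S3")
    # 3. Acknowledge the work by deleting the queue message; SQS deletion
    #    uses the receipt handle, not the message ID.
    steps.append(f"message deleted from SQS ({receipt_handle})")
    # 4. Flip the item's watermark flag in SimpleDB (PutAttributes).
    steps.append("item updated in Amazon SimpleDB")
    # 5. Record the custom metric in CloudWatch (PutMetricData).
    steps.append("Processed file metric added to CloudWatch")
    return steps

for step in process("Beach.jpg", "receipt-handle-123"):
    print(step)
```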
| | 03:33 | If you go to Show, the image that I uploaded
is shown processed with the watermark on it.
| | 03:40 | Now I'm going to test notifications, so go
back to Upload, then Choose File and then
| | 03:47 | I'm going to upload NotAnImage.text,
click Open and then Upload Image.
| | 03:54 | This time an error message will be shown and
a notification will be sent via Amazon SNS.
| | 03:59 | Throughout this chapter, I've explored a number of
major and common application platform services.
| | 04:05 | I configured the software developer kit with
AWS credentials, then I stored objects in
| | 04:10 | Amazon S3, and I created items with
attributes in Amazon SimpleDB for record storage.
| | 04:16 | I managed the application workflow
using the Simple Queue Service.
| | 04:21 | On a bad upload, notifications were
sent with the Simple Notification Service.
| | 04:25 | And finally, I put everything together and
demonstrated how the individual pieces fit
| | 04:29 | within the watermarking application.
| | 04:32 | Given the variety of options available, a
good question is whether or not AWS is a good
| | 04:36 | fit for your needs.
| | 04:37 | I'll explore that in the final
chapter along with where to go from here.
Conclusion
| Determining if AWS is a good fit| 00:00 | Amazon offers a wide variety of web services
from standalone application components to
| | 00:05 | enterprise grade parallel data processing.
| | 00:07 | However, just using AWS isn't going
to auto-magically fix your problems.
| | 00:12 | Depending on your needs, it may be a
square peg in a round hole; not a good fit.
| | 00:17 | A good exercise is to determine at a high
level what your actual needs are, then try
| | 00:21 | to map them to the available services.
| | 00:25 | Amazon's pricing strategy, which in general is
pay for what you use, offers flexibility which
| | 00:29 | is especially good for resources that are
only needed for short periods of time.
| | 00:34 | Consider the overhead of purchasing,
storing and maintaining hardware as well.
| | 00:39 | CloudWatch also offers alerts on billing
thresholds, which can be a good canary in a coal mine
| | 00:43 | if a resource starts being
utilized more than expected.
| | 00:47 | Some AWS services can be used
independently like using S3 for file storage.
| | 00:51 | But if I were to use the relational database
service with a remote web server, performance
| | 00:56 | would really suffer.
| | 00:57 | While this doesn't mean it's all or nothing,
consider the service interdependencies and
| | 01:01 | performance, as a transition from an
existing solution to AWS, may be more involved.
| | 01:07 | Using Amazon Web Services and other Cloud
service providers requires multifaceted trust,
| | 01:13 | in particular that your proprietary and confidential
code and data remains private, secure and reliable.
| | 01:19 | With that said, Amazon Web Services is
a well-established and known quantity.
| | 01:24 | As an example, one of the regions where
it's available is AWS GovCloud.
| | 01:28 | It was designed for US government agencies
and their clients to address regulatory and
| | 01:32 | compliance requirements.
| | 01:33 | If it's good enough for the US
government, it might be an option for you.
| | 01:37 | While I don't like using this phrase, cloud
services do represent a paradigm shift.
| | 01:42 | Some may find resistance within their own
organizations from both those who are operating
| | 01:46 | on stereotypes and aren't really familiar
with cloud services, and those who have a deep
| | 01:50 | knowledge of the risks and benefits of working with
cloud services and may have had a bad experience.
| | 01:56 | For example, one of my clients who provides
cloud services was criticized in an internal
| | 02:00 | security audit for using cloud services.
| | 02:04 | This type of policy inconsistency is
surprisingly common in large organizations.
| | 02:09 | As I learned from a previous employer, you
can't turn a cruise ship on a dime, but you
| | 02:12 | can steer it gradually until it's
facing in the opposite direction.
| | 02:17 | In short, this means change can be difficult
for a large organization, but slow and steady
| | 02:21 | persistence will see positive results.
| | 02:24 | Finally, take some time and evaluate other
solutions as Amazon is not the only Cloud
| | 02:29 | service provider.
| | 02:30 | Ultimately, you are the one who's in the best
position to be able to determine what service
| | 02:34 | provider, if any, is able to fulfill your needs.
| | 02:37 | Research, comparison, and due diligence
will save you potential headaches and money.
| | 02:43 | Now if you wanted to continue learning about
Amazon Web Services, what are some directions
| | 02:46 | you could take?
| Where to go from here| 00:00 | This course offered a broad survey of Amazon
Web Services, but didn't cover every facet
| | 00:05 | to its fullest extent.
| | 00:07 | Reading the friendly manual
is a great way to learn more.
| | 00:10 | I'm going to share three useful links.
| | 00:12 | The first is for the official documentation, which
is a collected gateway to each of the services'
| | 00:16 | docs; the second, for articles and tutorials,
walks through a number of examples
| | 00:21 | for each of the services.
| | 00:23 | The final URL is for code examples and software
developer kits for integrating AWS into applications.
| | 00:30 | I recommend taking a moment and reading through
the source code of the watermarking application,
| | 00:34 | even if you're not a PHP developer.
| | 00:36 | I made a point of clearly commenting the code
to explain what's going on and why and wrote
| | 00:41 | it in a procedural step-by-step
style to improve readability.
| | 00:44 | Feel free to use the logic in
your own applications as well.
| | 00:48 | The CloudWatch monitoring system also
provides configurable alerts that react to metrics
| | 00:52 | that you specify.
| | 00:54 | Try creating some alerts like ten uploads in
a minute triggering a notification to you.
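The suggested alert can be sketched as the parameter set CloudWatch's PutMetricAlarm call accepts. A sketch only; the alarm name, namespace, and topic ARN are hypothetical:

```python
# Fire when UploadedFiles reaches ten within a one-minute period, and
# notify an SNS topic. The namespace, alarm name, and ARN below are
# hypothetical example values.
alarm = {
    "AlarmName": "watermark-upload-burst",
    "Namespace": "watermark",
    "MetricName": "UploadedFiles",
    "Statistic": "Sum",
    "Period": 60,                         # one-minute window, in seconds
    "EvaluationPeriods": 1,
    "Threshold": 10.0,                    # ten uploads
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:watermark-bad-upload"],
}

# With a live client: boto3.client("cloudwatch").put_metric_alarm(**alarm)
print(alarm["AlarmName"])
```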
| | 00:59 | Another way to learn more is just by doing it.
| | 01:02 | Set up your own web server and deploy either your
own application or an open-source application.
| | 01:07 | Try using the Amazon Relational Database
Service instead of a local MySQL server.
| | 01:11 | Finally, when you're done experimenting with
the watermarking application, remember to
| | 01:15 | turn it off or destroy it
completely using the Management Console.
| | 01:18 | This includes the EC2 server, the S3 bucket,
the SimpleDB domain (which you are going to
| | 01:24 | have to use the browser client to do),
the Amazon SQS queue and the SNS topic.
| | 01:30 | This way you're not consuming
resources that aren't actively being used.
| | 01:34 | As I've learned, a
clean site is a happy site.
| Farewell| 00:00 | Amazon Web Services is a
fascinating topic with a lot of depth.
| | 00:04 | There are areas that I haven't even touched
like parallel processing, which offer interesting
| | 00:08 | and extreme solutions.
| | 00:10 | In my lifetime, computer science and hardware
have improved and grown so much, and they continue
| | 00:15 | to change at an amazing pace.
| | 00:17 | By combining these innovative technologies in
new ways, cumbersome solutions become lightweight,
| | 00:22 | modular, and elegant which allows
focus on newer, greater challenges.
| | 00:26 | I appreciate your time and I hope you
enjoyed watching this course as much as I enjoyed
| | 00:30 | writing it and recording it while
working with the team at lynda.com.
| | 00:34 | Please take a moment to provide feedback
through the course homepage on lynda.com.
| | 00:38 | Thank you.