
Up and Running with Amazon Web Services


with Jon Peck

 


Discover how Amazon Web Services (AWS) can be leveraged to deploy and scale your web applications. Author Jon Peck demonstrates how to build a simple application on the Amazon cloud services while introducing the wide variety of products and services provided by AWS.

This course starts with an overview of the foundational services, such as Amazon EC2 for virtual servers, Amazon S3 for online data storage, and Amazon RDS for a scalable database solution. Plus, explore how application services such as the Amazon Simple Notification Service can reduce overhead. Jon combines these services in the final chapter, where he builds, deploys, and monitors an application.
Topics include:
  • What is Amazon Web Services?
  • Understanding the AWS terminology
  • Exploring the foundation services
  • Using tools for management and administration of AWS
  • Signing up for services
  • Launching and managing EC2 instances
  • Configuring the software development kit (SDK) with AWS credentials
  • Storing objects in Amazon S3
  • Pushing notifications
  • Managing the workflow


Author: Jon Peck
Subject: Developer, Cloud Computing
Software: AWS
Level: Intermediate
Duration: 1h 43m
Released: Feb 19, 2013





Introduction
Welcome
00:04Hi! I am Jon Peck and welcome to Up and Running with Amazon Web Services.
00:08In this course, we'll explore how Amazon Web Services can be used to build applications.
00:14I'll start with an explanation of what cloud computing is, then go over the many products
00:18and services within AWS, including Amazon EC2 and the Simple Notification Service.
00:25Throughout the course we'll assemble the services necessary to power a photo watermarking application in the cloud.
00:32We'll cover foundational and application platform services along with many other products that
00:36will help you get rid of the overhead of setting up and managing cloud services that are at
00:40the heart of many applications.
00:42Let's get started!
What you should know
00:00The first chapter of this course describes what cloud services are then goes into a survey
00:04of what the various Amazon Web Services can do.
00:08For a broader overview of what cloud computing is, I recommend watching Cloud Computing First
00:13Look with David Rivers, here in the lynda.com online training library.
00:19The second and third chapters get a bit deeper with practical demonstrations of how to set
00:23up a web application using Amazon Web Services.
00:26This course doesn't teach programming or system administration, but if you have a general
00:30understanding of some of the high-level concepts, it'll be especially helpful.
00:34I'll be demonstrating some commands on a Linux server, but as the emphasis is on the ecosystem
00:39of Amazon Web Services and not on systems administration, I'll describe what the commands
00:43are doing but not describe in depth how they work.
00:47If you need a more in-depth tutorial on web server administration I suggest Up and Running
00:52with Linux for PHP Developers here in the lynda.com online training library.
00:57For the second and third chapters, you're going to need an SSH client for remotely connecting to servers.
01:03For Mac and Linux users, SSH is already installed and available through the terminal.
01:07If you are using Windows, the free program PuTTY can be used to connect, which is available
01:11from the official website at greenend.org.uk.
01:16Now something to keep in mind is that Amazon Web Services is a commercial service that
01:20costs money to use. You're going to need a credit card to sign up and they're going to
01:24verify your contact information.
01:27With that said, there's a free tier for new customers that has a reasonable threshold
01:30that is good for this kind of experimentation.
01:34This course is designed to consume resources within that free tier, but ultimately you
01:38are the one who's responsible for managing services within your own AWS account.
Using the exercise files and assembling an image watermarking application
00:00In this course, I'll be describing and demoing how Amazon Web Services can be used to provide
00:05services within an image watermarking application.
00:09As this is not a programming course, I have written the entire application ahead of time
00:13in order to demonstrate the interaction of these services.
00:17You're going to need to do a little bit of configuration in a couple of files, which I'll walk through
00:21step-by-step as I demonstrate the corresponding service.
00:24Don't worry. You'll get a chance to get your hands dirty when focusing on the AWS management interface.
00:30The image watermarking application is very basic, but it's functional. From a high level
00:35the workflow with the application is:
00:37the user uploads an image to the hosted virtual server.
00:41The program will validate the uploaded file to ensure that it's an image, and if it's not,
00:45send a notification.
00:47If it is valid, I'm going to store the image using Amazon Web Services Storage and save
00:52a record of the image metadata, such as the height and width using a database.
00:57After the image has been uploaded, we'll process the image which will add a watermark. This
01:02includes updating the stored image with the watermark and updating the database to
01:06indicate that the image has been watermarked.
01:09Finally, we'll show all the watermarked images that were listed as watermarked in the database.
01:15The exercise files for this course include the source code of the image watermarking
01:18application, which I wrote in PHP.
01:21There are also two files for testing: an image that will validate and a text
01:26file that obviously won't.
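To make the workflow concrete before looking at the individual services, here is a minimal PHP sketch of the upload-validate-store-queue flow just described. It is illustrative only, not the course's exercise-file code: the helper functions are placeholder stubs standing in for the AWS SDK calls (S3, SimpleDB, SQS, and SNS) demonstrated later in the course.

```php
<?php
// Illustrative sketch of the watermarking upload workflow.
// The helpers below are placeholder stubs; the real exercise files call
// the AWS SDK for PHP (S3, SimpleDB, SQS, SNS) at each of these steps.

function storeInS3($file, $name)      { echo "store $name in S3\n"; }
function saveMetadata($name, $w, $h)  { echo "record {$w}x{$h} for $name in SimpleDB\n"; }
function queueForWatermarking($name)  { echo "queue $name for watermarking via SQS\n"; }
function notifyAdmin($message)        { echo "send admin alert via SNS: $message\n"; }

function handleUpload($tmpFile, $originalName)
{
    // Validate that the uploaded file really is an image.
    $info = @getimagesize($tmpFile);
    if ($info === false) {
        notifyAdmin("Rejected upload: $originalName is not an image.");
        return false;
    }

    list($width, $height) = $info;
    storeInS3($tmpFile, $originalName);            // store the original image
    saveMetadata($originalName, $width, $height);  // record height and width
    queueForWatermarking($originalName);           // hand off to the watermark step
    return true;
}
```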
01:28Without further ado, let's introduce Amazon Web Services.
1. Introduction to Amazon Web Services (AWS)
Cloud computing: Beyond the buzzwords
00:00In this chapter I'm going to introduce cloud computing in a practical way, including key
00:05terms and deciphering some of the acronyms in order to describe what exactly Amazon Web Services is.
00:11I'll then go into a high-level survey of the three major tiers of Amazon Web Services,
00:15which are foundation, application platform and management and administration.
00:21I'll go through options for sourcing applications, then give a brief overview of the history
00:25of Amazon Web Services to give context about how it's grown.
00:29Finally, I'll combine physical with the virtual and discuss why location is important.
00:36Cloud Computing is one of those things that has been mentioned in shiny literature, repeated
00:39at conferences, and used as a universal solution to all your problems.
00:44Is your system too slow? Cloud computing.
00:46Is coffee not strong enough? Cloud computing.
00:50Before I go any further, I'm going to define exactly what cloud computing is and describe
00:55how it can be used.
00:57Cloud computing refers to the use of services built from hardware and software resources,
01:02delivered over a network, which in most cases is the Internet.
01:06The hardware resources are the physical servers and infrastructure, which includes storage,
01:11cooling and networking.
01:12The software resources provide the services themselves, which are the consumable product
01:17that comes in many different forms that I'll discuss in a moment.
01:21Cloud computing typically has a number of characteristics.
01:25First is the delegation of physical and management overhead.
01:29With cloud computing you can select a service to use. You don't have to purchase, set up, or
01:33maintain the servers and infrastructure. You have outsourced this to the cloud service provider
01:37who has already done this ahead of time.
01:39Now cloud computing is both modular and compartmentalized, which allows for a system to be built out
01:45of smaller distinct and interchangeable components that work together.
01:49As a result, these services are highly elastic, which allows for dynamic allocation of resources;
01:55growing and shrinking as needed.
01:57An example of elasticity is providing more databases during peak demand, then removing
02:02them as traffic dies down.
02:04Additionally, the modularity provides a resilient infrastructure in case of an individual component failure.
02:11In those situations a replacement is immediately available, which reduces overall downtime.
02:16Finally, between the elasticity and the resilience, there is an illusion of an infinite supply
02:22of resources available.
02:24There are of course practical limitations, but for all intents and purposes, there is no limit.
02:29Keeping these characteristics in mind, cloud computing can easily be compared to an electrical
02:33utility, where the service, which is electricity, is delivered via network or the power grid,
02:39and the responsibilities are delegated to the utility, including infrastructure, electricity
02:44production and maintenance.
02:46Now there are many different types of cloud computing service models, each of which
02:49has been designated with a different acronym, which can also be used as a buzzword.
02:53I'll focus on the four primary service models that have been recognized by the International
02:57Telecommunications Union, each of which is found in some form within Amazon Web Services.
03:03To help remember these four, I use the word SNIP.
03:07The first letter S is for Software as a Service or SaaS, which refers to application software
03:13installed and operated by cloud providers and used by clients.
03:18The providers manage the infrastructure and platform and the clients just use the software.
03:22For example, both the Google apps and iCloud from Apple are Software as a Service.
03:28The next letter N is for Network as a Service or NaaS, where network and transport capabilities
03:34are provided, traditionally as a virtual private network and bandwidth.
03:38This was introduced as a distinct service model in 2012, but it's generally not needed by
03:43basic cloud users.
03:45The third letter I is for Infrastructure as a Service or IaaS and it's the most basic
03:51model where a computer itself is the service.
03:55IaaS typically provides a hosted virtualized machine where a single physical machine emulates
04:00multiple computing environments that behave like individual computers.
04:04In short, it means you get to use an entire computer's resources without needing a physical
04:09computer sitting right there on your desk or in a rack somewhere.
04:12A VPS or Virtual Private Server is a common example of IaaS.
04:17And finally P, for Platform as a Service or PaaS, which is a complete framework that can
04:23be used to host applications written by clients.
04:27The platform is typically a web server solution stack, including the operating system, web
04:32and database servers, and a programming language execution environment where you can run
04:37something that has been written in a particular language.
04:39An Infrastructure as a Service can be configured and packaged as a platform.
04:44An example of this is a PHP application server.
04:49To review: by remotely delivering services and hiding the complexity and management of
04:54the hardware and software resources, cloud computing can be used to reduce costs and
04:58overhead. This allows clients to focus on developing core products and services, rather
05:04than dealing with managing resources.
05:06Now, something to be aware of: when you're outsourcing the hosting and maintenance of your stuff,
05:11you're entrusting the cloud service provider with your user data, software, and so forth.
05:17With that said, cloud service providers such as Amazon, have become a ubiquitous and trusted
05:21mechanism for securely delivering quality services.
05:24So be sure to evaluate your own security and privacy needs before making a decision about
05:29whether or not cloud services are appropriate.
05:32As a former systems administrator, I can distinctly feel the emotional chill of the early-morning
05:37notification of a failed server that needed maintenance, which depending on the severity
05:42of the problem, could mean that I was in for a drive out to the datacenter in the middle of the night.
05:46Being able to delegate that kind of responsibility to a service and being able to automatically
05:51deploy a replacement, that's the kind of thing that makes cloud computing desirable.
05:55I have gone over these high-level concepts and buzzwords and definitions of just what
06:00cloud computing is in order to give you a foundation of understanding that will help
06:04demystify Amazon Web Services.
What is AWS?
00:00Amazon Web Services is a cloud computing platform that consists of a collection of web services
00:05that have been provided by Amazon.com.
00:09There are several dozen distinct services available in various stages of release. Some
00:13are fully released with service level agreements, some of them are in open beta and others are
00:18announced, but are not fully available to the general public.
00:21In this course, I'll focus on services that have been released and are available for use.
00:26Throughout the Amazon Web Services Home pages there are different charts and lists of services
00:31and some of them actually kind of conflict with one another and aren't completely aligned.
00:35I preferred this particular chart found within the AWS documentation, because it's clear
00:40and easy to read.
00:42At first glance there's a lot to absorb, so I'll walk through each product here at a
00:45practical high level and highlight important services.
00:49I'll go into greater detail about a number of these services and demonstrate some of
00:53the basic ones as part of the image watermarking application I'll be assembling later in the course.
00:59Starting at the bottom, the proverbial backbone of all Amazon Web Services is the Global Infrastructure,
01:04which refers to the worldwide geographical distribution of all their systems.
01:10There are several hundred Availability Zones, which can be practically thought of as a datacenter.
01:15It's a little bit more complex than that, but I'll give greater detail in a moment.
01:20Amazon Web Services has nine distinct geographic regions where their servers are hosted.
01:25Northern Virginia, which is the default, Oregon, California, Ireland, Singapore, Tokyo, Sydney,
01:32Sao Paulo and AWS GovCloud for the US government.
01:37In addition to those regions, there are roughly 40 edge locations, which are places where
01:41the small and large objects are served from as part of their content delivering network.
01:46In general, it's best to locate services close to where the primary users are, in order to
01:51optimize performance and availability. I'll discuss locations in greater depth in an upcoming segment.
Exploring the foundation services
00:00The next tier contains Foundation Services.
00:03These services are in four categories: Compute, Storage, Database and Networking.
00:10There are two Compute Services.
00:12The first, Amazon Elastic Compute Cloud, or EC2, provides what are basically virtualized computers, also
00:18known as Virtual Private Servers.
00:21EC2 is a textbook example of an Infrastructure as a Service.
00:25I'll be using EC2 to host the watermarking application.
00:28An optional companion service called Auto Scaling allows EC2 instances to be dynamically
00:34added or removed in response to monitored resource utilization.
00:38This allows the amount of resources that your application uses to be scaled up or down based on demand.
00:45The next category, Storage, has a number of services that leverage Amazon's distributed network.
00:51First is the Amazon Simple Storage Service or S3.
00:54In short, it can be used to store and serve any data in any format in just about any size,
01:01anywhere from just 1 byte to 5 TB.
01:04S3 is typically used for images, stylesheets, non-executable files and other types of static content.
01:11It can also be used for archives and file storage.
01:14The Simple Storage Service does not need EC2 to function.
01:18I'm going to be using S3 to store images in the watermarking application.
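As a preview of what storing an object in S3 looks like in code, here is a minimal sketch using the AWS SDK for PHP (version 2 style). The credentials, bucket name, and file paths are placeholders for illustration; the course walks through the real credential configuration later.

```php
<?php
require 'vendor/autoload.php'; // AWS SDK for PHP, assumed installed via Composer

use Aws\S3\S3Client;

// Placeholder credentials and region -- substitute your own.
$s3 = S3Client::factory(array(
    'key'    => 'YOUR_ACCESS_KEY_ID',
    'secret' => 'YOUR_SECRET_ACCESS_KEY',
    'region' => 'us-east-1',
));

// Upload a local image; the object key becomes its name within the bucket.
$s3->putObject(array(
    'Bucket'      => 'my-watermark-bucket', // hypothetical bucket name
    'Key'         => 'uploads/photo.jpg',
    'SourceFile'  => '/tmp/photo.jpg',
    'ContentType' => 'image/jpeg',
));
```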
01:23In contrast, the Amazon Elastic Block Store, or EBS, provides persistent storage for EC2
01:29instances which allows application storage to be separated from the virtual machine.
01:34This is useful if an EC2 instance experiences a failure and goes down.
01:39The Elastic Block Store can be moved to a different instance to resume service.
01:43EBS is typically used for databases and file systems.
01:47Finally, the AWS Storage Gateway is a service that connects local file servers, such as
01:53a Network Attached Storage, Direct Attached Storage or Storage Area Network
01:56to store encrypted files using the S3 service.
02:00This provides a secured mechanism for scalable off-site storage and backups.
02:05Next, let's check out the Database Category.
02:09The first service, Relational Database Service is a scalable database featuring automatic
02:14patches and backups.
02:16It can be used as a compatible replacement for relational databases like MySQL, Oracle or
02:21Microsoft SQL Server.
02:23If there's no need for a relational database, AWS has the DynamoDB service, a scalable NoSQL
02:30solution that Amazon claims is the fastest-growing new service in AWS history.
02:35NoSQL is not a relational database format, which means it doesn't support table joins.
02:41The advantage is that it's very, very fast, distributed, and highly redundant.
02:46Use cases for a DynamoDB include user messages and image metadata.
02:51Similar to DynamoDB, the Amazon SimpleDB service is also a managed NoSQL service.
02:57But it's scaled back and more appropriate for smaller datasets.
03:01I'll use SimpleDB as the database for the watermarking application.
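To give a sense of what writing to SimpleDB looks like, here is a minimal sketch using the AWS SDK for PHP (version 2 style). The domain name, item name, attribute values, and credentials are placeholders, and the sketch assumes the domain has already been created.

```php
<?php
require 'vendor/autoload.php'; // AWS SDK for PHP, assumed installed via Composer

use Aws\SimpleDb\SimpleDbClient;

$sdb = SimpleDbClient::factory(array(
    'key'    => 'YOUR_ACCESS_KEY_ID',
    'secret' => 'YOUR_SECRET_ACCESS_KEY',
    'region' => 'us-east-1',
));

// Record image metadata as attributes on an item in a hypothetical "images"
// domain (the domain itself must already exist).
$sdb->putAttributes(array(
    'DomainName' => 'images',
    'ItemName'   => 'photo.jpg',
    'Attributes' => array(
        array('Name' => 'width',       'Value' => '1024', 'Replace' => true),
        array('Name' => 'height',      'Value' => '768',  'Replace' => true),
        array('Name' => 'watermarked', 'Value' => 'no',   'Replace' => true),
    ),
));
```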
03:05The last service listed by Amazon in the Database category is Amazon ElastiCache which provides
03:10scalable in-memory caching, as opposed to disk-based caching.
03:14It's protocol-compliant with memcached, meaning, it can be basically dropped in place.
03:19One would use Amazon ElastiCache for things like caching database lookups.
03:25The final category in the Foundation Services is Networking, which provides a number of
03:29low-level utilities.
03:31The first, Amazon Virtual Private Cloud, or VPC, allows AWS services to be launched in
03:37a virtual network, similar to managing a private data center.
03:41The VPC supports both private and public subnets, which is useful for setting up services private
03:46to an organization or just protecting a public web server.
03:49A hardware VPN service is also available.
03:53The next service, Elastic Load Balancing, works in conjunction with a monitoring service CloudWatch
03:58to distribute traffic across EC2 server instances based on configurable metrics.
04:04I'll get into CloudWatch shortly.
04:06Elastic Load Balancing can also provide fault tolerance, directing traffic away from failed servers.
04:11I won't get into much greater detail about the following advanced network services in
04:15this course, but it's useful to know that they're there.
04:19Amazon Route 53 is a scalable Domain Name System web service, providing control
04:24of domains and subdomains.
04:26Route 53 is heavily distributed across the global infrastructure network to maximize
04:30geographical proximity to end users and lower latency.
04:34Finally, Amazon Direct Connect, which provides the opportunity to get a dedicated network
04:39connection to AWS within some datacenters, which is useful for moving very large amounts
04:44of data very quickly.
04:46That's the end of the foundational services.
04:47I understand that that's a lot to absorb in one sitting,
04:50so I will be discussing in demonstrating a number of them individually in upcoming chapters.
Reviewing the components within application platform services
00:00The middle tier contains the Application Platform Services which perform specific functions
00:05as part of a greater application.
00:07The first category is Content Distribution, which has just one service, Amazon CloudFront.
00:13Amazon CloudFront is a Content Delivery Network for files of any size ranging from tiny
00:18files, like stylesheets and images, to large files like installers or large media, like movies or audio.
00:24Unlike S3, CloudFront serves content from geographically distributed edge locations
00:28to deliver content from locations closer to users which increases performance.
00:34CloudFront supports both origin-pull and push mechanisms, meaning it can serve content found
00:39on an existing web server or files can be uploaded to it to serve.
00:43CloudFront supports both static unchanging content like logo images and dynamic content
00:48driven by a database that changes on regular intervals, like news or a blog.
00:53The next category, Messaging, contains three acronym-filled services.
00:58The Amazon Simple Notification Service or SNS can be used to push messages via a number
01:04of protocols, including HTTP, email and SMS.
01:08However, the service requires verification from the recipient.
01:12So it's good for event-driven internal notifications like warnings, service notifications and status updates.
01:18Due to the verification step, it's more cumbersome to use to send messages to servers.
01:23So there is another service that is tailored for that.
01:26I'm going to use SNS to send admin alerts in the watermarking application.
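For context, publishing a notification like that can look roughly like this with the AWS SDK for PHP (version 2 style). The topic ARN and credentials are placeholders; the topic and its confirmed subscriptions would be set up beforehand.

```php
<?php
require 'vendor/autoload.php'; // AWS SDK for PHP, assumed installed via Composer

use Aws\Sns\SnsClient;

$sns = SnsClient::factory(array(
    'key'    => 'YOUR_ACCESS_KEY_ID',
    'secret' => 'YOUR_SECRET_ACCESS_KEY',
    'region' => 'us-east-1',
));

// Publish an admin alert to an existing topic; subscribers (email, SMS, HTTP)
// must already have confirmed their subscriptions.
$sns->publish(array(
    'TopicArn' => 'arn:aws:sns:us-east-1:123456789012:watermark-alerts', // placeholder ARN
    'Subject'  => 'Watermark app alert',
    'Message'  => 'An uploaded file failed image validation.',
));
```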
01:31The Amazon Simple Queue Service, or SQS, provides a mechanism for automating workflow messages
01:37between computers.
01:38These messages are simple and small.
01:41There's a maximum for each message of 64 KB.
01:44The messages are sent, received, and deleted in batches of 10.
01:48I'm going to use SQS to manage the image watermarking workflow.
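A rough sketch of that kind of queue-driven workflow with the AWS SDK for PHP (version 2 style) follows. The queue URL and credentials are placeholders, and the watermark step itself is left as a comment.

```php
<?php
require 'vendor/autoload.php'; // AWS SDK for PHP, assumed installed via Composer

use Aws\Sqs\SqsClient;

$sqs = SqsClient::factory(array(
    'key'    => 'YOUR_ACCESS_KEY_ID',
    'secret' => 'YOUR_SECRET_ACCESS_KEY',
    'region' => 'us-east-1',
));

$queueUrl = 'https://sqs.us-east-1.amazonaws.com/123456789012/watermark-queue'; // placeholder

// Producer side: enqueue the name of an image that needs a watermark.
$sqs->sendMessage(array(
    'QueueUrl'    => $queueUrl,
    'MessageBody' => 'photo.jpg',
));

// Worker side: receive pending messages, process each, then delete it.
$result = $sqs->receiveMessage(array('QueueUrl' => $queueUrl));
foreach ((array) $result->get('Messages') as $message) {
    // ... apply the watermark to the image named in $message['Body'] ...
    $sqs->deleteMessage(array(
        'QueueUrl'      => $queueUrl,
        'ReceiptHandle' => $message['ReceiptHandle'],
    ));
}
```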
01:52Neither SQS nor SNS is good for messaging groups of users, which brings me to the last messaging service.
01:59The Amazon Simple Email Service, or SES, provides bulk transactional emails such as notifications
02:06to users upon events.
02:07It can also be used for newsletters and other sorts of large mailings.
02:12SES also provides support for DomainKeys Identified Mail in association with the domain
02:17to improve deliverability and to fight spam.
02:20Email can be sent via SMTP for an easy replacement solution or using an application programming
02:26interface or API.
02:28The third category, Search, has one service, Amazon CloudSearch, which depending on the
02:34graph that you look at isn't available, but it is.
02:38Amazon CloudSearch is a stand-alone autoscaling fully managed search platform.
02:42Boasting near real-time indexing, CloudSearch is intended to be an easier to implement,
02:47maintain, and scale solution than stand-alone solutions like Apache Solr.
02:52Need to do parallel processing? The Distributed Computing category has a couple of powerful
02:57solutions, which I'll cover at a high level now.
02:59But in general, these are more advanced topics that I won't get into in this course.
03:03Amazon Elastic MapReduce, or EMR, is a hosted Apache Hadoop open source framework for data
03:09intensive applications on clustered hardware running on EC2 and S3. While the name MapReduce
03:16can evoke geospatial analysis and actual mapping,
03:19it's really something completely different.
03:21In short, Elastic refers to the ability to scale up and down, and mapping breaks data
03:26into smaller chunks.
03:28The chunks are processed in parallel then recombined, or reduced, into a final product
03:34that can be downloaded.
03:37Elastic MapReduce is designed for working with huge datasets spanning gigabytes or terabytes.
03:42AWS provides options for Workflow Management as well, starting with the Amazon Simple Workflow Service or SWF.
03:50Despite the name, it's not that simple.
03:53Acting as a coordination hub for applications, SWF maintains an application state between
03:58its pieces and components.
04:00Each workflow execution is tracked, the progress is logged, and tasks are assigned
04:05and dispatched to a particular host.
04:07SWF is useful for complex applications with multiple steps in a distributed workflow.
04:13The final components of the Application Platform Services are the Libraries & Software Development Kits.
04:19These interact with Amazon Web Services, but aren't services themselves.
04:23AWS provides Libraries & SDKs including Java, PHP, Python, Ruby and .NET.
04:31While this is not a programming course, I will be demonstrating small amounts of code.
04:36I've selected PHP for examples, because of its broad user base and readability,
04:40and I'll provide all the code in order to focus on Amazon Web Services.
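As a small taste of what those SDK examples will look like, here is one way the AWS SDK for PHP (version 2 style) can be wired up to a set of credentials; the key, secret, and region shown are placeholders, and the course covers configuring the real credentials in a later chapter.

```php
<?php
require 'vendor/autoload.php'; // AWS SDK for PHP, assumed installed via Composer

use Aws\Common\Aws;

// Build a service locator from one set of credentials (placeholders shown).
$aws = Aws::factory(array(
    'key'    => 'YOUR_ACCESS_KEY_ID',
    'secret' => 'YOUR_SECRET_ACCESS_KEY',
    'region' => 'us-east-1',
));

// Individual service clients can then be pulled from the locator by name.
$s3  = $aws->get('s3');
$sns = $aws->get('sns');
$sqs = $aws->get('sqs');
```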
Exploring tools for management and administration
00:00The highest tier, Management & Administration, is less about services and more about controlling services.
00:07The first category, Web Interface contains just one item, the Management Console.
00:13The Management Console is the primary consolidated web interface for managing AWS services.
00:19I'll be using this interface extensively throughout this course.
00:23In addition to the Web Interface, there's also a native mobile application called AWS
00:27Console for Android and a series of stand-alone command-line tools that provide alternative
00:31and sometimes more direct management than the Web Interface.
00:35The next category, Identity & Access contains a hybrid of services and features.
00:40I won't be demonstrating items in this category as Identity Management is a topic unto itself
00:45and Billing is self-explanatory.
00:47I'll start with the AWS Identity and Access Management, or IAM, which I have to admit is a clever name.
00:55IAM provides an identity management and access control system for managing users and groups.
01:00Using identities, access to AWS resources is granted and denied through permissions.
01:07Identity Federation is actually a subset of IAM allowing identities from outside resources,
01:12such as a corporate directory, to be used to control access without the need to duplicate identities.
01:18Consolidated Billing is less of a service and more like a feature.
01:21It allows multiple AWS accounts to be billed to a central account.
01:25This is useful for organizations that have multiple individuals or departments with their own accounts.
01:30Deployment & Automation provides mechanisms for managing and scaling groups of services in bulk.
01:36In particular, the AWS Elastic Beanstalk is a scaling Platform as a Service that uses AWS services.
01:43Instead of setting up a configuration manually, I can just grab an off-the-shelf solution
01:47stack, and Beanstalk handles provisioning, load balancing, autoscaling and health monitoring.
01:52Supporting applications written in .NET, PHP, Python, Ruby and other languages, Beanstalk
01:59packages and configures existing solutions neatly while still providing access to tweak settings.
02:04If the Beanstalk solutions aren't custom enough, the AWS CloudFormation system allows custom
02:09templates of AWS service configurations for custom solution stacks using a scriptable interface.
02:15The final management component is Monitoring using Amazon CloudWatch.
02:21CloudWatch is one of the few systems that doesn't require configuration out of the box,
02:24as it monitors AWS resources automatically, including utilization, performance and operational health.
02:32CloudWatch is not limited to AWS resources.
02:34In particular, custom metrics can be measured and reacted to, using Put API requests.
02:40Individual threshold alarms can be set based on particular metrics that can send a notification
02:44when something goes particularly badly.
02:48With all the recorded metrics, visual representation with graphs and statistics is included to
02:52allow humans like myself to visualize the data.
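To illustrate the custom-metric idea mentioned above, here is a minimal sketch of a Put API request via the AWS SDK for PHP (version 2 style); the namespace, metric name, and credentials are placeholders for illustration only.

```php
<?php
require 'vendor/autoload.php'; // AWS SDK for PHP, assumed installed via Composer

use Aws\CloudWatch\CloudWatchClient;

$cloudWatch = CloudWatchClient::factory(array(
    'key'    => 'YOUR_ACCESS_KEY_ID',
    'secret' => 'YOUR_SECRET_ACCESS_KEY',
    'region' => 'us-east-1',
));

// Record one occurrence of a custom metric under a hypothetical namespace.
// An alarm could then be configured against this metric from the console.
$cloudWatch->putMetricData(array(
    'Namespace'  => 'WatermarkApp',
    'MetricData' => array(
        array(
            'MetricName' => 'ImagesWatermarked',
            'Value'      => 1,
            'Unit'       => 'Count',
        ),
    ),
));
```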
02:55Finally, Auto Scaling leverages the CloudWatch monitoring to allow controlled scaling
03:00of EC2 instances based on whatever conditions are desired.
Exploring options for sourcing applications and AWS history
00:00Up to this point, all the services and systems in AWS have been created by Amazon with the
00:05exception of some of the open-source and proprietary server software on the back end.
00:09These combine to provide a mechanism for hosting and deploying custom applications.
00:13The top tier, Your Applications, actually has a double meaning beyond applications
00:17that you write and deploy.
00:19The AWS Marketplace offers a wide variety of commercially packaged software, with solutions
00:24ranging from free to thousands of dollars a year, but often pennies per hour.
00:30The software is delivered in two formats.
00:32The first is Amazon Machine Images for immediate deployment, with billing through Amazon Web
00:36Services, and the second is Software as a Service, where the seller performs the deployment and hosting
00:42of the software, and then bills you and collects payment directly.
00:46I'm not going to demonstrate any software from the AWS marketplace, but it's good to
00:50know that it's an available option.
00:53This survey of Amazon Web Services touched on the majority of their services and systems,
00:57but there wasn't always such a wide array of options.
01:00To give some perspective on where Amazon Web Services started and how they've evolved,
01:04I've put together a simple timeline of the selection of the major events and service launches.
01:09Amazon Web Services launched in 2002 with the initial free version allowing third-party
01:14sites to search and display items from Amazon.com and to put items into shopping carts.
01:19Not a lot of functionality, but it served a need.
01:22In October 2004, the services expanded with Alexa Web Information Service for web crawling
01:27information and expanded on the Amazon.com integration, including product information,
01:33images and reviews.
01:35March of 2006 brought Amazon Simple Storage Service, also known as S3, which leverages the
01:41same infrastructure used by Amazon.com today.
01:44In August, Amazon launched Elastic Compute Cloud, or EC2, their virtual machine rental service.
01:50In December 2008, Amazon released SimpleDB, a distributed and redundant NoSQL database system.
01:58Expanding on that offering in October of 2009, Amazon released Relational Database Service
02:03as a replacement for MySQL databases.
02:06In January of 2011, the Amazon Simple Email Service was released facilitating bulk emailing.
02:12What was the cumulative impact of all this?
02:14In April 2012, a report by DeepField showed that one-third of all Internet traffic accesses
02:20at least one facet of Amazon.com's web services.
02:24That is hugely significant.
02:26Over its history, Amazon has expanded the suite of services available, not only to promote
02:30its own brands, but to provide tools that developers and system administrators can leverage
02:35to build their own applications, independent of Amazon.com storefronts.
02:40As AWS expanded its services, the supporting infrastructure grew accordingly, which has
02:44an impact on how users can deploy services.
Exploring physical locations within the global infrastructure
00:00Amazon Web Services has its own naming system that it uses to describe its services and
00:05the locations are no exception.
00:07I'm going to go over a few key terms and give context about how they work together.
00:12These terms primarily apply in the context of Amazon Elastic Compute Cloud servers, but
00:16are not exclusive to EC2.
00:18I'll start with the smallest component then move out.
00:21The first term is instance.
00:24An instance is a single EC2 server.
00:26Remember, EC2 servers are virtual private servers, which means several EC2 instances
00:31may reside on the same physical server.
00:35Instances reside in availability zones, sometimes abbreviated as AZ.
00:40Availability zones are in distinct physical locations which may have multiple datacenters.
00:45Two different users within the same availability zone may actually be using resources across
00:49multiple datacenters.
00:51With that said, the important thing is that the availability zone contains instances and
00:56that they should be treated as the same location.
00:59Moving back a little bit, a region is a large distinct geographic area.
01:05Currently, there are 9 regions available across most continents.
01:09Each region contains several availability zones which are tightly networked together,
01:14but still engineered to isolate failure from one another.
01:17The advantage of this is speed and interconnectivity, but the disadvantage is that despite their
01:21isolation, an event in an availability zone can affect other availability zones in the region.
01:27To mitigate this, regions are isolated from one another to a certain extent.
01:32Regions are still connected to each other, but not as directly.
01:35This provides fault tolerance, improved stability, and serves to contain issues within a region.
01:41When selecting a region, one of the primary factors is proximity.
01:44It is best to choose a region that is closer to both you and your users.
01:48Now any user can use any region, but the further away it is, the greater the number of connections,
01:53hops, and so forth the data has to travel across, and each step introduces some delay.
01:59The less work it takes to send data back and forth, the greater the speed.
02:03Like talking to someone in a room, it's easier if they are next to you.
02:07Additionally, sometimes there are regional requirements, such as in Europe, where there are
02:11regulations about where servers can be physically located.
02:14While Amazon Web Services has a very good track record for stability and availability,
02:18problems do occur for a multitude of reasons.
02:22Over the past couple of years, a number of high-profile events have occurred that affected entire regions.
02:27The following are exceptions, but I'm mentioning them in the context of why region separation
02:31and location is important, as these events affected single regions, but not the entire network.
02:37In April 2011, a malfunctioning of Elastic Block Store in a single availability zone
02:42ended up affecting the entire region, including a Relational Database Service.
02:47In June of 2012, a major electrical storm affected data centers in the Eastern US knocking
02:52out almost the entire region.
02:54In October, a bug caused Elastic Block Store to get stuck becoming unable to process requests,
02:59which had a rippling effect across multiple servers.
03:02Then in December, a data state error caused an Elastic Load Balancing Service event.
03:07In each of these cases, the issues impacted large numbers of customers across multiple
03:12availability zones, but were still contained within a particular region.
03:16While these events are isolated and AWS has been architected for high availability, problems do happen.
03:22So what should be done to mitigate them while dealing with multiple instances? Well, keeping
03:26all instances in a single availability zone will result in a significant impact if that
03:32availability zone fails.
03:33Instead, by diversifying the locations of instances across multiple availability zones,
03:38the system will become more fault-tolerant.
03:41On the other hand, the configuration would potentially be more complex, introducing additional
03:46latency or slowness when communicating between availability zones that wasn't perceptible
03:50within a single AZ.
03:52Additionally, some services are unavailable across availability zones.
03:57Research the capabilities of the services and also how much work you want to do before
04:01determining how best to scale up.
04:04In larger architectures, it's possible to create high-availability systems that can
04:07operate across regions.
04:10There are a number of things to consider in these circumstances, including synchronization,
04:14which isn't always possible across regions for services like the Relational Database Service.
04:19So other database solutions like MySQL replication should be used instead.
04:22With greater geographic distance comes higher latency, or slowness, as well.
04:28Each region is geographically distinct, so be aware of the regional data regulations
04:32as it may be legal to store some kinds of customer data in one region, but not another.
04:37These are large-scale architectural issues that have known and working solutions.
04:41So keep these considerations in mind if the need to scale like this arises.
04:46To review some key points about location, keeping close proximity improves speed.
04:51Diversifying multiple instances across multiple availability zones reduces risk overall.
04:57Mistakes do happen.
04:58Be it a bug, operator error, or someone digging through a fiber backbone.
05:02No application and system is perfect.
05:05Natural events can and will occur that will impact infrastructure in unforeseen ways.
05:10Therefore, even though it might seem impossible, always plan for a black swan event: a major
05:15surprise event that is obvious in hindsight.
05:18For example, should you keep backups in the same availability zone? It seems ironic to
05:23plan for something that can't be seen, but there are enough possibilities that can be
05:26anticipated safely.
05:28Throughout this chapter, I've been providing the context that you'll need to understand
05:31Amazon Web Services.
05:33First, I discussed what cloud computing was, why it is desirable, and how it can be used.
05:38Then, I described at a high level, what Amazon Web Services really is.
05:43I explored the foundation services, including Elastic Compute Cloud and database options.
05:48Next, I surveyed components within application platform services, such as the messaging and
05:53workflow solutions.
05:55No system would be complete without tools for managing and administering the services,
05:59including Identity Management and the Management Console.
06:02I introduced options for commercially sourced applications, then reviewed the history of
06:06Amazon Web Services to give perspective on how it has evolved.
06:09And finally, I described some of the key terms and relations of the AWS Global Infrastructure.
06:16With this context of both what comprises Amazon Web Services and how it's structured, I can
06:20now sign up and start building the foundation for the watermarking application.
2. Instantiating and Configuring an EC2 Server
Signing up for AWS
00:00Now that you have a high-level overview of Amazon Web Services, it's time to put it to good use.
00:05In this chapter, I'm going to walk through signing up for Amazon Web Services, then explore
00:10how to launch and manage an Amazon Elastic Compute Cloud server which will host
00:14the watermarking application.
00:16I'll demonstrate how to remotely connect to the EC2 server, then how to set up Amazon Linux,
00:21including Apache installation.
00:24To use Amazon Web Services, you'll need to sign up for an account.
00:27This costs no money, but does require a valid credit card for authorization and takes a
00:31couple of minutes.
00:33Navigate to aws.amazon.com.
00:37If you already have an account, click on My Account/Console and go to AWS Management Console
00:42and skip to the next segment.
00:44Otherwise, click on the Sign Up button.
00:48Sign in with your existing Amazon.com account if you have one or create a new one if necessary.
00:59Next, payment information needs to be provided.
01:02There is no fee to sign up and you won't be billed unless you use non-free services, but
01:06they want to make sure there's a card on file in case you do anything that has a charge.
01:11Select your credit card type, enter the card name, number and expiration date.
01:17Specify your billing address and click continue.
01:20If you submit an address that the system doesn't exactly recognize, it'll prompt you to use
01:25a suggested address in its place.
01:29When you're ready, click Continue.
01:33The third step is identity verification. An automated system will call you and ask you
01:37to enter or speak a four-digit PIN.
01:40Enter your phone number then click Call Me Now.
01:46The phone call came in seconds.
01:47Follow the brief instructions and speak or type the four-digit PIN.
01:52When complete, the page updates, then click Continue.
01:58The final step is confirmation, where they run a test authorization of $1, which is not a charge.
02:04Amazon will send an email when confirmed.
02:08I received the email in one or two minutes.
02:10When ready, navigate back to aws.amazon.com.
02:15In the upper right-hand corner, go to My Account/Console and click AWS Management Console.
Key Amazon EC2 terminology
00:00At this point you probably can't wait to get started using Amazon Web Services and I agree.
00:06So far it's all been theoretical, but necessary.
00:09I'm going to instantiate an EC2 server, but before I do I'm going to cut through some of
00:13the marketing speak, so the terms and concepts make sense.
00:17Earlier, I described an instance as a single EC2 server.
00:21However, it's not quite as simple as that, as different instances have different capabilities,
00:26which I'll get into in a moment.
00:29There is no charge to create or destroy an instance, which is great for experimentation.
00:34Amazon charges by the hour with no partial hours.
00:38New customers are eligible for the free tier for a year on the micro, the smallest type of instance.
00:44Instance types have predictable computing capacity and features, such as number of cores, disk
00:49size, and so forth.
00:52Examples of instance types include micro and high memory extra large instances.
00:57The instance types are further grouped into families with high-level names like standard and high CPU.
01:04The standard group has so many instance types that Amazon has broken them out into generations,
01:09with the first generation containing low-cost, good-performance solutions, and the second
01:13generation with higher performance and cost.
01:17Now there is a lot of marketing labeling, and comparing the different instance sizes
01:20can be a bit awkward.
01:22To compensate, Amazon uses an arbitrary measurement known as a compute unit, as a standardized
01:27measurement of CPU capacity.
01:30While they haven't released their exact mechanism or specs, they claim it's been normalized
01:34across a variety of comparable hardware in order to make it meaningful.
01:39One compute unit is equivalent to a 1.0-1.2 GHz 2007 Opteron or Xeon processor.
01:46The nice thing about this measurement is that it simplifies decisions. A bigger compute
01:51unit is faster.
01:52With this context, I'm ready to launch an EC2 server instance.
Launching an EC2 instance
00:00If you're not at the AWS Management Console, navigate to console.aws.amazon.com.
00:08As previously mentioned, the Management Console provides a web interface to manage Amazon Web Services.
00:13Supporting browsers on the Desktop, Tablet, and Mobile, this is the heart of operations.
00:18I'm going to start by launching an EC2 instance to host my application by clicking on EC2.
00:24By default, the EC2 dashboard starts in the US East region, as indicated both in the Service
00:29Health and in the upper right-hand corner of the toolbar.
00:33The circular arrow allows me to refresh status. I'm going to create a virtual server by clicking
00:39on Launch Instance.
00:43This opens a window giving me three options.
00:46The first, Classic Wizard gives me finite control over how the instance should be configured.
00:51I'm going to demonstrate the Classic Wizard in order to provide a thorough explanation of what and why.
00:57The second option, Quick Launch Wizard, simplifies the amount of upfront configuration.
01:02As you become more acclimated to AWS, this may be a more viable option.
01:06And the final option, the AWS Marketplace, provides a number of out-of-the-box commercial solutions.
01:12For now, select the Classic Wizard and click Continue.
01:16The first step of the process allows the selection of an Amazon Machine Image or AMI.
01:21These provide the base operating system and server software already configured and will
01:25be the basis of the server.
01:27There are four tabs: Quick Start, which provides several dozen disk images directly from Amazon;
01:34My AMIs, which contains images that I've created. Right now I don't have any.
01:39Community AMIs contains contributed disk images; use them at your own risk.
01:44This interface usually takes a little bit of time to load and doesn't offer a lot of
01:48detail about the images.
01:51I can filter the Community AMIs by searching for a keyword like Drupal, so I'll just type
01:57Drupal, press Enter.
02:00The images can also be filtered at a high level for things like Amazon provided images,
02:0532-bit images, and so forth.
02:08The final tab is the AWS Marketplace again. Going back to Quick Start, I'm going to use
02:14the default selection, which is the 64-bit Amazon Linux distribution, Amazon Linux AMI.
02:21It's preinstalled with the AWS API tools, and it's lightweight and both supported and maintained.
02:27Also, this image supports the free tier, which is great for experimentation. Click Select to continue.
02:34The next step, instance details provides instance configuration. I can choose the number of
02:40instances, which I'll keep at 1.
02:41I also have an option to select an instance type, which I discussed earlier.
02:46For now I'm going to stick with the free tier; the micro instance. This is fine for experimentation.
02:53There's also an option for Elastic Block Store Optimized instances, which does a lot of the
02:56heavy lifting upfront in terms of configuration. It's not supported for every instance type,
03:01which also excludes micro.
03:04There are two options about how the instances can be launched.
03:07The default option is the regular hourly charges with no commitments.
03:12The second option, request spot instances, allows instances to be created and billed at
03:16a market rate based on supply and demand.
03:19This can save money while performing large computational tasks.
03:22However, as I'm just hosting a simple application, I won't need this as it is a bit overkill,
03:27so I'll switch back to launch instances.
03:30In this box I have two options; the first, EC2, is the default, which is a public-facing
03:35server with no special networking configuration.
03:38I can choose an Availability Zone manually or just take whatever it gives me. This is
03:44useful if I'd like to distribute instances across availability zones.
03:48The second option, VPC, stands for Virtual Private Cloud.
03:53It shows regardless of whether I've configured a private or public subnet.
03:56I haven't configured anything so at this time I can't click on it. I have no availability zone
04:02preference, so I'll just click Continue.
04:05The Advanced Instance Options gives me an opportunity to tweak the configuration.
04:10I can specify a particular Kernel and RAM Disk providing very granular control over security
04:15fixes and updates and tuning for specialty applications. This can safely be left as default.
04:22Monitoring is an up-sell. CloudWatch monitoring is available by default, but detailed monitoring
04:27can be added at additional charge.
04:30User Data refers to scripts that are executed as root user on the first boot.
04:35Termination Protection prevents destruction of the instance from the Console or API.
04:39Shutdown Behavior allows the instance to be stopped, the equivalent of turning it off,
04:43and terminated, which is when the instance is completely removed.
04:47Finally, there's the IAM Role that can be assigned to the instance. I haven't created any, so
04:51there's no need to change it.
04:52In fact, I don't need to make any changes, so just click Continue.
04:57Storage Device Configuration allows the addition of volumes, such as Elastic Block Store volumes, and so forth.
05:03If I click Edit, I can change the size of the root volume, toggle whether the volume
05:08is deleted upon termination, and so forth. Clicking on EBS Volumes I can either create
05:14or map a public EBS volume.
05:17Snapshot provides a gigantic list of public snapshots, which frankly is not very usable.
05:24The Volume Size, Device, and so forth can be specified as well.
05:28For now, the default configuration is fine and no changes are needed, so just click Continue.
05:33This next window allows me to add metadata to tag the instance. By default there is a
05:38name that can be associated with the instance, and I can add up to 10 unique keys with optional
05:43values. I'll tag it with a Name, Watermark and Continue. So I'll click here and just
05:48type watermark, and click Continue.
05:53The public and private key pairs provide secured communication to both Windows and Linux servers.
05:58As this is a Linux server, the provided key pair will allow me to SSH into the instance.
06:03I don't have to create a key pair every time I create an instance, as they can be reused,
06:08but since this is the first time, it's required. I'll name it ec2private, then click Create
06:16& Download your Key Pair, which is going to transfer a file named ec2private.pem.
06:24The next step is to configure the firewall.
06:25A security group is just a name for a set of firewall rules. Let's create a new security
06:31group based on the role of the server, which is just a regular web server.
06:34I'm not going to configure SSL, to keep it simple.
06:38I'll name the group Web server (no SSL) and describe it as HTTP and SSH.
06:47There is an existing rule for SSH already written in the classless inter-domain routing
06:53syntax, also known as CIDR.
06:56This rule allows traffic to and from any IP with no subnet mask.
07:00I'm going to create a new rule for HTTP, under Create a new rule, select HTTP.
07:07I have the option to limit traffic with this CIDR syntax, as this is web traffic however,
07:12I'm just going to let anybody access it, click Add Rule.
07:16The TCP list is now updated with the rule on port 80; I can now both connect the instance
07:21for remote administration and for accessing content served on port 80, like web pages, click Continue.
07:29The final step, Review, allows me to review all the preferences that I have set.
07:33Notice that Monitoring says Disabled.
07:37This actually means advanced monitoring; remember it's an up-sell. Scroll down to review everything.
07:43Looks good to me, so I'm going to click Launch to create this instance.
07:49As soon as I clicked Launch the usage hours started counting.
07:52The instance takes a couple minutes to launch.
07:54The pop-up describes a couple of things I can do in the meantime, such as creating additional
07:58status check alarms for additional cost, or creating EBS volumes, also for an additional charge.
08:04For now, just close the window.
Managing EC2 instances from the console
00:00Let's take a look at the newly created instance.
00:03Click on the Instances link on the left-hand menu.
00:06The Instances page within the EC2 dashboard shows that the instance named watermark type
00:11t1.micro is currently running.
00:13If I click on the row, more comprehensive information will be shown at the bottom.
00:19By default, it's moved to the bottom of the screen, but there are three icons on the right
00:23that allow me to resize the instance information.
00:25I'll click the one that's farthest to the right.
00:29The description is verbose and confirms that the instance has been created with the configuration I've specified.
00:34Let's click the tab labeled Status Checks.
00:37There are two basic status checks: System reachability and Instance reachability.
00:42System reachability checks the AWS infrastructure and Instance reachability checks to see if
00:46the instance's operating system is accepting traffic.
00:49I have the option to add an alarm, but I won't do that now.
00:52Let's click on Monitoring.
00:54By default, CloudWatch Basic measures 10 default items, once every five minutes;
01:00average CPU Utilization, Disk Reads, and so forth.
01:04Times are displayed in Coordinated Universal Time (UTC).
01:07I also have the option to set a different Time Range.
01:10The default is to show activity within the last hour.
01:14The final button, Tags, shows the Key Name with a Value watermark that I set during the wizard.
01:20I'm going to click on the row with the watermark instance, nothing happens.
01:26Just below the toolbar, there are two buttons, Launch Instance and Actions.
01:31Launch Instance will bring up the wizard again.
01:34Next to it, Actions allows me to perform actions on selected instances.
01:38Make sure that the row with watermark is checked and click Actions.
01:43Over a dozen Actions will be shown in three groups: Instance Management, Actions, and CloudWatch Monitoring.
01:50The first group, Instance Management provides configuration, information, and other options.
01:55The first action within Instance Management is Connect, which will provide instructions
01:59on how to remotely connect to the instance with an SSH client, and a link to a browser-based Java SSH client.
02:06I'll demonstrate how to connect shortly.
02:08The next action is get system log which displays the boot messages.
02:13This is useful if the instance isn't booting and additional context is needed for troubleshooting.
02:18Create Image allows the manual creation of a duplicatable disk image of the instance.
02:22The disk image will be stored in the Elastic Block Store in the proprietary Amazon machine image format.
02:28Remember that EBS Storage comes at additional cost before you start experimenting.
02:33Images created this way make a good quick and dirty backup as images in EBS can't be
02:37directly downloaded.
02:39They can be moved to S3 and downloaded though.
02:42Add and Edit tags are the same key value pairs that are found in the wizard earlier such as name.
02:48Launch more like this opens the launch instance wizard with the same options as the current
02:51instance, which is useful for making a similar, but not exact, copy of the instance.
02:56Change termination protection, as seen in the wizard, prevents the API and console from
03:01terminating instances.
03:02This is good for mission critical and always on instances.
03:06View/change user data provides an opportunity to change the user data.
03:10The same stuff that was in the wizard, where you can set the values and/or a start up script.
03:14And finally, Change shutdown behavior.
03:17There are two options, Stop which is like turning the server off and Terminate which
03:22is destructive and will delete the instance.
03:25The next group of Actions, generically labeled as Actions, is pretty straightforward.
03:29Terminate, deletes an instance.
03:32Reboot, sends a reboot command to reboot the virtual server.
03:36And Stop/start do about what you expect.
03:38Remember, if the shutdown behavior is set to terminate, the instance will be deleted when stopped.
03:43The final group of Actions, CloudWatch Monitoring actions, are basically shortcuts.
03:48I can enable and disable detailed monitoring directly from this interface.
03:53Detailed monitoring adds seven metrics at a 1-minute frequency for a couple of bucks a month.
03:58The metrics include CPU utilization, network in/out, disk read/write in bytes, and disk
04:03read/write in ops.
04:05Finally, Add and Edit alarms which will send notifications when metrics hit levels that
04:10you've defined.
04:11I'll describe the notification in a bit more detail later.
Remotely connecting to an EC2 instance
00:00Now I'll remotely connect to the instance.
00:03Make sure the watermark instance is highlighted.
00:05Click Actions and go to Connect.
00:07There are two ways to connect.
00:10The first is with an SSH client, which I recommend.
00:14This is available across all platforms; for Mac and Linux, an SSH client should already
00:19be installed and available from the terminal.
00:21I'll demonstrate this first.
00:24For Windows, the free PuTTY client is recommended and I'll demonstrate that after we connect
00:28from Mac and Linux.
00:30If a terminal client is unavailable then the browser-based Java SSH client can be used.
00:35But I don't recommend it due to compatibility issues and will not demonstrate it.
00:40Click on Connect with a standalone SSH Client for the instructions.
00:44I'll demonstrate first with the Mac terminal which will be very similar to the Linux terminal.
00:48I'm going to do a one-time configuration that will make it easier to remotely connect to AWS servers.
00:54Open up a terminal and verify that the .ssh configuration directory exists by making the directory with -p.
01:02This won't harm anything if it's already there. mkdir -p ~/.ssh.
01:11Next, I'm going to move the downloaded private key file to the .ssh folder.
01:16On the Mac, it downloaded to the Downloads directory in my Home folder. It may be a slightly
01:20different path for you.
01:22So mv ~/Downloads/ and then the name of the file which is ec2private.pem ~/.ssh.
01:36Change the permissions on the file to only allow yourself to read the file, chmod 400
01:44~/.ssh and then the name of the file, ec2private.pem.
01:49Finally, edit the SSH configuration.
01:52I'm going to use the Nano editor, but there's no requirement for you to use it.
01:57nano -w ~/.ssh/config. I'm going to insert a line that matches any Amazon AWS server.
02:06So Host *amazonaws.com.
02:11Then, I'll specify the identity file path to the private key file.
02:16Space, space, IdentityFile ~/.ssh/ec2private.pem and then finally on a new line, specify the default user.
02:31So User will be set to ec2-user, similar to the instructions shown on the Amazon page.
02:39Press Ctrl+X to exit and Y to save.
02:42Press Enter and now we're back in the terminal.
02:45Now you can SSH to the server without having to specify the location of the private key file.
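For reference, the finished ~/.ssh/config entry should look roughly like this, assuming the key was saved as ec2private.pem as above (adjust the file name if yours differs):

    Host *amazonaws.com
      IdentityFile ~/.ssh/ec2private.pem
      User ec2-user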
02:50Take a look at the public DNS part of the connect instructions.
02:54This is the hostname of the server and will be different than what is shown on my screen.
02:58So I'm going to select and copy that and switch back to the terminal.
03:05Type ssh and the hostname of the server which I'm just going to paste, press Enter.
03:14As this is the first time connecting to the host, I'll see a one time message asking if
03:17I want to connect to the unknown server.
03:20Type yes and press Enter.
03:24This will add the server to known hosts and I'm connected.
03:27If you're connected as well, skip to the next segment.
03:32For Windows users, the configuration is going to be different.
03:35If you don't already have the full PuTTY suite of software, go to the PuTTY home page and select
03:40the installer.
03:43PuTTY doesn't support the .pem key file directly, but it can import it, which I'll demonstrate.
03:49Start PuTTYgen, go to All Programs > PuTTY > PuTTYgen, then click Load. Where it says
03:58PuTTY Private Key Files switch to All Files.
04:02Then navigate to where you downloaded the .pem file and click Open.
04:06So I put it in Downloads.
04:08You'll get a notice indicating success and click OK.
04:13Click Save private key and say Yes to the passphrase question.
04:18Make sure that the filename is the same as the key pair, which was ec2private, and click Save.
04:26Now that the key has been converted, I can configure PuTTY to connect.
04:31Take a look at the public DNS part of the connect instructions.
04:33This is the Host Name of the server and will be different from what is shown on my screen.
04:37So I'm going to copy that then open PuTTY.
04:42First, paste the Host Name then on the left under Category, click Connection > SSH > Auth.
04:54At the bottom for private key file for authentication, click Browse, go to the Downloads directory
05:01where we have the private key.
05:03Then click on Session, make sure that Host Name is in there, and then we're going to
05:11actually make a small change to that.
05:14So move to the beginning of that line and type a username which will be ec2-user@ and
05:24finally under Saved Sessions, type watermark and click Save.
05:30Double-click on watermark. It will ask you a question because this is the first time
05:35that you've connected to the host; say Yes.
05:40We've now remotely connected from Windows.
Setting up Amazon Linux and Apache web server
00:00Now that I've remotely connected to the EC2 server, I'm going to set up Amazon Linux.
00:05Amazon Linux uses Yellowdog Updater, Modified, known as YUM, for package management.
00:10This is the same RPM-compatible system that is used by Red Hat Enterprise Linux, Fedora, and CentOS.
00:17Then I'm going to demonstrate how to update the server.
00:19Finally, I'll install Apache, the open-source web server, along with the PHP interpreter
00:24and other required libraries.
00:27First, I'm going to update the server using the YUM Package Manager. This is a simple
00:31one-liner; sudo yum update.
00:37YUM will ask if the updates are okay, say yes.
00:43A summary will be shown at the end.
00:45Next I'm going to install some required packages for a minimal web server with PHP support.
00:50I'm also going to install ImageMagick which is required for the watermarking.
00:55Note, as I'm typing this, both the spelling and capitalization of ImageMagick. So we're going to type
00:59sudo yum install gcc make httpd php-common php-cli php-pear php-devel git and then ImageMagick-devel.
01:26YUM will display all the dependencies and ask if I want to proceed, say yes.
01:33Next, I'll update the PECL PHP package manager. Don't worry about memorizing all of this. This is
01:39just a one-time setup; sudo pecl channel-update pecl.php.net.
01:48In order to allow the PHP package managers to make changes to the PHP configuration,
01:53I'll need to specify the location of that configuration.
01:56So sudo pear config-set php_ini /etc/php.ini.
02:05Now I'm going to do the same for PECL; sudo pecl config-set php_ini /etc/php.ini. Then I'll
02:16install ImageMagick for PHP. This will take a minute or two; sudo pecl install imagick.
02:29Just press Enter for the prefix.
02:33For the purposes of demonstration and troubleshooting, I'm going to configure PHP to display errors to the screen.
02:38In a production environment this would not be appropriate.
02:42I'll start by installing Xdebug, which will display additional debugging information in
02:45case there's a problem; sudo pecl install xdebug. Then I'll need to make a small change to
02:55the PHP configuration itself to display errors; sudo nano -w /etc/php.ini. First make a configuration
03:06change on the very first line of the file.
03:09Instead of extension=, replace it with zend_extension=/usr/lib64/php/modules
03:21and then xdebug.so. Press Ctrl+W and search for error_reporting =; change it to E_ALL
03:36| (pipe) E_STRICT, which is the development value shown above, then look for display_errors =,
03:48set to Off, and change that to On. Exit by pressing Ctrl+X, then Y to save.
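Taken together, the relevant php.ini lines should end up looking roughly like this (the module path can vary by system, so check where pecl actually placed xdebug.so):

    zend_extension=/usr/lib64/php/modules/xdebug.so
    error_reporting = E_ALL | E_STRICT
    display_errors = On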
03:58Now that the dependencies are installed, I can set up the Apache web server to start automatically.
04:02This is a one-time configuration, so don't worry about remembering this; sudo chkconfig
04:09httpd on, then start the service, sudo service httpd start.
04:22Switch back to the web browser, copy the DNS from the connect page, open a new tab, and paste it in.
04:32You should see an Amazon Linux AMI test page; if not, check the firewall settings using the
04:37Management Console.
04:39At this point I've completely prepared the instance to start serving an application.
04:44Throughout this chapter I provided an introduction to the heart of Amazon Web Services, the Amazon
04:49EC2 Cloud Servers.
04:51I started by signing up for Amazon Web Services which was a multi-step process.
04:56Next, I defined key Amazon EC2 terminology in order to provide a foundation of understanding.
05:03With that knowledge, I launched an EC2 instance with Amazon Linux, describing each step of
05:08the process as I went along. I demonstrated how to manage EC2 instances from the Amazon
05:13Web Services Console.
05:15No sense in having a server that you can't manage, so I remotely connected to an EC2
05:19instance via SSH with instructions for Mac, Linux, and Windows users.
05:24Finally, I walked through how to set up Amazon Linux and the Apache Web server.
05:29Prepared with a fully functioning server in the cloud, I can start assembling the watermarking
05:33application using Amazon Web Services.
3. Building a Cloud Application with Platform Services
Configuring the software developer kit with AWS credentials
00:00In this chapter, I'm going to assemble a number of common AWS services needed to build a simple
00:05image watermarking application.
00:08First, I'll need to get specific credentials in order to configure the Software Development
00:11Kit that interacts with AWS.
00:14Then I'll demonstrate how to store objects in Amazon S3.
00:17Database records will be stored in Amazon SimpleDB for persistence.
00:20The state of the application will be managed in Amazon Simple Queue Service, then I'll
00:26send an email to myself upon completion with the Amazon Simple Notification Service.
00:30I'll configure application monitoring with Amazon CloudWatch.
00:34Finally, I'll put it all together and demonstrate how everything works.
00:39Now that the web server is up and running, there is one final component that's needed;
00:43the development tools necessary to communicate with AWS.
00:47As mentioned before, these are in the form of software development kits and libraries.
00:51I'll demonstrate the Amazon Web Services Software Development Kit for PHP version 1.
00:57Version 2 is out but at the time of this writing it's so new that most services aren't supported.
01:02Check out the SDK documentation for an overview of what's available.
01:05I'll install the SDK using the PEAR package manager.
01:10If you haven't already done so remotely connect to the EC2 server.
01:14First make sure that the list of available software is up-to-date; sudo pear update-channels.
01:21Then I'm going to add the official channel for Amazon Web Services; sudo pear channel-discover
01:31pear.amazonwebservices.com.
01:36Finally, install the SDK version 1.5.17. There may be a newer version available, so yours
01:42may be different; sudo pear install aws/sdk-1.5.17.
01:51The final step is to configure the SDK with some credentials, but you're going to have
01:55to go back to the browser, go to the Management Console, click on your name, and go to Security
02:01Credentials. Scrolling down, it's going to show you an access key.
02:06I'm going to need both the Access Key ID and the Secret Access Key. We're going to need
02:10both of those in just a moment.
02:13First copy the Access Key ID. I'm going to open up a text document and just
02:17paste it in, then click Show for the Secret Access Key, and paste it in as well.
02:27Now that I have the necessary credentials I can configure the SDK.
02:32Switch back to the Terminal, and then I'm going to need to figure out where the SDK
02:36was installed in order to configure it.
02:38I'll use the pear command to list files, pear list-files aws/sdk.
02:45Based on this list, I can see where the SDK was installed, so I'll change directory to
02:48that folder, cd /usr/share/pear/AWSSDKforPHP/. I'm going to copy the sample configuration
02:59to use it as a base; sudo cp config-sample.inc.php config.inc.php. Then I'll edit the configuration;
03:12sudo nano -w config.inc.php.
03:19I'm going to search for 'key', take out the 'development-key' placeholder, and
03:26replace it with the very first value I copied, the Access Key ID.
03:33And for the Secret Key, I'll copy and paste the second value, the Secret Access Key,
03:39as well. Press Ctrl+X to exit and Y to save.
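For context, the edited section of config.inc.php ends up looking something like the sketch below. The surrounding structure comes from the SDK's config-sample.inc.php, which can differ between SDK versions, and the key values here are placeholders:

    <?php
    // Sketch of the credentials block in config.inc.php after editing.
    // Any other settings from the sample file are left at their defaults.
    CFCredentials::set(array(
        'development' => array(
            'key'    => 'YOUR_ACCESS_KEY_ID',     // paste the Access Key ID here
            'secret' => 'YOUR_SECRET_ACCESS_KEY', // paste the Secret Access Key here
        ),
        '@default' => 'development'
    ));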
03:45As this isn't a PHP development course, I'm not going to demonstrate how to program the application.
03:49All the source code has been fully provided, both as free exercise files, which require
03:54extracting and copying files between my workstation and the EC2 server, and in a public source
03:59code repository which makes the installation very simple.
04:02I recommend reading the source code even if you don't program in PHP. As I walk through
04:07each step, I've included documentation regarding how and why I'm doing everything.
04:12In the console, change directory to the temporary files; cd /var/tmp.
04:19Next use git to clone the public repository; git clone git://github.com/lyndadotcom/uarwaws.git,
04:34where uarwaws stands for Up and Running with Amazon Web Services.
04:38Now that the files are on this server, I'll need to move them to the web root, and I'll
04:42need elevated privileges for this step; sudo mv uarwaws/* /var/www/html/. Go back to the
04:55browser and reload the test page.
05:00Some warnings will be shown about the missing configurations, which is normal and I'll get
05:04to it in this chapter.
05:06ImageMagick should indicate that it's installed, and the instance's DNS name and
05:10instance type should also be displayed.
05:13I've configured the SDK with my AWS credentials.
05:17If troubleshooting is needed, check the configuration by editing config.inc.php and verify that
05:23the correct keys are there, both the key and the secret.
05:26Now that I've verified that the server is ready to go, I'm going to look at the next service
05:30that I'm going to use in this application, Amazon S3 for storage.
Storing objects in Amazon S3
00:00Amazon Simple Storage Service, or S3 for short, is a bit more in-depth than the name implies.
00:06I'll go over some of the key concepts in order to better understand the storage service.
00:10An S3 object is a computer file, ranging in size from nothing to 5 terabytes.
00:16There's no limit to the number of objects that can be stored.
00:19Objects each have a unique key as an identifier, which is a Unicode string less than 1024 bytes long.
00:25Each object has a value, which is the content, what's actually going to be stored.
00:30In addition to a value, objects also have metadata, which I'll describe in just a moment.
00:35Objects can also be optionally versioned, meaning that I can have one unifying record
00:39with different iterations of a file.
00:41This doesn't save money, as each version is treated like an object for billing purposes,
00:45but it does simplify organization.
00:47I'm going to use versioning in the watermarking application.
00:51Each object has metadata associated with it.
00:53This isn't every piece of metadata available, but these are the most useful.
00:57Starting off with a Date, which is when the object was created in S3, the Last-Modified
01:02Date, which is when it was last modified in S3, Content-Length, which is the size of the object in bytes,
01:09Content-MD5, which is a base64 checksum of the object value encoded as a 128-bit
01:16MD5, the Amazon version ID, which is assigned by Amazon if versioning
01:22is being used, and finally the Amazon storage class.
01:26There are three S3 storage classes for objects which affect price, redundancy, and how quickly
01:30the objects can be retrieved.
01:33Standard is the default, with practically complete durability, meaning the data is protected against
01:37bad events, and has the highest retrieval speed.
01:41The second class, Reduced Redundancy Storage, is a cheaper solution, but it's stored with
01:45less redundancy and is not always the fastest.
01:48And last but not least, there is Glacier. This is a slow solution and very cheap.
01:53The name Glacier refers to slow, but persistent, storage.
01:56This is good for big archives where you can wait hours for the transfer.
02:00Glacier files are handled differently, though, as Glacier is treated as a separate service from S3.
02:05Objects need an organizational structure which brings me to S3 buckets, which are top-level
02:10containers for storing objects.
02:12Unlike EC2 servers, which go in a particular availability zone, a bucket goes into a region.
02:17The same rule of thumb applies; keep the content close to the users.
02:21S3 buckets have security and access control. They're restricted by default, but can be made public.
02:28Each AWS account can have up to a hundred buckets at any given time.
02:31Buckets can have versioning for objects.
02:35It can be switched on, but can never be turned off, only suspended.
02:38Buckets cannot be transferred between accounts, but once they're emptied, they can be deleted
02:42and the name is reused.
02:43Why is name reuse important? Bucket names are unique across all of S3, so if a bucket
02:49name is too generic, there's a good possibility somebody else is using it.
02:53Bucket names need to be at least three characters long and no longer than 63 characters. They
02:58are alphanumeric, meaning A through Z and 0 through 9, and can also include dashes and periods.
03:04However, they must start and end with a letter or a number.
03:08An additional restriction applies: they cannot be formatted like an IP address.
03:13That's a lot to absorb, so here's an example: place.to.put.my-files is okay, but
03:20.whereisthe$20youoweme doesn't work because it starts with a period and has
03:24a dollar sign in the middle of it.
03:26With that said, in US regions there are more relaxed naming restrictions. The names can be
03:31up to 255 characters in length.
03:33There is also a larger set of characters that can be used.
03:36Uppercase and underscores are also allowed.
03:40With that said, this can lead to compatibility issues, so just because you can, doesn't mean you should.
03:45The best practice is to use the regular naming convention even if you're using a US region.
03:50With this context, I'm going to create a bucket to put files in for the watermarking application.
03:56Going back to the EC2 Management console, I'm going to click Services in the header,
04:01and then go to Storage and S3.
04:04Next, click Create Bucket.
04:07For the Bucket Name, call it something unique such as username.test.watermark.
04:13I'm going to use jpeck.test.watermark.
04:17The default region is US Standard which covers the entire US.
04:20I have the option to create detailed access logs to a particular bucket.
04:25I've no need to do that, so I'll just click Create.
04:29There are three tabs at the top: None, which just shows the bucket list, Properties, which
04:36allows the adjustment of a number of options, and Transfers, which is a live list of bucket traffic.
04:42Back on the Properties tab, open Permissions.
04:46By default, as the owner, I have permission to list, upload and delete, view and edit permissions.
04:52However, I want everyone to view the files through the public application, so I'm going
04:56to click Add more permissions.
04:58The select list under Grantee has a number of options. Everyone is most appropriate.
05:04If I were using IAM, there would be more granular controls.
05:08I want everybody to be able to view files, but not list, edit or delete.
05:12There is another link here for Add bucket policy.
05:16Bucket policies define access to Amazon S3 resources.
05:18They are written in JSON and offer much more granular control, such as when you're dealing
05:23with multiple accounts.
05:25The last button here is Add CORS Configuration.
05:28CORS Configuration, or Cross Origin Resource Sharing, allows restrictions on domains for
05:33interacting with content.
05:35This is useful if I want to restrict the domains that images can be embedded on for example.
05:40Click Save, then close Permissions and open Static Website Hosting.
05:48S3 allows completely static websites to be hosted, meaning no server-side languages like
05:53PHP or Ruby can be used.
05:56This is great for a lightweight hosting option for a simple website like a brochure that
06:00has client-side interactivity or maybe some embedded forms from other domains, such as a Google form.
06:07Close Static Website Hosting.
06:10Logging is the same option that was shown during creation, so we can just skip that for now.
06:18The next option is Notifications.
06:21Notifications send messages when a Reduced Redundancy Storage Object has been lost.
06:26I'm not covering the Storage Class, so it can be safely ignored.
06:30Closing Notifications and going to Lifecycle.
06:34Lifecycle allows for archiving to Amazon Glacier on an object or sets of objects.
06:39Lifecycle can't be used with versioning.
06:41I'm not covering Glacier, so this can also be safely ignored.
06:46Close Lifecycle and open Tags.
06:49Tags allow for clearer line items for billing which is useful if you are determining what
06:53is costing you money.
06:56Closing Tags and going to Requester Pays.
07:00Requester Pays is kind of what it sounds like: whoever is downloading the file pays
07:04the charges for the request and file transfer.
07:07No anonymous access is allowed in this mode because of the billing.
07:11Finally Versioning, which I do want for the watermarking application. Remember turning
07:16Versioning on is a one-way operation for a given bucket. It can't ever be turned off.
07:22Click Enable Versioning.
07:24It'll ask you, are you sure? And you say, yes.
07:27There's a new option here, Enabled and Suspended.
07:31Suspended versioning means that the version identifier for any objects added after versioning
07:35is suspended will be null.
07:37This is similar, but not quite the same as, a bucket without versioning at all.
07:41But it can be used as a placeholder if you can't delete the bucket.
07:45On the left, click the Bucket Name again.
07:47This will bring up a file view.
07:49Currently there aren't any files here.
07:51Files can be uploaded via Actions; files can also be organized into folders within
07:56the bucket, but to keep things simple I'll just ignore that for now.
08:00Now that the Bucket is ready, I'm going to configure the watermarking application.
08:04Switching to the Terminal with a remote connection, reconnecting if necessary, enter the following
08:08command: sudo nano -w /var/www/html/config.inc.php. Look for the definition of the bucket name
08:20and update it with the unique bucket name created earlier.
08:23I'm going to use jpeck.test.watermark.
08:28Your bucket name is going to be different.
08:30Leave the rest of values alone for now.
08:32I'll be configuring it piece by piece as I go along.
08:35Press Ctrl+X to Exit and then Y to Save.
08:38Now that there's a place to put them, I'm going to create the mechanism for uploading
08:42images and keeping track of them in a data store.
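For a sense of what that upload mechanism looks like in code, here's a minimal sketch using the SDK for PHP version 1 installed earlier. The bucket name and file path are placeholders, and the option names should be checked against the create_object() documentation for your SDK version:

    <?php
    // Minimal sketch: store a local file as an object in S3 (AWS SDK for PHP v1).
    // Assumes config.inc.php has already been set up with your credentials.
    require_once 'AWSSDKforPHP/sdk.class.php'; // adjust the path if the SDK lives elsewhere

    $s3 = new AmazonS3();
    $response = $s3->create_object('jpeck.test.watermark', 'Beach.jpg', array(
        'fileUpload'  => '/var/tmp/Beach.jpg',  // local file to upload
        'contentType' => 'image/jpeg',
        'acl'         => AmazonS3::ACL_PUBLIC   // world-readable, like the bucket permissions set above
    ));

    if ($response->isOK()) {
        echo "Uploaded image to Amazon S3\n";
    }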
Using Amazon SimpleDB to store records
00:00The image watermarking application has very few needs when it comes to data persistence.
00:05As S3 already keeps track of when a file is created, updated, and what version it is on,
00:10there are only four things that I need to store in a database:
00:13the file name of the image, whether the watermark has been applied, and the height and width
00:18of the image for proper HTML sizing.
00:21Everything can be stored as flat data, so there is no need for a relational database.
00:25Amazon SimpleDB is a non- relational database store.
00:29Records are stored as key-value pairs, not as tables. The structures are optimized for
00:33retrieving and updating data, which in turn makes it easy to scale.
00:38Non-relational data stores are referred to as NoSQL, also read as "not only SQL".
00:43There are a number of reasons why it makes sense to use Amazon SimpleDB for this kind of application.
00:49The data is automatically indexed as it's added and the hardware is provisioned, the
00:53data is replicated, and the performance has already been tuned for me.
00:56This means that there's very low management overhead, which allows me to focus on developing
01:00products, and it's fast out of the box which is very nice.
01:03SimpleDB's pricing structure is unique, in that when you exceed the free tier, the charge
01:09is per query based on the amount of work done.
01:12No queries means no charge, so it's good for keeping costs down in a passive or low-volume application.
01:18Amazon SimpleDB like many of the other Amazon services has its own vocabulary.
01:22A domain is a collection of data items with the same structure; each data item has a unique
01:28identifier, a name, which can be thought of as a primary key.
01:33Items can have up to 256 distinct attributes, which can be kind of conceptualized as column
01:39headers in a spreadsheet.
01:41Attributes store values. To get perspective into the relationship, items have attributes
01:46and each attribute can have a value.
01:49Values can be thought of as individual cells in a spreadsheet.
01:52All the values are stored in the same format which is a UTF-8 string which is basically text.
01:58It makes a lot of sense for me to demonstrate the Amazon SimpleDB interface, but there is
02:02kind of a little bit of a problem.
02:04It's not in the AWS Management Console.
02:07There is an official JavaScript scratchpad which allows queries to be run and data
02:10to be traversed, but it's not supported, and as of this writing it is broken in browsers like Chrome.
02:17This doesn't mean that SimpleDB is abandoned. It is still a good service; it just doesn't
02:20come with a user interface.
02:22So I'll use an alternative. There are two good and free ones: SdbNavigator, which is an extension
02:27for Chrome, is well maintained and performs well.
02:30I'll be demonstrating with that.
02:32If you're using Firefox, the SdbTool extension is also available.
02:37I'm going to switch to a Chrome browser and navigate to
02:45https://chrome.google.com/webstore/category/extensions.
02:51I'm going to search for SdbNavigator, click ADD TO CHROME, click Add and it's been added.
03:04A new icon will be added in the upper right. Click on it to open the interface.
03:09Enter in the Access Key and the Secret Key, then select the Region that the EC2 instance
03:23is in, which will be US-East, and click Connect.
03:29By default, there are no domains to store data in, so let's create one, click Add domain.
03:35It will ask for a name.
03:36I'm going to use it to store information about watermarked images, so just call it watermarkedimages.
03:44When I select the domain and run the default query it will be empty.
03:47There is only one property, itemName.
03:50This is the default and the primary key for the record.
03:53Additional attributes which are known as properties here can be manually added in this interface.
03:57However, that's kind of a pain.
03:59As an alternative, the put attributes API call, which adds or updates a record, can just arbitrarily
04:05create a new attribute on the fly.
04:07This attribute is created for the entire domain, and then the attribute is added to the item
04:11with a specified value.
04:13New attributes are not automatically added to any existing items.
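As a rough illustration of that call, here's how an item might be written with the SDK for PHP version 1. The domain, item name, and attribute values are examples, not the application's actual code, so treat this as a sketch:

    <?php
    // Minimal sketch: add or update an item in a SimpleDB domain (AWS SDK for PHP v1).
    require_once 'AWSSDKforPHP/sdk.class.php';

    $sdb = new AmazonSDB();

    // put_attributes() creates the 'watermark' and 'height' attributes on the fly
    // if the domain has never seen them; existing items are left untouched.
    $response = $sdb->put_attributes('watermarkedimages', 'Beach.jpg', array(
        'watermark' => 'n',    // watermark not applied yet
        'height'    => '100'   // every SimpleDB value is stored as a UTF-8 string
    ), true); // true = replace any existing values for these attributes

    if ($response->isOK()) {
        echo "Item added to Amazon SimpleDB\n";
    }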
04:17Back in the browser, I'm going to demonstrate this behavior.
04:20Notice that I can't currently add any records until I add an attribute. Click Add property
04:25and name it watermark.
04:29Now I can click Add record.
04:32For the name I'll call it first and for the value of watermark I'll say n, click Update
04:39to save the item.
04:41Click Add property again, and this time call it height.
04:45Notice that for the test item, the height is not set.
04:49Click Add record; we'll call this one last, with a height of 100.
04:57This time the watermark attribute has no value.
05:00Delete both of these records by clicking on the check box next to the item name and clicking
05:04Delete record; say Yes to the confirmation.
05:08I now have a place to store information about individual images so we'll add it to the configuration
05:14of the watermarking application.
05:16Switching to the terminal with a remote connection, and reconnecting if necessary, enter the following
05:20command; sudo nano -w /var/www/html/config.inc.php. Navigate to the SimpleDB domain and we'll
05:33type watermarkedimages. Press Ctrl+X to exit and then Y to save.
05:40The next thing I'm going to need is a mechanism that can build a queue of images that need watermarking.
Managing workflow with the Simple Queue Service
00:00In a Cloud application, functionality is often distributed between many different components.
00:05Managing communication between these components can be difficult without a centralized mechanism.
00:10In the watermarking application, I've purposely decoupled the act of uploading images and
00:15the act of placing a watermark on the image to simulate multiple servers with individual roles.
00:20To manage the communication between the components, I am going to use the Amazon Simple Queue Service.
00:25The Simple Queue Service, or SQS, is a distributed queue system that stores messages between
00:30decoupled components that don't have or don't need a direct communication mechanism.
00:35There are three distinct roles in this kind of system: Producer, which generates messages,
00:40in the watermarking application this will occur when the file is uploaded; the queue,
00:45which is the simple queue service itself, it's a temporary repository for the messages,
00:49and finally the consumer, that reads and deletes messages.
00:53In the watermarking application, this will be the component that actually places the
00:56watermark on the image.
00:58To illustrate the process, here's the watermarking workflow.
01:01First, a file is uploaded that needs watermarking.
01:06A message is then sent to the queue, then the watermarker gets the message from the
01:12queue, making it invisible to other consumers and starts working on watermarking.
01:18Finally, when processing is complete, the message is deleted and the queue is clear.
01:23There are a number of reasons to use a queue.
01:26Primarily, it's a buffer which helps resolve issues when a producer is producing faster
01:30than the consumer can handle.
01:33Another scenario is when there's intermittent access to a consumer, such as a network disruption,
01:37time limited availability, or a system failure.
01:40In all of these instances, the producer doesn't know or care what the consumer is doing. It
01:44just prepares the messages.
01:46When the consumer is ready, it just takes the next few messages from the queue.
01:50This starts a countdown timer where the messages are invisible to any other consumer.
01:54This prevents multiple consumers from claiming the same message.
01:58Upon task completion, the consumer deletes the message completely, indicating success.
02:02If the consumer is unable to complete the task, the message times out and becomes available again
02:07for another consumer to process.
02:08There is a 10-message limit for producing and reading messages per call.
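To make the producer and consumer roles concrete, here's a minimal sketch of the three calls using the SDK for PHP version 1. The queue URL is a made-up placeholder; the real application builds it from the queue name stored in config.inc.php:

    <?php
    // Minimal sketch: produce, consume, and delete a queue message (AWS SDK for PHP v1).
    require_once 'AWSSDKforPHP/sdk.class.php';

    $sqs = new AmazonSQS();
    $queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/images-to-watermark'; // placeholder URL

    // Producer: announce that a newly uploaded file needs watermarking.
    $sqs->send_message($queue_url, 'Beach.jpg');

    // Consumer: claim the next message, which starts its visibility timeout.
    $response = $sqs->receive_message($queue_url);
    $message  = $response->body->ReceiveMessageResult->Message;

    // ... watermark the file named in (string) $message->Body ...

    // On success, delete the message so it never becomes visible again.
    $sqs->delete_message($queue_url, (string) $message->ReceiptHandle);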
02:13I am going to demonstrate how to create a queue which will be used for the watermarking application.
02:18Open the browser and navigate to the Management Console, if you're not already there.
02:22If you are in the menu, go to Services > App Services and SQS.
02:29There are no queues so I am going to create a new one.
02:32The queue name must be a combination of up to 80 letters, numbers, underscores, and hyphens.
02:37Let's call the queue images-to-watermark, with hyphens.
02:44There are a number of queue attributes: the visibility timeout, which is the amount of
02:47time that a message is not visible to other consumers; the message retention period, which
02:51deletes unclaimed messages; the maximum message size in kilobytes; the delivery delay in seconds;
02:57and the receive message wait time, which allows consumers to wait before closing the connection if there
03:01aren't any messages to claim.
03:04No changes are needed, so just click Create Queue.
03:08Once the queue is created, details about the queue configuration are shown at the bottom of the screen.
03:13There's also a tab for permissions which by default only allows the queue owner who is
03:16me, to access it.
03:18I am going to send a message through the queue.
03:21At the top, click Queue Actions and then Send a Message.
03:26For the text, I'll enter a fake file name test.jpg.
03:32Click Send Message.
03:35Notice that there is a unique identifier for the message and an MD5 hash of the body for consistency checks.
03:41Click Close then click the Refresh button.
03:44There is now a message in the queue.
03:47To read it, click Queue Actions again, then View/Delete Messages.
03:51Click Start Polling for Messages to start reading from the front of the queue.
03:55The message has been shown.
03:57Note that there is a countdown timer at the bottom.
03:59This is the visibility timeout of the message.
04:01I am going to let it time out which will allow the message to be claimed by others.
04:08Click Start Polling for Messages again.
04:11This time the Receive Count is set to 2.
04:13This indicates that the message was not deleted and has been claimed a second time.
04:18This time, delete the message by clicking the check box under Delete and then delete
04:23one message. Am I sure? Yes.
04:27Now that the queue has been created, I am going to switch to the terminal with the remote
04:30connection, reconnecting if necessary, and update the configuration; sudo nano -w /var/www/html/config.inc.php.
04:44For the queue name, enter the same name I gave the queue, images-to-watermark, then save and exit.
04:55Now that I have a mechanism for communicating, I'd like to send a message to myself when
04:58processing is complete.
Pushing notifications with the Simple Notification Service
00:00The Amazon Simple Notification Service, or SNS for short, is a web service for sending notifications.
00:07SNS allows push notifications to be sent, meaning that whatever is getting the notifications
00:12doesn't need to check SNS for them.
00:15Notifications can be sent to a number of recipients, including HTTP for servers, email, and SMS
00:21text messages to phones.
00:24Amazon SNS is intended for small notifications, and the features of this service reflect that.
00:29Starting with a maximum size of messages which is 64 kilobytes, the restriction is structured
00:35to be optimal for simple text and tiny structures.
00:38Each subscriber must confirm the subscription to notifications, which can be a bit cumbersome,
00:42especially as this includes servers, which need a mechanism for confirming the subscription
00:47in addition to being able to receive a notification.
00:50The notifications can be sent in one of two formats: plain text for emails and SMS, and JSON
00:55encoding, which is optimal for server communication.
00:58SNS is good for internal status notifications and triggered events, such as a payment problem
01:04being reported to an administrator, or server triggers.
01:08It's not good for mass emailing users due to subscription confirmations, formats and
01:12other limitations.
01:14The Simple Notification Service has its own set of terminology, but in comparison to other
01:19services, it's fairly small.
01:20A topic is the name of a group of subscribers.
01:24Topics are typically named for the subject in the notifications or some sort of underlying
01:28event type like server failures.
01:31As mentioned before, clients can subscribe to a topic.
01:35A topic owner, the account that created the topic, can directly subscribe clients.
01:40Regardless of how they subscribe, the client must always confirm the subscription.
01:45Amazon SNS subscriptions are interesting as it really highlights how these messages are
01:50not intended for end-users.
01:52Subscribers to SNS notifications need to specify the protocol and endpoint.
01:57The protocol can be things like HTTP, email, SMS, and others, while the endpoint is the
02:02recipient address; this could be an email address, a URL, or an SMS number.
02:08When the subscription request is received, a confirmation is sent; the explicit opt-in happens
02:12by replying or clicking a link.
02:15If unconfirmed, no notifications will be sent and an unconfirmed subscriber will be removed in three days.
02:22Notifications are not archived so they can't be reviewed later, so take that into consideration
02:27when using the service.
02:28There are a number of different steps in the SNS workflow, so I'll go over them at a high level.
02:33First, the topic is created.
02:36A recipient subscribes and confirms their subscription.
02:40Notifications are then published which trigger SNS to deliver the notifications.
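For reference, the publish step looks roughly like this with the SDK for PHP version 1. The topic ARN is a made-up placeholder; copy the real one from the topic details shown in the console:

    <?php
    // Minimal sketch: publish a notification to an SNS topic (AWS SDK for PHP v1).
    require_once 'AWSSDKforPHP/sdk.class.php';

    $sns = new AmazonSNS();

    // Placeholder ARN -- use the Topic ARN shown on the topic's detail page.
    $topic_arn = 'arn:aws:sns:us-east-1:123456789012:watermark-bad-upload';

    $response = $sns->publish($topic_arn, 'There was a bad upload from IP...');

    if ($response->isOK()) {
        echo "Notification sent\n";
    }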
02:44Now that I've provided some context into how Amazon SNS works, let's configure it for use
02:49with the watermarking application.
02:52Switching to the browser, go to the Management Console.
02:55From the menu, go to Services > App Services > SNS. As this is the first time that I have
03:02used this system, there are no topics created.
03:05I can see the number of SNS topics that I managed along with a number of subscriptions
03:09in the upper right-hand corner.
03:10I am going to create a new topic for the watermarking application.
03:14So click on Create New Topic.
03:17Topic names are up to 256 alphanumeric characters, meaning A through Z and 0 through 9, with
03:23hyphens and underscores also allowed.
03:25I am going to use SNS to notify myself when a bad file is uploaded.
03:28So I'll give it a logical name, watermark-bad-upload.
03:35For the Display Name, I'll give it a descriptive short name, WM for watermark, BAD for problem
03:42and UL for upload.
03:44Remember, these notifications aren't for end-users, so briefness is just fine.
03:49When ready, click Create Topic.
03:53Basic information about the topic is now shown.
03:57Topic Amazon Resource Name, or ARN, is a unique identifier that is used programmatically.
04:02Topic owner, region and display name are self-explanatory.
04:06The list of subscribers which is currently empty is shown below.
04:11Clicking on all topic actions, a number of options are shown.
04:16Publish sends messages which I'll demonstrate shortly.
04:20Topic Policy gives control over who can publish and subscribe to the topic.
04:24The topic delivery policy controls the number of retries, seconds of delay and rate limiting.
04:31It's good to have an audience for notifications even if it's an audience of one, so click
04:35Create New Subscription.
04:38This form has two options, the first, protocol, determines how messages will be delivered.
04:43I want to send myself a text message whenever there is a non image uploaded, so I'll select
04:47SMS as the protocol.
04:50The endpoint indicates where the messages are going, which will be my personal cell phone
04:54number; click Subscribe.
04:59A text message will be sent from Amazon to confirm the subscription.
05:03Follow the directions which are to reply with a text saying yes and the display name.
05:07So it will be yes WMBADUL.
05:12Once confirmed, close the message, click Refresh on the AWS page.
05:17An explicit subscription ID is shown along with a protocol endpoint and subscriber.
05:23To test the system, click Publish to Topic.
05:29The first input is the subject which should be left off for SMS messages.
05:33If it's included, it will be shown as part of the message.
05:36The larger box for message is self-explanatory, but the option beneath allows different messages
05:41per protocol, such as a longer more detailed message via email and a simple summary as text.
05:47Leave the subject blank and for the message, I'll type "There was a bad upload from IP..."
05:57Click Publish Message, it will show a confirmation and just click Close.
06:02In a moment, a message will be sent.
06:06SMS messages will append the display name followed by the message itself.
06:10Now that I have the topic, I can perform the final piece of configuration.
06:13I am going to switch to the terminal with a remote connection and reconnect if necessary.
06:18Then I'll update the configuration: sudo nano -w /var/www/html/config.inc.php, and for the
06:30SNS_TOPIC we'll enter watermark-bad-upload, then exit and save.
06:39The notification will now be published if someone uploads a non-image file to the watermarker.
06:44That's not practical in the long run, but it's good for testing.
06:47I have all the services necessary for the watermarker, so it's time to put it all together.
Putting it all together
00:00Throughout this chapter, I've been assembling a suite of services in order to build an image
00:04watermarking application.
00:06Now that they've been fully configured, I'll demonstrate the result.
00:10In the browser, navigate to the DNS name of the EC2 server.
00:15Make sure you reload the page.
00:17All four configuration checks should now be shown in green.
00:20If not, go back to the correlating segment and follow the directions at the end.
00:24In the menu at the top, click Show.
00:27This will show all of the watermarked images that have been both uploaded and processed.
00:31Right now, we don't have anything to show, click Upload.
00:35This is a very simple interface for uploading an image.
00:38Click Choose File, or Browse depending on the browser, and select an image to be watermarked.
00:43So I'm going to go into my Downloads folder and I have the two exercise files for the
00:47course, which are Beach.jpg and NotAnImage.text.
00:51I'm going to select the Beach.jpg, click Upload Image.
00:58Upon completion, there should be four success messages: Uploaded image to Amazon S3, Item
01:04added to Amazon SimpleDB, Filename added to SQS queue for processing and Uploaded file
01:09metric added to CloudWatch.
01:11In a different tab, go to the Management Console then click on S3, click on your bucket and
01:20the newly uploaded file should be shown.
01:23Go over to SdbNavigator and click Run query.
01:29An item with a watermark value of n along with a height and width will be shown.
01:34Returning to the Management Console, go to SQS; the images-to-watermark queue has a message available.
01:42There are a lot of individual components to keep track of, and AWS provides facilities
01:46to keep track of how each service is operating.
01:49Amazon CloudWatch automatically monitors AWS resources out of the box meaning no additional
01:54configuration is required.
01:55It can also record custom metrics, which I've added to the watermarking application to demonstrate.
02:01It does take a couple of minutes before the metrics are reflected in the interface.
02:05So if a metric isn't immediately visible, wait a few minutes and try again.
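Those custom metrics are recorded through the CloudWatch API. Here's a minimal sketch of that call with the SDK for PHP version 1; the namespace and metric name are illustrative guesses rather than the application's exact values:

    <?php
    // Minimal sketch: record a custom metric in CloudWatch (AWS SDK for PHP v1).
    require_once 'AWSSDKforPHP/sdk.class.php';

    $cw = new AmazonCloudWatch();

    $response = $cw->put_metric_data('watermark', array(
        array(
            'MetricName' => 'UploadedFiles', // the metric filtered for in the console below
            'Value'      => 1,
            'Unit'       => 'Count'
        )
    ));

    if ($response->isOK()) {
        echo "Uploaded file metric added to CloudWatch\n";
    }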
02:10Go to CloudWatch which is found in Services > Deployment & Management > CloudWatch.
02:18There are a number of metrics available out of the box and the application also logs a
02:21couple of metrics on its own.
02:23Click View Metrics; EBS, EC2, SNS, and SQS metrics are shown.
02:37To see details, click on NumberOfMessagesSent under the SQS queue.
02:43I can see that there is activity.
02:45I can also change the time range by clicking on Zoom to only see what's happened in the last hour.
02:51I can filter the metrics by using the search bar at the top which I'm going to do now.
02:55Type watermark and click Search. Clicking on the row for UploadedFiles, I can see that
03:04I uploaded an image.
03:06Switching back to the watermarking application, click Process. It should show the name of
03:15the image being processed and our message receipt handle, then a series of success
03:19messages: image downloaded from S3, watermarked image uploaded to S3, message deleted from
03:26SQS, item updated in Amazon SimpleDB, and the Processed file metric added to CloudWatch.
03:33If you go to Show, the image that I uploaded is shown processed with the watermark on it.
03:40Now I'm going to test notifications, so go back to Upload, then Choose File and then
03:47I'm going to upload NotAnImage.text, click Open and then Upload Image.
03:54This time an error message will be shown and a notification will be sent via Amazon SNS.
03:59Throughout this chapter, I've explored a number of major and common application platform services.
04:05I configured the software development kit with AWS credentials, then I stored objects in
04:10Amazon S3, and I created items with attributes in Amazon SimpleDB for record storage.
04:16I managed the application workflow using the Simple Queue Service.
04:21On a bad upload, notifications were sent with the Simple Notification Service.
04:25And finally, I put everything together and demonstrated how the individual pieces fit
04:29within the watermarking application.
04:32Given the variety of options available, a good question is whether or not AWS is a good
04:36fit for your needs.
04:37I'll explore that in the final chapter along with where to go from here.
Conclusion
Determining if AWS is a good fit
00:00Amazon offers a wide variety of web services from standalone application components to
00:05enterprise grade parallel data processing.
00:07However, just using AWS isn't going to auto-magically fix your problems.
00:12Depending on your needs, it may be a square peg in a round hole; not a good fit.
00:17A good exercise is to determine at a high level what your actual needs are, then try
00:21to map them to the available services.
00:25Their pricing strategy, which in general is pay for what you use, offers flexibility which
00:29is especially good for resources that are only needed for short periods of time.
00:34Consider the overhead of purchasing, storing and maintaining hardware as well.
00:39CloudWatch also offers alerts on billing thresholds, which can be a good canary in a coal mine
00:43if a resource starts being utilized more than expected.
00:47Some AWS services can be used independently like using S3 for file storage.
00:51But if I were to use the relational database service with a remote web server, performance
00:56would really suffer.
00:57While this doesn't mean it's all or nothing, consider the service interdependencies and
01:01performance, as a transition from an existing solution to AWS may be more involved.
01:07Using Amazon Web Services and other Cloud service providers requires multifaceted trust,
01:13in particular that your proprietary and confidential code and data remains private, secure and reliable.
01:19With that said, Amazon Web Services is a well-established and known quantity.
01:24As an example, one of the available regions is the AWS GovCloud.
01:28It was designed for US government agencies and their clients to address regulatory and
01:32compliance requirements.
01:33If it's good enough for the US government, it might be an option for you.
01:37While I don't like using this phrase, cloud services do represent a paradigm shift.
01:42Some may find resistance within their own organizations from both those who are operating
01:46on stereotypes and aren't really familiar with cloud services, and those who have a deep
01:50knowledge of the risks and benefits of working with cloud services and may have had a bad experience.
01:56For example, one of my clients who provides cloud services was criticized in an internal
02:00security audit for using cloud services.
02:04This type of policy inconsistency is surprisingly common in large organizations.
02:09As I learned from a previous employer, you can't turn a cruise ship on a dime, but you
02:12can steer it gradually until it's facing in the opposite direction.
02:17In short, this means change can be difficult for a large organization, but slow and steady
02:21persistence will see positive results.
02:24Finally, take some time and evaluate other solutions as Amazon is not the only Cloud
02:29service provider.
02:30Ultimately, you are the one who's in the best position to be able to determine what service
02:34provider, if any, is able to fulfill your needs.
02:37Research, comparison, and due diligence will save you potential headaches and money.
02:43Now if you wanted to continue learning about Amazon Web Services, what are some directions
02:46you could take?
Where to go from here
00:00This course offered a broad survey of Amazon Web Services, but didn't cover every facet
00:05to its fullest extent.
00:07Reading the friendly manual is a great way to learn more.
00:10I'm going to share three useful links.
00:12The first is for official documentation, which is a collected gateway to each of the services'
00:16docs; the second, for articles and tutorials, walks through a number of examples
00:21of each of the services.
00:23The final URL is for code examples and software developer kits for integrating AWS into applications.
00:30I recommend taking a moment and reading through the source code of the watermarking application,
00:34even if you're not a PHP developer.
00:36I made a point of clearly commenting the code to explain what's going on and why and wrote
00:41it in a procedural step-by-step style to improve readability.
00:44Feel free to use the logic in your own applications as well.
00:48The CloudWatch monitoring system also provides configurable alerts that react to metrics
00:52that you specify.
00:54Try creating some alerts like ten uploads in a minute triggering a notification to you.
00:59Another way to learn more is just by doing it.
01:02Set up your own web server and deploy either your own application or an open-source application.
01:07Try using the Amazon Relational Database Service instead of a self-managed MySQL.
01:11Finally, when you're done experimenting with the watermarking application, remember to
01:15turn it off or destroy it completely using the Management Console.
01:18This includes the EC2 server, the S3 bucket, the SimpleDB domain (which you'll have to
01:24delete using the browser client), the Amazon SQS queue, and the SNS topic.
01:30This way you're not consuming resources that aren't actively being used.
01:34As I've learned, a clean site is a happy site.
Farewell
00:00Amazon Web Services is a fascinating topic with a lot of depth.
00:04There are areas that I haven't even touched like parallel processing, which offer interesting
00:08and extreme solutions.
00:10In my lifetime, computer science and hardware have improved and grown so much and continue
00:15to change at an amazing pace.
00:17By combining these innovative technologies in new ways, cumbersome solutions become lightweight,
00:22modular, and elegant which allows focus on newer, greater challenges.
00:26I appreciate your time and I hope you enjoyed watching this course as much as I enjoyed
00:30writing it and recording it while working with the team at lynda.com.
00:34Please take a moment to provide feedback through the course homepage on lynda.com.
00:38Thank you.