1. Introduction

| Welcome | 00:04 | Welcome to Interaction Design
Fundamentals. I am David Hogue, and in this
| | 00:08 | course, we are going to explore the
essential principles and best practices for
| | 00:12 | designing better and more effective
interactions for nearly any interface or device.
| | 00:17 | We will start by taking a glimpse back
at the origins of interaction design, and
| | 00:20 | move to the present day, with an
introduction to the tools and techniques used by
| | 00:24 | today's interaction designers.
| | 00:27 | From there, we'll build a foundation
with the five essential principles of
| | 00:30 | interaction design, and get a better
understanding of how people perceive,
| | 00:34 | process, understand, learn, and
remember information in digital environments.
| | 00:39 | People's needs and expectations
strongly influence their behaviors when
| | 00:43 | interacting with devices,
| | 00:45 | so we'll explore how to create
successful designs, while keeping motivation
| | 00:49 | and context in mind.
| | 00:50 | We'll talk about how the structure and
organization of content and functionality
| | 00:55 | facilitates interaction,
| | 00:57 | discuss how our sensory systems
influence our perceptions, and how we can craft
| | 01:01 | designs to help guide attention, and
enhance memory. And through it all, we will
| | 01:06 | show examples, and offer explanations
of how to apply this information to your
| | 01:10 | own design methods and techniques.
| | 01:12 | So let's get started with
Interaction Design Fundamentals.
| Who is this course for?| 00:00 | So, who is this course for?
| | 00:02 | Well, it's for students,
designers, and developers;
| | 00:05 | anyone who wants to better understand
how people think, why we behave the way we
| | 00:09 | do, and how to create interfaces
that meet our needs more effectively.
| | 00:13 | We're not satisfied with
good enough; we strive for the best.
| | 00:17 | This is not a course that teaches
how to create design documents, like
| | 00:21 | wireframes, icons, and
infographics, or how to use design software.
| | 00:25 | This course focuses on how to approach
interaction design, better understand how
| | 00:30 | people think, and how to make
better interaction design decisions.
| | 00:33 | We will ask questions, discuss design
principles, and show how to create more
| | 00:38 | effective, and more
considerate interfaces and devices.
| | 00:41 | We will discuss interactivity from
multiple perspectives, but nearly all of it
| | 00:45 | will be in the context of psychology,
to help us better understand how people
| | 00:49 | think, and act, so that our designs
truly work well, and meet their needs.
| | 00:54 | Although interaction designers define
the structure and behavior of interactive
| | 00:58 | systems and devices, we should
not focus solely on the interface.
| | 01:02 | Interaction design is really about
the behavior of people. Too many of the
| | 01:07 | devices, software, and tools we use
complicate, obstruct, or delay our tasks, and
| | 01:13 | in some cases, even thwart our efforts completely.
| | 01:16 | We should strive to craft interfaces,
systems, and devices that enhance
| | 01:20 | our productivity, facilitate our
actions, meet our needs, create value, and
| | 01:25 | even provide enjoyment.
| | 01:27 | So let's get started!

2. Exploring Interaction Design

| What is interaction design? | 00:00 | Interaction design is easier to define
by what it isn't than by what it is.
| | 00:05 | Many people think of interaction design
as rollovers, pop-ups, transitions, and
| | 00:09 | animations, but it's actually a
complex and wide ranging field that covers
| | 00:13 | nearly all aspects of human
cognition, emotion, and behavior.
| | 00:17 | There are also many different job
titles and terms we may encounter, such as
| | 00:21 | interaction designer, interactive
designer, and experience designer.
| | 00:25 | Although there may be subtle
differences in our job descriptions and the tasks
| | 00:28 | we undertake, in the end, we're all
still focused on creating engaging
| | 00:32 | interactive experiences
between people and devices.
| | 00:36 | Interaction design is also more than
just drawing the interface, and showing how
| | 00:40 | to gather and display information.
| | 00:42 | It's about designing for the entire
interconnected system: the device, the
| | 00:46 | interface, the context, the
environment, and the people.
| | 00:50 | If you really want to get specific,
according to the Interaction Design
| | 00:53 | Association, interaction design, often
abbreviated IxD, defines the structure and
| | 00:59 | behavior of interactive systems.
| | 01:01 | Interaction designers strive to create
meaningful relationships between people
| | 01:06 | and the products and services that they
use, from computers, to mobile devices, to
| | 01:10 | appliances, and beyond.
| | 01:12 | But interaction design is not a new field.
| | 01:15 | Let's go back to the beginning of
personal computers and Desktop software.
| | 01:19 | One of the first proponents of
interaction design in the 1980s, Bill Verplank,
| | 01:23 | focused on designing for people; for
their physical and emotional needs, and
| | 01:28 | increasingly, for their intellect.
| | 01:30 | He consistently considered three central
questions about doing, feeling, and knowing
| | 01:35 | to help us focus our designs.
| | 01:38 | Gillian Crampton Smith established the
first program in interaction design at
| | 01:42 | the Royal College of Art in London, and
she says that interaction design is not
| | 01:46 | just about making us more efficient at
work. We use devices in all aspects of
| | 01:51 | our everyday life,
including play and entertainment.
| | 01:54 | And Bill Moggridge, also a pioneer of
interaction design, has taught us about
| | 01:58 | user-centered design, and the value of
prototyping to test our ideas with real people.
| | 02:03 | So you see, it's not just about the
interface. Interaction design is about
| | 02:08 | the behavior of people.
| The origins of interaction design| 00:00 | Let's take a brief trip back in
time to explore the origins of modern
| | 00:04 | interaction design for the
devices in our lives today.
| | 00:07 | This is not a trip into recent history.
Rather, we can look back over the history
| | 00:11 | of humankind to see how we created
tools and methods for recording information,
| | 00:16 | such as painting and writing,
manipulating information, such as triangulating
| | 00:21 | and calculating, and communicating,
such as printing and transmitting.
| | 00:26 | Each of the following historical
moments has led to where we are today.
| | 00:29 | We are able to record, manipulate,
and share information about anything, at
| | 00:33 | any time, with anyone.
| | 00:35 | Almost two and a half million years ago, we began
manipulating our environment with the
| | 00:40 | first stone tools. Some of the oldest
ever found are from the Olduvai Gorge. As
| | 00:45 | hunter-gatherers, we relied on these
stones just to stay alive, but spoken
| | 00:50 | languages began to emerge and evolve.
| | 00:52 | About 100,000 years ago, we started to
communicate more effectively with each other.
| | 00:57 | We relied on oral histories and personal
contact to communicate information, but
| | 01:02 | our inner creativity, and the need to
share information with others, or even just
| | 01:07 | record our shared stories and
experiences, led us to begin drawing, and about
| | 01:11 | 16,000 years ago some particularly
talented ancestors recorded their stories for
| | 01:17 | us on the walls of Lascaux cave.
| | 01:19 | It would take nearly 14,000 years for
literal drawings and pictures to become
| | 01:24 | the symbols, letters, and words that
would make up the first written languages.
| | 01:29 | Sumerian cuneiform text in clay,
Egyptian hieroglyphics on papyrus, and
| | 01:34 | Babylonian maps, are among
our oldest written records.
| | 01:38 | Around the same time, we see our first
tool for manipulating information: the
| | 01:43 | abacus, which made math easier.
| | 01:45 | Once we began writing down information,
drawing maps of the world around us, and
| | 01:50 | making complex calculations easier,
our knowledge expanded rapidly.
| | 01:55 | Around 300 years before the common era,
or BCE, the Royal Library of Alexandria
| | 02:01 | was founded as a repository of information.
| | 02:04 | We were recording so much information
that we needed to organize, store, and
| | 02:08 | create a system for making
it available and findable.
| | 02:12 | Our mechanical skills quickly
improved. Soon we were measuring time and
| | 02:16 | distance with clocks and astrolabes, as
well as calculating the dates of future
| | 02:21 | astronomical events with the Antikythera
mechanism; our first mechanical computer.
| | 02:27 | Our initial measurements of time
and distance were crude, but our clocks,
| | 02:31 | compasses, and calendars improved.
| | 02:33 | Our knowledge of the world continued
to grow rapidly, and we could no longer
| | 02:37 | efficiently capture and store information
just by hand writing and hand copying books.
| | 02:42 | The introduction of Gutenberg's
printing press in 1440 suddenly made it
| | 02:47 | possible to record more information, and make
it available to more people than ever before.
| | 02:52 | Our exploration of the world, the
development of the scientific method, and the
| | 02:57 | abundance of information being created
required even more elaborate and capable
| | 03:02 | mechanical calculators, like the slide
rule, and the Stepped Reckoner, to help us
| | 03:06 | make sense of it all.
| | 03:08 | Charles Babbage's Difference
Engine in 1822 was our first step toward
| | 03:12 | programmable computers.
| | 03:14 | And in the decades after this, we saw
the invention of the first photographs
| | 03:18 | for capturing a realistic visual record,
as well as both the telegraph, and the telephone.
| | 03:24 | We no longer needed to physically
transport books, or personally travel, to
| | 03:28 | access information;
| | 03:30 | we could now transmit information
nearly instantly over vast distances.
| | 03:35 | Photography, telegraphy, and telephony
advanced quickly, and in 1897, the first
| | 03:41 | wireless transmissions over
radio were made by Nikola Tesla.
| | 03:45 | The inventions and advancements of the
industrial age generated huge amounts of
| | 03:49 | visual, verbal, and written information,
and by 1910, Paul Otlet recognized the
| | 03:54 | need for a system to organize all of it,
so he created the Mundaneum to gather
| | 03:59 | and classify all of the world's knowledge.
| | 04:02 | The Mundaneum has been called the
forerunner of the Internet, because of its
| | 04:05 | attempt to connect everything
meaningfully. Meanwhile, technology progressed
| | 04:10 | rapidly, and our mechanical calculators and
computers began to be replaced with electronics.
| | 04:15 | First came vacuum tubes, which made
television signals possible, and which
| | 04:19 | powered the world's first
electronic computers during World War II.
| | 04:23 | The war also pushed many technological
advancements, and changed the way we
| | 04:27 | thought about computers,
and what they can do for us.
| | 04:30 | In 1945, Vannevar Bush published an
important essay, As We May Think, in which
| | 04:36 | he proposed that we need to create a
machine, which he called the Memex, that
| | 04:39 | would become our collective memory,
store information, and make it accessible to
| | 04:44 | us, so that we would be less likely
to repeat the mistakes of our past.
| | 04:48 | He wanted to transform the information
explosion into a knowledge explosion, and
| | 04:54 | he may have sowed the seeds of the Internet.
| | 04:57 | In 1947, the transistor was introduced, and
the world of electronics began to shrink.
| | 05:02 | Televisions began to appear in our
homes, and radios could be carried with us.
| | 05:06 | In the 1950s, we sent our first manmade
objects into space, and just a few years
| | 05:11 | later, in 1962, Telstar started
the era of satellite communications.
| | 05:16 | We could now transmit even more
information, more quickly, across longer
| | 05:21 | distances than ever before.
| | 05:23 | The late 1960s were an important
period, because Douglas Engelbart gave his
| | 05:28 | Mother of All Demos, in which he showed
us his vision of the modern computer,
| | 05:32 | complete with a mouse.
| | 05:34 | And in 1969, the Advanced Research
Projects Agency Network, also known as
| | 05:39 | ARPANET, was launched.
| | 05:41 | It would go on to become the
basis for today's Internet.
| | 05:44 | The arrival of the Intel
microprocessor in 1971 ushered in the next era
| | 05:50 | of miniaturization.
| | 05:51 | And it wasn't long before we saw the first
digital watches, and computer video games.
| | 05:56 | We could now record vast quantities
of information, process it, and make it
| | 06:01 | available all around the world.
| | 06:04 | Advances in technology
moved very quickly from here.
| | 06:07 | The Xerox Alto was the first personal
computer meant for businesses, and the
| | 06:12 | Altair 8800 was the first computer
sold as a do-it-yourself kit to hobbyists.
| | 06:17 | This showed that there was a home market.
1977 was a big year, because three of
| | 06:23 | the first truly personal computers, as
well as the Atari 2600 gaming system,
| | 06:28 | were all introduced.
| | 06:29 | These early computers had command
line interfaces, and often needed to be
| | 06:33 | programmed, so their appeal
and usefulness were limited,
| | 06:37 | but computers were in the home.
| | 06:40 | Just a few years later, the mobile phone
was introduced to the public, and in 1984,
| | 06:44 | the Apple Macintosh arrived.
| | 06:47 | Although the Xerox Alto had the first
graphical user interface, the Macintosh
| | 06:51 | was the first computer to bring that
interface into the home. Suddenly computers
| | 06:56 | became easier to use and understand.
| | 06:58 | By the late 1980s, computers were
becoming commonplace, portable gaming devices
| | 07:03 | were everywhere, and Tim Berners-Lee
had drafted his proposal for an
| | 07:07 | interconnected network of computers for
sharing information based on hypertext.
| | 07:13 | On August 6, 1991, CERN launched the
world's first Web site, based on the proposal
| | 07:19 | of Tim Berners-Lee, and the Web was born.
| | 07:23 | Mosaic, the first Web browser, was
launched in 1993, and just two years later, the
| | 07:28 | browser wars began, as millions and
millions of people got on the Web.
| | 07:33 | In the meantime, mobile phones merged
with digital cameras, personal computers
| | 07:38 | became portable, commercial GPS data
became available, and digital music
| | 07:43 | players were everywhere.
| | 07:45 | We could talk, send text messages,
capture photos, locate ourselves, get
| | 07:50 | directions, and take our
music and games with us.
| | 07:53 | Suddenly we had the ability to record,
manipulate, and communicate and share
| | 07:58 | information, with the
electronic tools in our pockets.
| | 08:02 | As these devices evolved and improved,
we also saw a shift from indirect actions,
| | 08:07 | such as command line interfaces, and
using computer mice, to more direct action,
| | 08:12 | such as touching and moving
the icons directly on a screen.
| | 08:16 | And most recently, we've seen the
introduction of spatial gestures. We no longer
| | 08:20 | even need to use a mouse, or touch a
screen, to interact with our devices.
| | 08:24 | It's been a long 2.4 million years, and
we've come a long way, but in the end, we
| | 08:30 | are still just trying to record,
manipulate, and share information.
| | 08:35 | The tools are different, and continually
changing, but our needs are the same,
| | 08:39 | and as interaction designers, our
goal is to make certain that these tools
| | 08:44 | never get in the way.
| What interaction designers contribute| 00:00 | Interaction designers work on
interdisciplinary teams of various sizes, and
| | 00:05 | collaborate with nearly every
professional involved in the creation of the
| | 00:08 | interfaces and devices all around us.
| | 00:11 | I often say that if you want an
instant subject matter expert, just add an
| | 00:15 | information or interaction
designer to the project,
| | 00:17 | because it is our responsibility to
understand the business requirements, the
| | 00:21 | user's needs and expectations,
the technological opportunities and
| | 00:25 | constraints, the purpose of the device or
interface, and the context in which it will be used.
| | 00:31 | We also need to be able to quickly and
frequently change our perspective on a
| | 00:35 | project, from the high-altitude strategic
view, to the detailed tactical view, and back.
| | 00:41 | This includes answering questions
from both the strategic and the tactical
| | 00:44 | perspectives, such as, why does the
device or interface exist? What's the
| | 00:48 | business model? What value does it
provide? What are the users' needs? How does
| | 00:53 | it work? What's the design solution?
| | 00:56 | So the interaction designer brings
diverse skills to collaborate in constantly
| | 01:00 | changing, challenging, and
innovative environments.
| Understanding the interaction design process| 00:00 | How do we work?
What are our processes and methods?
| | 00:04 | There are many ways to accomplish our
design goals, and different teams, agencies,
| | 00:08 | and companies will have their own methods.
| | 00:10 | Some still work in a traditional
waterfall process, where each discipline
| | 00:13 | completes their contribution
before the next discipline begins.
| | 00:17 | Others are influenced by the agile
approach, and the entire team tackles one
| | 00:21 | component of the project
at a time in rapid sprints.
| | 00:24 | And there are also a myriad of hybrid
approaches that leverage iteration,
| | 00:28 | collaboration, and testing.
| | 00:30 | There is no single way to achieve strong
designs and the desired outcomes, but at
| | 00:35 | the core, there are some processes in
which nearly all interaction designers
| | 00:39 | engage: definition, research, ideation,
design, prototyping, observation, and
| | 00:46 | iteration as needed.
| | 00:48 | Let's go through each of
these, starting with definition.
| | 00:52 | We must define the project, the
product, and the design task at hand, and
| | 00:57 | understand what we are
trying to solve or achieve.
| | 01:00 | How we define a problem determines
if and how we are able to solve it.
| | 01:05 | Once we have defined the
project, then we begin our research.
| | 01:09 | This occurs in parallel with
nearly every step in our process.
| | 01:13 | We need to gather data to help us
define the product or the problem,
| | 01:18 | generate ideas and potential solutions,
inform our designs and prototypes, and
| | 01:24 | guide our iterations.
| | 01:26 | We use data to form the foundation of
our final product, and our initial research
| | 01:31 | gives us direction, but we
should always be seeking more data.
| | 01:34 | Research never really ends.
| | 01:37 | Once we understand the problem, we need to
generate possible solutions through ideation.
| | 01:43 | We think, collaborate, sketch,
ponder, wonder, dream, and draw.
| | 01:48 | We generate many possibilities,
select a few, and pursue only the best.
| | 01:53 | It has been said that Apple
encourages a process where their teams generate
| | 01:57 | 10 viable ideas, select the top three,
and refine them, then choose only the
| | 02:02 | best one to pursue.
| | 02:05 | They throw away 90% of their
ideas in order to achieve their best.
| | 02:11 | In design and prototyping, we create
models of our possible solutions to test
| | 02:16 | them, ensure they are complete, and
confirm that they help people solve problems,
| | 02:20 | complete tasks, and achieve goals.
| | 02:23 | Our designs range from low fidelity sketches,
to high fidelity wireframes, and visual comps.
| | 02:28 | And our prototypes vary from simple
objects, or click through sequences, to
| | 02:33 | elaborate functional
interfaces, and mock devices.
| | 02:36 | But most importantly, we take the
time to think through the steps of the
| | 02:40 | interactions, to consider the
presentation of the information, and to validate
| | 02:46 | the purpose and value of the product.
| | 02:48 | No design is perfect the
first time it is drawn or modeled.
| | 02:52 | That's where iteration comes in.
| | 02:54 | No matter how carefully we try to
consider the needs and perspective of others,
| | 02:58 | we can't anticipate everything.
| | 03:00 | We're not designing for ourselves, and
when we observe others interact with our
| | 03:05 | designs and prototypes, we inevitably
find opportunities to improve the design,
| | 03:10 | and so we iterate, evolving the design
as we gather data, making it better with
| | 03:15 | each round until it is ready.
| | 03:17 | When a product, device, or interface is
ready to launch, we are still not finished.
| | 03:22 | Iteration is more than just refining the design.
| | 03:25 | It also includes monitoring the
performance, usability, accessibility, and the
| | 03:30 | success of a product after it's been released.
| | 03:33 | We gather data, speak with and observe users,
gather usage and error data, and look
| | 03:39 | for opportunities to modify,
improve, and enhance that product.
| | 03:43 | Then the design process
continues again, and again.

3. The Interaction Designer's Toolbox

| Tools and techniques used by interaction designers | 00:00 | Interaction designers use many tools,
and we have a vast array of techniques to
| | 00:04 | help us generate and identify
potential solutions. We are pragmatic.
| | 00:08 | We apply our skills, and select our tools,
based on the problems we need to solve,
| | 00:13 | the solutions we need to communicate,
and the people with whom we are working.
| | 00:16 | We often start very low-tech,
with pencil and paper,
| | 00:19 | sketchbooks, sticky notes, note cards, and
even whiteboards to help us understand,
| | 00:24 | define, and frame the problem.
| | 00:26 | Early visualizations with diagrams,
models, and flows help us identify potential
| | 00:31 | directions, missing information,
and the most appropriate next steps.
| | 00:35 | And these early sketches can also help
develop consensus about what problems we
| | 00:39 | are solving, and what goals
we are trying to achieve.
| | 00:42 | As our designs progress, we typically
need an increasing level of detail and
| | 00:47 | fidelity. Pen and paper sketches can
capture the concept, but eventually we
| | 00:51 | need to put pixels on screens.
| | 00:54 | There are many design and diagramming
tools available, and a growing number of
| | 00:57 | Web-based tools may be used.
| | 00:59 | Choose tools that allow you to
work effectively and efficiently.
| | 01:03 | You should spend your time
thinking about solving problems.
| | 01:06 | As long as you're able to capture,
represent, and communicate your ideas and
| | 01:10 | design intentions effectively,
almost any tool can be valid.
| | 01:14 | Our problems and design challenges are
becoming increasingly complex, because
| | 01:19 | technology and people's
expectations are changing rapidly.
| | 01:23 | We need to go beyond simply
drawing our solutions, and create interactive
| | 01:27 | prototypes to validate our ideas.
| | 01:30 | We need to see our designs in use,
and whenever possible, we should put
| | 01:33 | prototypes in the hands of the people
who will be using the interface or device.
| | 01:38 | There are many tools to help us
bring the pixels to life, so choose those
| | 01:42 | that help you best capture the intent and
the experience of the design and the prototype.
| | 01:47 | Remember, you are evaluating the design
solution; not launching the product yet.
| | 01:52 | When we talked about interaction
design as an iterative process, we also said
| | 01:56 | that research and data gathering are
ongoing through design and prototyping.
| | 02:01 | There are many techniques for
gathering information to help us generate ideas,
| | 02:05 | and make design decisions.
| | 02:06 | We study existing products, we
observe people, ask questions based on these
| | 02:11 | observations, and finally, we test prototypes.
| | 02:15 | Some of the ideas and information come
from our own teams as we work together.
| | 02:19 | We brainstorm, we create personas to
better understand people, we conduct task
| | 02:25 | analyses to understand what they're
doing and how they work, we write scenarios
| | 02:29 | to better understand their situations,
and we uncover usability problems with
| | 02:34 | cognitive walkthroughs.
| | 02:36 | Additional information comes from
the people who will actually use the
| | 02:39 | interface or device.
| | 02:40 | We need to learn from real people,
with real needs, in real situations.
| | 02:45 | Watch what they're doing, and
ask them questions about it.
| | 02:48 | There are various methods of doing this,
from ethnography, to surveys, and focus groups.
| | 02:53 | Finally, we can gather information in
laboratory like settings, where we are
| | 02:57 | able to simulate realistic situations.
| | 03:00 | We can use paper, or interactive prototypes,
to test a design for usefulness and usability.
| | 03:06 | If an interface or device has already
launched, we can use data from the Web
| | 03:10 | analytics to evaluate the
performance of the current design.
| | 03:13 | We can even compare different design
options with A/B or multivariate testing.
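As a rough illustration of how A/B results feed design decisions, here is a minimal TypeScript sketch; the variant names and figures are placeholders for this illustration, not data from the course.

```typescript
// Minimal sketch: compare conversion rates from a simple A/B test.
// The visitor and conversion counts below are illustrative placeholders.
interface VariantResult {
  name: string;
  visitors: number;
  conversions: number;
}

const results: VariantResult[] = [
  { name: "Design A", visitors: 5000, conversions: 240 },
  { name: "Design B", visitors: 5000, conversions: 310 },
];

for (const variant of results) {
  const rate = (variant.conversions / variant.visitors) * 100;
  console.log(`${variant.name}: ${rate.toFixed(1)}% conversion`);
}
```

In practice you would also check that the difference between variants is statistically meaningful before committing to a design.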
| | 03:18 | This quantitative information can
be combined with the more qualitative
| | 03:22 | observations and conversations to help
us generate ideas for new solutions, and
| | 03:28 | choose the best designs.
| | 03:30 | Possible design solutions
may be discovered at any time.
| | 03:34 | Although we often begin with lower
fidelity methods, we don't need to start
| | 03:38 | with sketches, proceed to pixels,
then test prototypes. We might start with
| | 03:42 | reviewing Web analytics data on an
existing interface or device, and we're often
| | 03:46 | sketching ideas while observing
people who are working with prototypes.
| | 03:51 | Our tools and techniques can be mixed
and matched as needed to help us solve
| | 03:55 | the problems at hand.
| | 03:57 | When we expect to move in a linear
sequence through the design process, we only
| | 04:02 | restrict ourselves, and make it less
likely that we'll find the best solution.
| | 04:06 | So be flexible, choose your tools, and
adapt your techniques to help you generate
| | 04:11 | the best ideas, and achieve the optimal design.
| Documents created and used by interaction designers| 00:00 | We have various documents that range
from low to high fidelity, and which span
| | 00:04 | research, ideation, planning, and design.
| | 00:07 | There are many ways we capture,
represent, and communicate both the problem, and
| | 00:11 | the solution, and our design
documents help us tell a story.
| | 00:14 | We do not necessarily create all of the
possible design documents for each project.
| | 00:19 | For example, wireframes are created
for most projects to give a rough design
| | 00:24 | layout, but we may only occasionally
conduct a heuristic analysis for quickly
| | 00:28 | identifying usability problems.
| | 00:31 | Just because we have deliverables we
can create, does not mean that we need to
| | 00:35 | create them for every project.
| | 00:37 | We should have a flexible and adaptive
process; one that allows us to understand
| | 00:41 | the goals of a project, and select
the methods and documents that are most
| | 00:44 | appropriate for that project.
| | 00:46 | Our ideas and designs are captured in
living documents, because they continue to
| | 00:51 | be revised, improved, and
modified during the entire project.
| | 00:55 | Site maps may change late in a project,
when we find a better organizational
| | 00:59 | structure, or when new content becomes available.
| | 01:02 | Ongoing user research may help us
refine personas and scenarios, and wireframes
| | 01:07 | change after we gain
insights from prototype testing.
| | 01:10 | The Web is ever changing, and
our design documents reflect that.
| | 01:14 | Many of our documents serve multiple
purposes, but some of the documents that
| | 01:18 | help us capture information about the
design problem include surveys, analytics,
| | 01:23 | and competitive analyses.
| | 01:25 | Every design problem has multiple
facets, and some of the documents we use
| | 01:29 | to develop an understanding of the problem
include sketches, mental models, and flow diagrams.
| | 01:35 | Understanding the problem is
essential for identifying potential solutions.
| | 01:40 | Once we grasp the problem, and have
identified potential solutions, we need
| | 01:44 | to represent those solutions in ways that
make them easy to communicate and understand.
| | 01:49 | We use wireframes to draw
the layout of the screen,
| | 01:52 | storyboards to represent the
sequences of steps, and prototypes to test
| | 01:57 | the interactivity.
| | 01:58 | As our design solutions become
focused and optimized, we add a layer of
| | 02:02 | documentation for the functional
specifications, and remember, select the methods
| | 02:08 | and documents that are most
appropriate for your project.
| Professional resources| 00:00 | There are several professional
organizations whose memberships include
| | 00:03 | interaction designers. Depending on
your background, personal interests, and
| | 00:07 | possibly even where you work,
and the team you work with,
| | 00:10 | there are organizations that can
help you make connections within the
| | 00:13 | design community, pursue professional
training and development, and even find employment.
| | 00:18 | There are many conferences and
seminars of interest to interaction designers.
| | 00:21 | They are organized and offered by
professional organizations, agencies,
| | 00:25 | publishers, and businesses.
| | 00:27 | The list changes and grows all the time.
| | 00:29 | The size ranges from small -- only a few
dozen people -- to very large -- thousands of
| | 00:34 | people -- and they take place all around the world.
| | 00:36 | A few minutes of searching the Web can
help you find conferences and seminars
| | 00:40 | relevant to your work, or near you.
| | 00:43 | The Web is a very, very big place,
and there are more online resources for
| | 00:46 | interaction design than we
could possibly list here.
| | 00:49 | New sites appear, content constantly
changes as technology changes, new books
| | 00:55 | are published, and collections of
examples, tutorials, and techniques are always
| | 00:59 | being assembled and shared.
| | 01:01 | Find the sites you like, find
publishers you trust, and become involved
| | 01:05 | with online discussion groups to
help stay abreast of current topics and
| | 01:08 | trends in the field.
| | 01:09 | Here are just a few sites to get you started.
| | 01:13 | There are also a number of
publishers providing us with both printed and
| | 01:16 | electronic books and videos on
diverse topics from across the wide field
| | 01:20 | of interaction design.
| | 01:22 | You will be able to find introductory
books, specialized books, and academic
| | 01:25 | books filled with current research and theory.
| | 01:28 | Keep in mind that this field changes
quickly, so continue your studies to
| | 01:31 | remain current.
| Fields of study that underlie the work of interaction designers| 00:00 | Until relatively recently, most
interaction designers did not go to school,
| | 00:04 | major in, and earn a degree in interaction design.
| | 00:07 | Many of us have varied and diverse
backgrounds, and we often gravitated to the
| | 00:11 | field because our skills, experience,
knowledge, and interests prepared us to
| | 00:15 | think about interaction
design from different perspectives.
| | 00:19 | We may have worked in libraries, been
graphic designers, written code, or even
| | 00:23 | managed projects, but we all found
ourselves thinking about interaction design,
| | 00:27 | and helping solve design challenges, with many
of the documents and methods just discussed.
| | 00:33 | Today's formal educational programs, and
professional conferences and seminars,
| | 00:37 | recognize the importance of a
broad base of skills and knowledge,
| | 00:41 | and offer opportunities
for a wide range of training.
| | 00:45 | As working designers, we need to
pursue ongoing training and development.
| | 00:49 | We need to balance the benefits of
knowledge across a broad range of topics,
| | 00:53 | with the value of a deep and
more narrow area of expertise.
| | 00:57 | For example, my deepest training is
in psychology, but I have also studied
| | 01:02 | anthropology, research
methods, statistics, and design.
| | 01:05 | Interaction designers often have
what is described as T-shaped skills.
| | 01:10 | We have a broad range of skills, and
knowledge across several relevant fields,
| | 01:14 | and we go deep in one or two specific areas.
| | 01:18 | This allows us to understand
people and problems from different
| | 01:21 | perspectives, yet gives us the
knowledge and skills in specific areas to
| | 01:25 | solve those problems.
| | 01:27 | Specialized training and classes can
help us develop the specific skills
| | 01:30 | necessary to use our software tools,
understand the technology for which we are
| | 01:35 | designing, improve our written and
verbal communication skills, work efficiently
| | 01:40 | on teams, and participate
effectively in research.
| | 01:43 | Preparing for and maintaining a
career in interaction design might involve
| | 01:47 | training in many of these academic areas.
Our work spans such diverse and varied
| | 01:53 | fields and projects that we should
think about what knowledge and skills would
| | 01:57 | be relevant for the products
and experiences we want to create.
| | 02:01 | There is no single path to a
career in interaction design.
| | 02:05 | There are many ways to contribute, but
in the end, we can think about our skills
| | 02:09 | and knowledge helping us understand
three things: people, technology, and design.

4. Five Essential Principles of Interaction Design

| Consistency | 00:00 | There are many topics we could include
in discussions of interaction design, from
| | 00:04 | mental models, to the scent of
information, to direct action.
| | 00:08 | If we're going to establish a foundation
for solid interaction design practices,
| | 00:12 | we need to focus on a few core
principles that can be used to inform and guide
| | 00:16 | nearly all of our design decisions.
| | 00:18 | These five essential principles
of interaction design -- consistency,
| | 00:23 | perceivability, learnability,
predictability, and feedback -- help focus us on
| | 00:29 | crafting better solutions, experiences,
and designs. Let's take a deeper look at
| | 00:34 | each of these principles,
starting with consistency.
| | 00:37 | We are wired to be sensitive to
change. Changes attract our attention.
| | 00:42 | Think about camouflage; as long as the
person or animal remains still, we cannot
| | 00:47 | see them, but as soon as they move, and
their appearance changes relative to their
| | 00:51 | surroundings, their presence becomes obvious.
| | 00:54 | The same thing can happen across the
pages or screens of a digital experience.
| | 00:59 | As long as persistent elements
remain in the same place, retain the same
| | 01:02 | appearance, and adhere to the same
grid layout and proportions, we do not
| | 01:06 | direct attention toward them until we
need them. But when elements move and
| | 01:10 | change appearance without purpose
across pages or screens, it becomes
| | 01:14 | immediately noticeable.
| | 01:16 | Consistency applies not only to
appearance and placement, but also to behavior.
| | 01:21 | When a feature behaves differently
under similar conditions, it can cause
| | 01:25 | confusion, or when the same outcomes are
achieved through different interactions,
| | 01:30 | it forces people to learn
multiple ways to complete their tasks.
| | 01:34 | If people are asking why something is
the way it is, or why it is different,
| | 01:39 | then they've been distracted by the interface.
| | 01:41 | When designs are consistent in
appearance and behavior, people are able to focus
| | 01:46 | on their tasks, and they're not
distracted by surprising or unexpected changes.
| Perceivability| 00:00 | If people are not aware that the
opportunity to interact exists, we should not
| | 00:04 | be surprised when they do not interact.
| | 00:07 | Hidden interactions decrease usability
and efficiency. People should not need
| | 00:11 | to search for opportunities to interact.
They should not guess when interacting,
| | 00:16 | due to confusion or desperation.
| | 00:18 | We should be able to review an
interface, and identify where we can interact.
| | 00:22 | Interaction should not depend
on luck or random discovery.
| | 00:26 | Although much of our work is visual in
nature, we need to remember that some
| | 00:30 | people experience interfaces and
devices differently. Do not hide important
| | 00:34 | content and functionality behind
invisible interactions. Provide hints and
| | 00:39 | indicators; visual cues, such as
buttons, icons, textures, and even different
| | 00:44 | textiles let people know that
this may be clicked or tapped.
| | 00:48 | Meaningful labels help people using
screen readers differentiate between
| | 00:52 | content and interactivity.
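To make the idea of perceivable, meaningfully labeled controls concrete, here is a minimal TypeScript sketch; it is an editorial illustration rather than an example from the course, and the element id "save-control" is hypothetical.

```typescript
// Minimal sketch: expose a custom control to everyone, not just sighted mouse users.
// "save-control" is a hypothetical element id.
const control = document.getElementById("save-control");

if (control) {
  // Announce the element as a button to assistive technology.
  control.setAttribute("role", "button");
  // A meaningful label helps screen-reader users tell interactivity from content.
  control.setAttribute("aria-label", "Save your changes");
  // Make it reachable and operable from the keyboard as well.
  control.tabIndex = 0;
  control.addEventListener("keydown", (event) => {
    if (event.key === "Enter" || event.key === " ") {
      control.click();
    }
  });
}
```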
| | 00:54 | The good news is that people are
click or tap happy; they will attempt to
| | 00:58 | interact with anything they think
may produce a result or opportunity.
| | 01:01 | Often those interface elements have a
different appearance from the rest of the
| | 01:05 | interface, or they
have perceived affordances;
| | 01:08 | that is, the interface element
has characteristics of real objects.
| | 01:12 | Buttons look pressable, because
they resemble real, physical buttons.
| | 01:17 | Now, we've been talking about
displaying opportunities to interact in terms of
| | 01:21 | visibility, but we really should
be thinking about this in terms of
| | 01:25 | perceivability. We should always be
considering accessibility, and some people
| | 01:29 | may be using the interface by
voice, sound, or touch; not vision.
| | 01:34 | The important point is that no matter
how people are sensing and perceiving
| | 01:38 | the interface, if they do not recognize the
opportunity to interact, they will not interact.
| | 01:44 | We often speak in terms of visibility,
because so much of what we do in our
| | 01:48 | day to day design work is
based on the visual appearance.
| | 01:51 | Although we often talk about
providing visual cues for interaction
| | 01:55 | opportunities, we should be
thinking in a broader way.
| | 01:58 | How do we communicate the opportunity
to interact through multiple senses?
| | 02:02 | What does a button look like, feel
like, sound like? How do we make sure that
| | 02:08 | everyone can perceive the
opportunity to interact?
| | 02:11 | Good interaction design
must go beyond the visual.
| Learnability| 00:00 | Interactions should be easy
to learn, and easy to remember.
| | 00:03 | Ideally, people should be able to use an
interface once, learn it, and remember it forever.
| | 00:08 | Practically, people often need to use
an interface at least a few times before
| | 00:12 | they learn it, and then we hope that
they will remember what they have learned.
| | 00:16 | Even easy to use interfaces
require some degree of learning.
| | 00:20 | When we say that an interface is
intuitive, we really mean that it can be
| | 00:24 | learned quickly and easily.
| | 00:26 | The more we use an interface, and the
more we learn, the easier it becomes.
| | 00:31 | We can also take advantage of the
transfer of skill or knowledge. People bring
| | 00:35 | their experiences with them, and they
will attempt to interact with an interface
| | 00:39 | based on their experiences
with other similar interfaces.
| | 00:42 | As long as people perceive the
similarities among interfaces, they will transfer
| | 00:47 | and apply what they have learned elsewhere.
| | 00:49 | This is why design patterns
and consistency are so important;
| | 00:54 | people do not need to relearn what is
familiar, and what they already know.
| | 00:58 | It's faster to apply existing
knowledge than to learn something new.
| | 01:02 | We'll talk more about design patterns soon.
| Predictability| 00:00 | Good interaction design should set
accurate expectations about what will happen
| | 00:05 | before the interaction has occurred.
| | 00:07 | We should be able to show people an
interface and ask, before they interact, what
| | 00:11 | can you do here? Where can you
interact with this? What will happen if you do
| | 00:15 | that? What will be the result, or outcome?
| | 00:19 | If the opportunity to interact is
perceivable, if the context is meaningful and
| | 00:23 | sets the correct expectations, and if
the outcomes are predictable, then people
| | 00:27 | can answer those questions correctly.
| | 00:30 | We can set context and expectations by
either demonstrating what can be done,
| | 00:34 | such as animations, video, or overlays, or
by describing what can be done, such as
| | 00:40 | providing examples or instructions.
| | 00:42 | Providing previews of interactions and
outcomes helps people understand the
| | 00:46 | functionality and the constraints,
as well as the outcomes or results.
| | 00:51 | When people know what they can do, and
what will happen, they will interact
| | 00:54 | with the elements that are necessary to
complete their task, and accomplish their goals.
| | 00:59 | If we observe people interacting with
an interface or device, we can often
| | 01:03 | determine if they understand it, and
are able to predict outcomes or not.
| | 01:07 | Random interactions, guesses, trial
and error, failure to make consistent
| | 01:12 | progress toward a goal, and even the
failure to interact at all, typically
| | 01:16 | mean that the opportunities for interaction
are not perceivable, meaningful, or predictable.
| | 01:22 | And yes, behavior with a device or
interface differs between when we are task
| | 01:27 | focused, versus playing games.
| | 01:29 | Predictability is most
important when people are task focused.
| | 01:33 | Although we appreciate surprise and
mystery when playing, remember that in these
| | 01:37 | cases, unpredictability is
intentional, and an important part of the game.
| Feedback| 00:00 | Feedback provides acknowledgment of our
interactions, and information about their outcomes.
| | 00:05 | We use feedback to understand where we
are, our current condition or status,
| | 00:10 | what we can do next, and even
to know when we are finished.
| | 00:13 | Feedback comes in many forms: selected
states, highlights, dialogs, tool tips,
| | 00:18 | confirmation and error messages,
sounds, page refreshes, content updates,
| | 00:23 | etcetera. And it can be subtle, such as
breadcrumbs that tell us where we are on
| | 00:27 | a Web site, or obvious and impossible to
miss, such as a 404 page not found error.
| | 00:33 | Feedback should complement
the experience, not complicate it.
| | 00:37 | Do not interrupt people when they are
engaged in a task, and don't withhold
| | 00:41 | feedback when the information
may be necessary to proceed.
| | 00:45 | Provide feedback when people need it.
| | 00:48 | Feedback should be noticeable and meaningful.
| | 00:51 | Failing to acknowledge an interaction,
or providing feedback that is not noticed,
| | 00:55 | can lead to unnecessary
repetition of actions, mistakes, and errors.
| | 01:01 | There is a difference
between mistakes and errors.
| | 01:04 | A mistake is an incorrect choice, but
it does not always result in an error.
| | 01:08 | Feedback should confirm the
interaction, and the outcome, and when
| | 01:12 | interactions are important,
| | 01:14 | the action should be verified, and
people should have the opportunity to correct
| | 01:18 | or undo possible mistakes.
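A minimal TypeScript sketch of that verify-then-undo idea might look like the following; deleteAccount and restoreAccount are hypothetical application functions, not part of the course.

```typescript
// Minimal sketch: verify an important action, then leave a window to undo it.
// deleteAccount and restoreAccount are hypothetical stand-ins.
async function deleteWithConfirmation(accountId: string): Promise<void> {
  // Verify before an important, hard-to-reverse interaction.
  if (!window.confirm("Delete this account?")) {
    return; // Clear feedback: nothing happened.
  }

  await deleteAccount(accountId);

  // Acknowledge the outcome and offer a chance to correct a mistake.
  if (window.confirm("Account deleted. Undo?")) {
    await restoreAccount(accountId);
  }
}

async function deleteAccount(_id: string): Promise<void> { /* hypothetical */ }
async function restoreAccount(_id: string): Promise<void> { /* hypothetical */ }
```

A real interface would usually replace the second dialog with a less intrusive undo option, such as a brief toast message, so feedback complements the experience rather than interrupting it.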
| How the principles form a system| 00:00 | These five essential principles all
work together in a system, tied together by
| | 00:05 | observation, interaction,
understanding, and the transfer of knowledge.
| | 00:10 | When available interactions are
perceivable, and noticeable, and when their
| | 00:14 | outcomes can be accurately predicted,
people will interact with the interface.
| | 00:19 | When meaningful feedback is
provided after an interaction, people will
| | 00:23 | understand how their
actions led to the outcomes.
| | 00:27 | When people understand the feedback
from their interactions, they learn how
| | 00:31 | the interface works, and with continued
practice and observation of the interface,
| | 00:35 | their learning becomes stronger.
| | 00:37 | Once people have learned how an
interface works, they are able to transfer that
| | 00:41 | knowledge and skill to other similar interfaces.
| | 00:44 | As long as the interfaces are consistent
within themselves, and across related or
| | 00:48 | similar experiences, people will be able
to apply what they have learned, and they
| | 00:52 | will interact more efficiently and effectively.
| | 00:55 | As we craft increasingly complex
designs for a growing variety of digital
| | 01:00 | devices, remember that interaction
design is not about the behavior of the
| | 01:04 | interface; it's about the behavior of
people. This course will provide a way to
| | 01:08 | approach interaction design by
combining design methods with psychological
| | 01:13 | principles, and in the lessons ahead,
we'll be diving more deeply into these
| | 01:17 | principles, and exploring how
to apply them to our designs.

5. Understanding Context and Motivation

| Understanding the context of experience | 00:00 | When designing an interaction, we need to
know more than just what information to
| | 00:04 | display and collect.
| | 00:05 | We need to understand what people are
trying to do, how they may try to do it,
| | 00:09 | and what might interfere
with, or facilitate, their actions.
| | 00:12 | For example, I might log in to my bank
account at home on my laptop to check my
| | 00:17 | balance, to confirm that a transaction
has posted, or I might log in while in line
| | 00:22 | at the grocery store to see if I
have enough money in my account.
| | 00:25 | In both cases, I want to know my
account balance, but I need that information
| | 00:29 | for different reasons, at different
times and locations, by using different
| | 00:34 | devices, and with different urgency.
| | 00:36 | I can be more casual, and may seek
additional details at home, but I'm likely to
| | 00:40 | be more focused on a single piece
of information at the grocery store.
| | 00:44 | Retrieving the same piece of
information -- my account balance -- may have different
| | 00:49 | design solutions once the designer understands the
motivation for, and the context of, my behavior.
| | 00:56 | We use context scenarios to define the
situation, the people, and their needs, so
| | 01:01 | that we can create interaction
designs that will facilitate their behavior.
| | 01:05 | There are several questions we ask
to help us create context scenarios.
| | 01:10 | What is the situation? What's the
setting or environment in which the interface
| | 01:14 | or the device will be used?
| | 01:15 | Is it public or private? Is it conducive to the interaction?
| | 01:18 | Who will be using the device or interface?
| | 01:21 | Will it be used by one
person, or multiple people?
| | 01:24 | If multiple people are using it, will
it be shared, or will they take turns?
| | 01:29 | How long will the device
or interface be used?
| | 01:31 | Will it be for brief, moderate,
or extended periods of time?
| | 01:35 | Will the person be able to focus on their task,
or will they be interrupted while using it?
| | 01:40 | Will they be interrupted rarely,
occasionally, or frequently?
| | 01:45 | Does the experience need to be extremely
simple? How much complexity can be accepted?
| | 01:51 | What are the person's needs and goals? What
are they trying to accomplish or complete?
| | 01:57 | What's the expected outcome, or
result, of using the device or interface?
| | 02:02 | What's the urgency of the goal or need?
| | 02:05 | Is it required immediately, or can it be
done in a more relaxed and casual manner?
| | 02:10 | Only by understanding what people need
and expect, and the circumstances under
| | 02:14 | which they will engage with the device
or interface, can we design interactions
| | 02:18 | that will lead to success.
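If it helps to see those questions as a working document, here is a minimal TypeScript sketch of one way a context scenario could be captured; the field names are an editorial assumption, not a template from the course, and the example reuses the grocery-store scenario described above.

```typescript
// Minimal sketch: capturing the questions behind a context scenario.
interface ContextScenario {
  situation: {
    setting: string;              // where the interface or device will be used
    isPublic: boolean;            // public or private place?
  };
  people: {
    userCount: "single" | "multiple";
    shared: boolean;              // shared at once, or taking turns?
  };
  usage: {
    duration: "brief" | "moderate" | "extended";
    interruptions: "rare" | "occasional" | "frequent";
    acceptableComplexity: "minimal" | "moderate" | "high";
  };
  goal: {
    description: string;          // what is the person trying to accomplish?
    expectedOutcome: string;
    urgency: "immediate" | "relaxed";
  };
}

// The grocery-store banking example from this video, expressed in that structure.
const checkBalanceInLine: ContextScenario = {
  situation: { setting: "in line at the grocery store", isPublic: true },
  people: { userCount: "single", shared: false },
  usage: { duration: "brief", interruptions: "frequent", acceptableComplexity: "minimal" },
  goal: {
    description: "Check whether there is enough money in the account",
    expectedOutcome: "See the current balance at a glance",
    urgency: "immediate",
  },
};
```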
| Understanding need and motivation| 00:00 | Motivation is the force that
initiates, sustains, and directs behavior.
| | 00:05 | It drives everything we do, from eating, to
social interactions, to creativity, and exploration.
| | 00:10 | We can better understand how a
person might behave by understanding their
| | 00:14 | motivation, and what they need.
| | 00:16 | There are several ways we
can describe motivation.
| | 00:19 | Sometimes people are motivated for
external reasons, such as attention, fame, or
| | 00:24 | money, and sometimes people are
motivated for internal reasons, such as
| | 00:28 | curiosity, competition, or being helpful.
| | 00:32 | When describing motivation this way, we
often talk about the locus of control.
| | 00:36 | Intrinsic motivation is when behavior
is driven by internal factors, and
| | 00:41 | people feel in control.
| | 00:43 | They are doing something because they want
to, and because it makes them feel good.
| | 00:47 | Extrinsic motivation is when behavior is
influenced by external factors, such as
| | 00:52 | earning a salary, getting good grades, or
receiving other, often tangible, rewards.
| | 00:58 | When people are intrinsically motivated,
they feel focused, engaged, and interested.
| | 01:03 | When people are extrinsically motivated,
they're more likely to lose interest in
| | 01:07 | the activity if the rewards or
external factors are removed.
| | 01:11 | Some of the most successful experiences
and interactions are those where people
| | 01:15 | are intrinsically motivated,
because people really want to be engaged.
| | 01:20 | David McClelland describes motivation
in terms of three needs: achievement,
| | 01:25 | affiliation, and power.
| | 01:27 | The need for achievement is characterized
by the need to learn and solve problems.
| | 01:32 | The need for affiliation is
characterized by our need for family and
| | 01:36 | social interactions.
| | 01:38 | The need for power is characterized by
our drive for recognition and status.
| | 01:44 | These motives, or drives, influence
our behavior, and different people have
| | 01:48 | different levels of each.
| | 01:50 | Some people are motivated more by
social relationships, some by reaching
| | 01:54 | difficult goals, and some by
persuading and influencing others.
| | 01:59 | Victor Vroom describes motivation in
terms of three factors: expectancy,
| | 02:04 | instrumentality, and valence.
| | 02:06 | We can explain each of these as simple
statements we might make to ourselves.
| | 02:11 | For expectancy, if I put forth the
effort necessary, then I will perform well.
| | 02:17 | For instrumentality, if I perform well,
then I will receive an outcome. And
| | 02:22 | for valence, if I value the outcome, then I will
be motivated to put forth the effort necessary.
| | 02:29 | There will be little or no motivation
if we expect to fail or perform poorly,
| | 02:35 | if there will be no outcome, or the
outcome is not related to our effort, and if
| | 02:40 | we do not value the outcome, or the results.
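Vroom's model is often summarized as a simple product -- a standard textbook formulation rather than a quote from this course -- which also makes clear why motivation collapses when any single factor is missing:

$$\text{Motivational force} = \text{Expectancy} \times \text{Instrumentality} \times \text{Valence}$$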
| | 02:44 | People are more motivated, and will
actually accelerate their behavior, as they
| | 02:48 | get closer and closer to their goals.
| | 02:51 | This is called the goal-gradient effect,
and highlights the importance of helping
| | 02:55 | people understand their status and progress.
| | 02:58 | If we do not know how close we are to
our goal, or if the endpoint keeps changing,
| | 03:03 | then our motivation may decrease.
| | 03:06 | We can help people feel like they're
making progress by awarding points, or
| | 03:09 | showing the percentage of completion.
| | 03:12 | Membership programs and frequent buyer
cards -- such as buy ten, get one free -- can
| | 03:17 | give people clear goals to work toward,
and an easy way to track their progress.
| | 03:21 | Progress meters, such as file
transfer status, or the percent complete of
| | 03:26 | an online profile, clearly communicate how
much has been finished, and how much remains.
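A percent-complete meter is straightforward to compute; here is a minimal TypeScript sketch with hypothetical profile fields, offered as an illustration rather than an example from the course.

```typescript
// Minimal sketch: percent-complete feedback for an online profile.
// The profile fields are hypothetical examples.
interface Profile {
  name?: string;
  photoUrl?: string;
  bio?: string;
  location?: string;
}

function profileCompletion(profile: Profile): number {
  const fields = [profile.name, profile.photoUrl, profile.bio, profile.location];
  const filled = fields.filter((value) => value !== undefined && value.trim() !== "").length;
  return Math.round((filled / fields.length) * 100);
}

// "Your profile is 50% complete" gives people a clear sense of progress.
console.log(`Your profile is ${profileCompletion({ name: "Ada", bio: "Designer" })}% complete`);
```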
| | 03:31 | Remember, motivation may be lower at
the start of a new goal, or just after
| | 03:36 | completing a goal, because we are
far from completing our next goal.
| | 03:41 | To help people remain engaged, use
personalized contact and encouragement,
| | 03:46 | help them define their next goal, and
maybe even give them a break before helping
| | 03:50 | them start toward it.
| | 03:52 | There are many theories about
motivation, and ways to describe it, but they all
| | 03:56 | provide a framework for understanding
and describing what makes people interact,
| | 04:00 | what keeps them focused,
and what keeps them going.
| | 04:04 | Understanding motivation helps us better
define the context scenarios by helping
| | 04:08 | us understand the situation, the needs,
the expectations, and the urgency.
| Designing to meet needs| 00:01 | Context scenarios and motivation help
us understand why people may interact, but
| | 00:06 | now we need to actually engage
people, and we do this by establishing
| | 00:10 | credibility, and earning trust,
| | 00:12 | knowing when people need to interact
across multiple devices and locations,
| | 00:17 | delivering what they need when they
need it; making the experience only as
| | 00:22 | simple as it needs to be.
| | 00:24 | Credibility is essential for engaging
people. If they believe that your device
| | 00:28 | or interface will benefit them, and that
you will not deceive or take advantage
| | 00:32 | of them, then they're more
likely to interact with it.
| | 00:35 | The key factors in establishing
credibility on the Web, and in Web applications,
| | 00:39 | are expertise, trustworthiness,
and visual appeal, or visual quality.
| | 00:44 | Expertise can be established by offering
accurate content, and relevant services.
| | 00:49 | Trust can be established by making it
easy to contact you, keeping your content
| | 00:54 | current, and avoiding errors.
| | 00:56 | And visual quality can be established
with a professional appearance, strong
| | 01:01 | usability, and accessibility.
| | 01:03 | People may not complete their
interactions in a single visit or session, and
| | 01:07 | they may even have experiences that
span multiple devices and locations.
| | 01:12 | eBook services are designed to allow
people to seamlessly read their books
| | 01:16 | across multiple devices by remembering
not only what ebooks they own, but which
| | 01:21 | page they were last reading, so that no
matter when or where they open that book
| | 01:25 | again, it will be on the last page they viewed.
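One way to picture that continuity is the following minimal TypeScript sketch; saveToCloud and loadFromCloud are hypothetical stand-ins for a real sync service, not an API from the course.

```typescript
// Minimal sketch: remember where reading stopped, per person and per book.
// saveToCloud and loadFromCloud are hypothetical stand-ins for a real sync back end.
interface ReadingPosition {
  bookId: string;
  page: number;
  updatedAt: string; // ISO timestamp, so the most recently used device wins
}

const positions = new Map<string, ReadingPosition>();

function saveToCloud(userId: string, position: ReadingPosition): void {
  positions.set(`${userId}:${position.bookId}`, position);
}

function loadFromCloud(userId: string, bookId: string): ReadingPosition | undefined {
  return positions.get(`${userId}:${bookId}`);
}

// Closing the book on a phone...
saveToCloud("reader-1", { bookId: "moby-dick", page: 212, updatedAt: new Date().toISOString() });

// ...and opening it later on a tablet resumes on the same page.
const resumeAt = loadFromCloud("reader-1", "moby-dick");
console.log(resumeAt ? `Resume on page ${resumeAt.page}` : "Start at page 1");
```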
| | 01:28 | We can think about place shifting in
terms of situation, attention, and urgency.
| | 01:34 | For the situation, what is the
location and the conditions where someone is
| | 01:38 | interacting with the device or interface?
Are they at home, in a private place, or
| | 01:42 | in a cafe, or other public place?
| | 01:45 | Will people be able to focus their attention?
Are they multitasking, or likely to be distracted?
| | 01:50 | Do they need specific information
urgently, or can they take the time to look
| | 01:54 | for deeper, more complete information?
| | 01:57 | People need different information
and functionality at different times.
| | 02:01 | Traditionally, devices and interfaces
have often presented complex and deep
| | 02:06 | menus at all times,
| | 02:08 | so that people could choose and
specify what they want at any given time. But
| | 02:12 | this approach means that people need
to stop and think about where they are,
| | 02:16 | what they're doing now, and what they
want to do next, then look through a series
| | 02:20 | of menus, lists, and options to find
their choices, and decide which is the best
| | 02:25 | match for their next activity.
| | 02:27 | This interrupts their attention, their
focus, and the flow of their interaction.
| | 02:32 | The effort put into thinking is called
cognitive load, and we'll talk more about
| | 02:36 | that later in this course.
| | 02:38 | In most situations, there are only a
few things that people are likely to
| | 02:42 | want or need to do next,
| | 02:43 | so it's not necessary to present
all of the choices, all of the time.
| | 02:48 | Many devices and interactions avoid
the effort necessary to craft
| | 02:52 | context-sensitive designs by shifting the
decision-making back to the person, and this
| | 02:57 | increases their cognitive load.
| | 02:59 | It's easier to design a device or
interface that forces people to specify what
| | 03:04 | they need than to design a device or
interface that anticipates their needs, and
| | 03:09 | presents the appropriate choices,
| | 03:11 | but making people work harder to use
a device or interface is not our goal.
| | 03:16 | For example, when I go to the ATM to
make a deposit, I have to specify the
| | 03:21 | action, a deposit, the
destination account, checking or savings,
| | 03:25 | and the type of deposit: a
check, multiple checks, or cash.
| | 03:30 | A smarter system wouldn't
interrupt me with so many choices, and would
| | 03:33 | streamline my interaction.
| | 03:35 | I still need to dip my card, and log in,
and I still need to specify that I want to
| | 03:39 | make a deposit, but we could use smart
defaults based on my past behavior to
| | 03:44 | preselect my checking account if
that's where I make most of my deposits, and
| | 03:48 | use design to make it easier
to select a different account.
| | 03:52 | The bank could rely on the machine
to identify multiple checks, and cash, and
| | 03:56 | we could use design to
remind people, don't deposit coins.
| | 04:01 | I could complete my task in two fewer steps
if the design of the experience were smarter.
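To make the smart-defaults idea concrete, here is a minimal TypeScript sketch, under the assumption that we simply preselect whichever account received most of the person's past deposits. The account names and history format are hypothetical, not part of any real banking system.

```
// Hypothetical sketch: choose a smart default deposit account from past behavior.
// The account names and the shape of the history are illustrative only.
type Account = "checking" | "savings";

function defaultDepositAccount(pastDeposits: Account[], fallback: Account = "checking"): Account {
  if (pastDeposits.length === 0) return fallback;
  const counts = new Map<Account, number>();
  for (const acct of pastDeposits) {
    counts.set(acct, (counts.get(acct) ?? 0) + 1);
  }
  // Preselect the account used most often; the person can still change it.
  let best: Account = fallback;
  let bestCount = -1;
  for (const [acct, count] of counts) {
    if (count > bestCount) {
      best = acct;
      bestCount = count;
    }
  }
  return best;
}

// Example: most past deposits went to checking, so checking is preselected.
console.log(defaultDepositAccount(["checking", "checking", "savings", "checking"])); // "checking"
```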
| | 04:06 | Although we nearly always strive for
simplicity in our designs, there are times
| | 04:11 | when we want to retain a bit
of complexity in the experience.
| | 04:14 | We want to encourage people to take
the time to think about their actions, to
| | 04:18 | help them better understand the
information and process, and to reduce the
| | 04:22 | probability of errors.
| | 04:24 | This is not to say that we want to
design intentionally complex experiences,
| | 04:29 | unless we're designing
challenging games and puzzles, but sometimes
| | 04:33 | oversimplification can lead to more
mistakes, and a poorer understanding of the
| | 04:37 | information and functionality.
| | 04:40 | When do we want to be careful about
oversimplifying? When errors might be
| | 04:44 | serious, or difficult to undo, or
when mistakes may go unnoticed.
| | 04:49 | We should be careful about using
defaults, and designing an interaction that is
| | 04:53 | too smart, when it can lead to more
errors, or a lack of understanding.
| | 04:57 | For example, online banks and
brokerages need to ensure customers do not make
| | 05:02 | mistakes while managing their portfolios.
| | 05:04 | Every stock, bond, or mutual fund
purchase is a multistep process, where the
| | 05:09 | investor must choose the position,
specify the amount to purchase,
| | 05:13 | verify the position and the
amount, and then submit the order.
| | 05:16 | There is no one-click investment
option, because a mistake might cost
| | 05:21 | significant amounts of money.
| | 05:23 | Banks and brokerages understand
the value of information when making
| | 05:26 | investment choices,
| | 05:28 | so although the process could be made simpler,
it's not in the best interest of the
| | 05:33 | investor to make it simpler, because
mistakes and errors have a high cost.
| | 05:38 | By establishing trust, accommodating
place shifting, being context-sensitive,
| | 05:43 | and striving for simple, smart
experiences, we can create interfaces and devices
| | 05:49 | that will truly meet people's
needs, and minimize their effort.
| Persuasive design| 00:00 | Understanding context and what
motivates people does not guarantee that
| | 00:04 | they will interact.
| | 00:05 | Sometimes a little persuasion can help
get them started, and we can use what is
| | 00:09 | called persuasive design.
| | 00:11 | We can encourage people to interact
with a device or interface by appealing
| | 00:15 | to their emotions, establishing trust, and
incorporating these components of persuasion.
| | 00:21 | Reciprocity means that if people feel
that your device or interface has done
| | 00:25 | something for them, they may return
the favor by sharing, returning to the
| | 00:30 | site, registering, purchasing, or otherwise
committing to future engagement and interaction.
| | 00:36 | People like to tell others about
truly great experiences. People behave more
| | 00:41 | consistently and predictably when they
have made a commitment to do something.
| | 00:45 | Sites that require people to identify
themselves to make comments and contribute
| | 00:50 | content have fewer
useless comments and flame wars.
| | 00:54 | Anonymity does not encourage commitment,
but if people take the time to identify
| | 00:58 | themselves, and commit to worthwhile
contributions, they're more likely to follow
| | 01:03 | through with positive comments.
| | 01:05 | People tend to conform, and we're often
likely to do what everyone else is doing.
| | 01:10 | We observe what others are doing, then copy them.
| | 01:13 | Trends and fads influence the
behavior of many people, which is why so many
| | 01:18 | people are interested in them.
| | 01:20 | People obey, follow, and
believe authority figures.
| | 01:24 | When we are uncertain, we seek
authorities to provide guidance, and
| | 01:27 | recommendations, and to serve as models.
| | 01:30 | Influential and respected people have
many followers on social networks, because
| | 01:35 | they serve as models for them.
| | 01:38 | People are more likely to listen to,
and believe, people they like, and with
| | 01:42 | whom they are similar, which is why our
social networks are trusted sources of information.
| | 01:48 | Value and urgency increase as
something becomes more scarce. If we believe
| | 01:54 | that many others may be interested in
the same thing, then we're more likely
| | 01:57 | to make a choice, and act upon it, because
competition increases the likelihood of behavior.
| | 02:03 | Limited availability, demand, and
competition are powerful external motivators.
| | 02:09 | Remember that motivation is the force that
initiates, directs, and sustains behavior.
| | 02:15 | These components of persuasion help
engage people by enhancing motivation, and
| | 02:20 | they can make the difference when
people are choosing to interact.
6. Principles of Interface Structure
Gestalt principles| 00:00 | Studying how we perceive the world
around us has long been an area of interest
| | 00:04 | for artists, designers, and scientists.
| | 00:07 | In 1910, Max Wertheimer, a German
psychologist, noticed that a series of blinking
| | 00:12 | lights creates the illusion of motion.
| | 00:14 | Theater marquees take advantage of
this illusion, known as the phi effect, to
| | 00:18 | create the racing lights
encircling the name of the play or movie.
| | 00:22 | Modern design software uses a similar
effect, often called marching ants, to
| | 00:27 | highlight a marquee drawn to
outline an area of an image.
| | 00:30 | Wertheimer and his colleagues, Kurt
Koffka and Wolfgang Kohler, studied perception
| | 00:36 | for the next several years, and in
the 1920s, Gestalt psychology emerged.
| | 00:40 | The term Gestalt is a German word
that means shape or form, and the Gestalt
| | 00:45 | principles and laws describe how we
perceive the world around us as meaningful
| | 00:49 | and complete objects, with a clear
distinction between foreground and background.
| | 00:55 | We perceive whole objects; not
a series of independent parts.
| | 00:59 | There are several Gestalt Laws that
describe how we organize our perceptual
| | 01:03 | experiences, and we can use these
organizational principles to create designs
| | 01:07 | that are more meaningful,
and easier to perceive.
| | 01:10 | Good design, with solid perceptual
structure, actually makes it easier for people
| | 01:15 | to understand and interact with the interface.
| | 01:18 | Before we describe the Gestalt Laws,
we need to define two fundamental
| | 01:22 | organizational principles:
Figure-Ground, and the Law of Pragnanz.
| | 01:27 | Figure-Ground describes how we organize
our perceptions in terms of foreground
| | 01:31 | objects or figures, which are clearly
defined, such as a tree, and the background,
| | 01:37 | which may be unbounded or
vaguely defined, such as the sky.
| | 01:41 | The Law of Pragnanz describes how we
organize our perceptions into the simplest
| | 01:46 | possible experience.
| | 01:47 | We will interpret ambiguous, vague, or
complex objects in the simplest possible way.
| | 01:53 | The Law of Pragnanz is also sometimes
called the Law of Good Figure, or the Law
| | 01:57 | of Simplicity, which makes perfect sense.
| | 02:00 | Why would our brains expend the
effort necessary to process overly complex
| | 02:05 | perceptions, when it is easier and faster
to perceive things in a simpler way?
| | 02:10 | The Gestalt Laws help us describe
and understand how we arrive at this
| | 02:14 | perceptual simplicity, but they
also help us understand why vague or
| | 02:18 | ambiguous shapes and images can be
hard to understand, and may even lead to
| | 02:22 | perceptual illusions.
| | 02:24 | The Law of Proximity states that
objects near one another in space or time are
| | 02:29 | perceived as being a
group, and belonging together.
| | 02:31 | The Law of Similarity states that
objects with similar characteristics, such as
| | 02:36 | form, color, size, and brightness,
are perceived as belonging together.
| | 02:41 | The Law of Closure explains why incomplete
figures are perceived as complete or whole.
| | 02:48 | The Law of Common Fate describes how
objects moving together are perceived
| | 02:53 | as belonging together.
| | 02:55 | The Law of Continuity states that
objects aligned along a line or curve are
| | 03:00 | perceived as belonging together,
| | 03:02 | and we will perceive the simplest,
smooth path, rather than a complex path. And
| | 03:08 | the Law of Symmetry explains our
tendency to perceive symmetric objects as
| | 03:12 | figures on a background.
| | 03:14 | People show a preference for symmetry,
but this law also explains perceptions
| | 03:19 | that exhibit multistability;
| | 03:21 | situations where our brain flips back
and forth between two distinct perceptions,
| | 03:26 | such as this Rubin vase.
| | 03:29 | The Gestalt Laws can be used to
describe how we perceive much of the structure
| | 03:32 | and meaning in digital interfaces.
| | 03:35 | Looking at this Web page, we
can find examples of each law.
| | 03:40 | These tabs and this checkout
information form two distinct groups, because of
| | 03:45 | the Law of Proximity.
| | 03:47 | These tabs and the navigational
system strengthen our perception of groups,
| | 03:51 | because they have similar characteristics.
| | 03:55 | We perceive this rectangle for the
coupon, as a whole rectangle, even though it
| | 04:00 | is clearly obstructed, and
we cannot see all of it.
| | 04:04 | The use of the dashed stroke in
the product grid relies on the Law of
| | 04:08 | Continuity for us to perceive this as a solid
line, and our eyes follow along that dotted path.
| | 04:16 | And when we choose to add an item to the
cart, we receive feedback in the form of
| | 04:21 | a mini cart, and it slides down as a
drawer, and we perceive that as a group
| | 04:26 | because all of those pieces of
information move together; the Law of Common Fate.
| | 04:32 | Remember, the Gestalt Laws
describe how and why we perceive the world as
| | 04:37 | filled with whole distinct objects.
| | 04:39 | They're not rules for how to design.
When we encounter devices or interfaces
| | 04:44 | that are vague, ambiguous, and
difficult to understand, we can use the Gestalt
| | 04:48 | Laws to help us identify the source
of the confusion, then develop ways to
| | 04:52 | improve the design, and ensure that
people are perceiving the device or interface
| | 04:57 | in a clear and understandable way.
| Designing with grids| 00:00 | Grids have a long history in print
design, but they've not always been employed
| | 00:04 | for the design of digital interfaces.
| | 00:06 | There are some key differences between print
and digital design that affect how we use grids.
| | 00:11 | The printed page has fixed
dimensions, yet a digital screen allows us to
| | 00:15 | dynamically change the size of the content area.
| | 00:18 | We can scroll vertically and horizontally.
| | 00:20 | We can scale the screen by zooming
in and out, and we can look at the same
| | 00:24 | information on many different screen sizes.
| | 00:27 | This means that digital screens
require flexibility from grids that
| | 00:31 | printed pages do not,
| | 00:33 | and as designers, we need to think about
how information and functionality will
| | 00:37 | be displayed across many screens.
| | 00:39 | There are many different grid systems
and methods for using grids in design,
| | 00:43 | from counting columns, to ratio-based
grids, but all grid systems lend structure
| | 00:48 | and consistency to the layout.
| | 00:50 | Let's take a brief look at the major types.
| | 00:52 | Column-based grid systems often start
with a specific screen width, such as 960
| | 00:58 | pixels, then divide the screen up into
multiple columns, such as 12 or 16 columns,
| | 01:04 | often a number divisible by three or four,
and with a gap or gutter between those columns.
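As a rough sketch of that arithmetic (the 960-pixel width, 12 columns, and 20-pixel gutter below are illustrative values, not prescribed ones), the column width can be derived like this:

```
// Minimal sketch of column-based grid arithmetic (values are illustrative).
// For a total width W, n columns, and a gutter g between columns:
//   columnWidth = (W - (n - 1) * g) / n
function columnWidth(totalWidth: number, columns: number, gutter: number): number {
  return (totalWidth - (columns - 1) * gutter) / columns;
}

// Example: a 960-pixel layout with 12 columns and 20-pixel gutters
// gives columns of roughly 61.7 pixels each.
console.log(columnWidth(960, 12, 20)); // ≈ 61.67
```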
| | 01:10 | Ratio-based grid systems take a
starting screen width, and divide it into
| | 01:14 | columns based on mathematical formulas, such
as the Rule of Thirds or the Golden Ratio.
| | 01:20 | These grids are often selected for
their column sizes that mirror natural
| | 01:24 | proportions for a sense of balance.
| | 01:26 | Since these grids are proportionate,
they can be applied to nearly any screen
| | 01:30 | size, and retain their structure.
| | 01:33 | Responsive grid systems are used when
we need to create responsive designs that
| | 01:37 | adapt as screen sizes change, and we
often need to define multiple states of the
| | 01:41 | flexible grid system, and how
it will respond to scaling.
| | 01:45 | For example, four columns of Web content
on a laptop might become two columns on
| | 01:50 | a tablet, and only one column on a
mobile phone, and the width of the columns
| | 01:54 | might have minimum and maximum values
to accommodate different screen sizes.
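Here is a minimal TypeScript sketch of that idea; the breakpoints, column counts, and minimum and maximum widths are hypothetical examples, not values prescribed by any particular grid system.

```
// Illustrative sketch of responsive grid states: the breakpoints and column
// counts below are made-up examples of laptop, tablet, and phone layouts.
interface GridState {
  columns: number;
  minColumnWidth: number; // pixels
  maxColumnWidth: number; // pixels
}

function gridStateFor(viewportWidth: number): GridState {
  if (viewportWidth >= 1024) return { columns: 4, minColumnWidth: 200, maxColumnWidth: 320 }; // laptop
  if (viewportWidth >= 600)  return { columns: 2, minColumnWidth: 240, maxColumnWidth: 400 }; // tablet
  return { columns: 1, minColumnWidth: 280, maxColumnWidth: 480 };                            // phone
}

console.log(gridStateFor(1280).columns); // 4
console.log(gridStateFor(700).columns);  // 2
console.log(gridStateFor(360).columns);  // 1
```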
| | 01:59 | When using a grid system, content and
images may span just one, or several, columns.
| | 02:05 | This gives us the opportunity to
create a more flexible structure, without
| | 02:09 | requiring all content to fit into
the same column widths in all places.
| | 02:14 | It's okay to allow different content
elements to span a different number of columns,
| | 02:18 | as long as we create and adhere to
sensible rules that guide the use of the grid
| | 02:23 | across all pages and screens.
| | 02:26 | Why should we use grids when designing?
| | 02:28 | Well, grids provide structure and balance.
Keeping content and images aligned to
| | 02:34 | the grid makes it easier to scan and
read the screen, and it gives the design a
| | 02:38 | more professional appearance, which is
important for trust and credibility.
| | 02:43 | This structure is also essential
for consistency; one of our five
| | 02:46 | interaction design principles.
| | 02:49 | Grid systems for screen interfaces
typically focus more on columns, and less on
| | 02:54 | rows, because page and screen length
vary greatly with the amount of information
| | 02:58 | being displayed, and as screen sizes
change, the pages can become even longer. We
| | 03:04 | also need to remember that interfaces
often have a different appearance on
| | 03:07 | different devices, which can affect
the way information is displayed.
| | 03:11 | For example, different computers may
render font faces differently, making them
| | 03:16 | appear larger or smaller, and
even wrapping the text differently.
| | 03:20 | Since content varies, and may be
displayed differently, it's often difficult to
| | 03:24 | maintain a tight layout that locks
the content to horizontal grid rows.
| | 03:28 | Unless we know that the content will
have a fixed height, such as images in
| | 03:32 | a photo album, we should plan for content to
flow flexibly within the columns of the grid.
| | 03:38 | We often need to build grids around
constraints, such as banner ads, mandatory
| | 03:43 | image sizes, and nonwrapping
headers with maximum character counts.
| | 03:48 | When we have objects with fixed
dimensions, we often start defining our grid
| | 03:52 | system with those constraints in mind,
and we create columns that are equal, or
| | 03:56 | add up, to the fixed dimensions.
| | 03:59 | Finally, the grid system not only
provides structure and rhythm for our
| | 04:03 | design, but it also facilitates the
development process, because adhering to
| | 04:08 | a grid provides a framework for the
underlying code for the interface, and
| | 04:11 | makes it easier to keep all of the
content and images on the screen aligned
| | 04:15 | and proportionate.
| Guiding visitors with sequence, steps, and structure| 00:00 | When people see content presented in
an orderly way, and functionality located
| | 00:05 | in the same or similar places, which
produces consistency, they learn how to
| | 00:09 | look for other content and
functionality on other screens or pages, which
| | 00:13 | enhances learnability,
| | 00:14 | and their eyes and hands go to
those locations more readily, which
| | 00:17 | demonstrates predictability.
| | 00:19 | So far, we've been talking about
structure mostly in terms of spatial location:
| | 00:24 | where it's located in the interface,
but we can also provide structure in terms
| | 00:28 | of time: the sequence or
steps by which something occurs.
| | 00:32 | We need to provide structure
in terms of both place and time.
| | 00:35 | How do we provide structure for time?
| | 00:38 | How do we guide people through an
experience that may span
multiple steps or screens?
| | 00:44 | First, we need to set expectations about
the process. How long will it take? How
| | 00:48 | many steps are there?
| | 00:50 | Second, we need to break down complex
processes into simpler, more manageable steps.
| | 00:56 | And third, we need to design an
experience that provides feedback about
| | 01:00 | status and progress to lead
people forward through the interaction.
| | 01:04 | Remember the goal-gradient effect?
People are more motivated to complete a task
| | 01:08 | the closer they get to the goal.
| | 01:11 | Some of the most common interface
methods for guiding people are using numbered
| | 01:15 | steps, estimating duration, and
indicating the percent complete.
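As a small illustration of the percent-complete feedback, assuming we simply know the current step and the total number of steps (the numbers below are illustrative):

```
// Simple sketch: progress feedback for a multistep process.
function percentComplete(completedSteps: number, totalSteps: number): number {
  if (totalSteps <= 0) return 0;
  return Math.round((completedSteps / totalSteps) * 100);
}

// "Step 3 of 5" can also be shown as a percentage to reinforce progress.
console.log(percentComplete(3, 5)); // 60
```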
| | 01:19 | Examples of step by step guidance
include checkout on shopping sites, account
| | 01:23 | registration, and building a
social profile on a social network.
| | 01:28 | Pagination controls are useful when
there is much content, such as search results,
| | 01:32 | and people want to rapidly
skip to a specific point.
| | 01:36 | Page numbers, the number of items per
page, and knowing how many items are
| | 01:40 | sorted and displayed, help people
understand where they are, and where they might
| | 01:44 | find something they need.
| | 01:47 | Previous and next controls are used
when there is a linear process, which takes
| | 01:51 | people through a complex
action one step at a time.
| | 01:54 | This conceals the complexity by
showing just one task or question.
| | 01:59 | People can move backward to modify
their previous choices and answers, but often
| | 02:03 | they can only move forward to
the next step, and not skip around.
| | 02:08 | Good typographic practices provide
excellent structure and guidance.
| | 02:12 | Headers and subheaders, line height (or
leading), indentation, and bullet lists
| | 02:17 | help identify the key points.
| | 02:20 | Careful selection of font face, as well
as judicious use of text styles, helps
| | 02:24 | guide the eye when scanning, and makes
it easier to find important information.
| | 02:29 | Grids of content or images
provide both structure and implicit
| | 02:34 | sequence information.
| | 02:35 | As Westerners, we think of the top left
grid position as first, and the bottom
| | 02:40 | right position as last, but we might
also think of these grid sequences as
| | 02:45 | ranked from best to worst.
| | 02:48 | Remember, we bring our past
experiences with us, so what we have learned on
| | 02:53 | other Web sites, devices, and
interfaces influences our expectations and
| | 02:57 | predictions about how new
devices and interfaces work.
| | 03:01 | As designers, we can take advantage of
this by placing elements in places where
| | 03:05 | we know people are more likely to
look for them, and by arranging tasks in
| | 03:09 | steps that are familiar.
| | 03:10 | How do you narrow down a
large set of search results?
| | 03:15 | Do you enter your credit card number
at the beginning or end of the checkout
| | 03:19 | process when shopping online?
| | 03:21 | How do you skip to the next
song on your phone or music player?
| | 03:25 | We all have expectations about how to
do these things based on past experience,
| | 03:30 | and we'll talk about this again
when we discuss mental models.
| | 03:34 | What happens when we fail to meet
people's expectations and predictions?
| | 03:38 | What if we put things in unexpected or unrelated
places, or move them around from page to page?
| | 03:45 | What if we arrange tasks
in an unexpected sequence?
| | 03:49 | What if we present information in
random or poorly organized ways?
| | 03:53 | In all of these cases, we reduce
usability, reduce meaningfulness, increase
| | 03:58 | confusion, and increase the amount of time
it takes to understand and complete a task.
| | 04:04 | We also increase the probability that
people will abandon the interactions, and
| | 04:08 | your device or interface.
| | 04:10 | Structure and sequence are important,
not just for appearance, but also for
| | 04:15 | consistency, learnability, and predictability.
| Understanding design patterns| 00:00 | We're often faced with the same or
similar design problems repeatedly
| | 00:04 | across multiple projects.
| | 00:06 | Developers handle this by creating code
repositories, and reusable code snippets.
| | 00:11 | They're able to take previously
created code, modify it when necessary, and
| | 00:15 | apply it again on a different project to
perform a similar action, or solve a similar problem.
| | 00:20 | We might describe this as
not reinventing the wheel.
| | 00:24 | We can do something similar with
design patterns, which are optimal solutions
| | 00:29 | to recurring problems.
| | 00:30 | They help us solve similar
problems, consistently and efficiently.
| | 00:34 | We certainly learn from our past
experience, and we're able to solve similar
| | 00:38 | problems more quickly, but we could be
even more efficient if we were to leverage
| | 00:42 | a library of reusable interaction design
patterns that have already been defined
| | 00:47 | and categorized, and which are ready
to be applied to our current project.
| | 00:52 | We can take advantage of existing
design pattern libraries, and we might
| | 00:56 | even build our own,
| | 00:57 | especially if we work in a very
specialized field, like health care or finance.
| | 01:02 | It takes experience and time to
identify good design patterns, but when an
| | 01:07 | optimal solution is identified, it
can spread quickly through the design
| | 01:10 | community, and may appear in
many devices and interactions.
| | 01:14 | A good design pattern has an evolutionary path.
| | 01:18 | First it starts as an idea, or proposed
solution to a problem, and if it succeeds,
| | 01:23 | then it's remembered and reused.
| | 01:26 | Next, if other designers adopt that
solution, and they begin reusing it
| | 01:30 | successfully, it becomes a best practice.
| | 01:33 | We often borrow the best ideas from
others, and try to improve them, then when a
| | 01:38 | best practice is formally described,
recorded, and recommended as a preferred
| | 01:43 | solution, it becomes a convention.
| | 01:45 | And finally, when a convention is
formally adopted by professional organizations
| | 01:50 | as the way things are done,
it becomes a standard.
| | 01:55 | Design patterns must adapt and change,
because technology changes, people learn
| | 02:00 | and grow, devices and situations
change, and our expectations change.
| | 02:05 | What works as an optimal solution
today may not be the optimal solution next
| | 02:10 | week, next month, or next year.
| | 02:13 | Would the design solutions for a
Web site in 1996 stand up to the requirements
| | 02:19 | of mobile devices with touchscreens, or the
spatial gestures of home video game systems today?
| | 02:25 | What are some examples of
familiar and successful design patterns?
| | 02:29 | You're probably already using interaction
design patterns without even realizing it.
| | 02:34 | Accordion panes make it possible to
show a lot of content in a limited space,
| | 02:38 | by dividing the content into multiple
sections, or panes, and showing only one
| | 02:43 | pane open at a time.
| | 02:45 | Carousels are often used when we
have many images, but not enough space to
| | 02:49 | arrange them in a grid,
| | 02:51 | so we show a few at a time in a
horizontally scrolling row, and pop-ups are
| | 02:56 | layers of content that appear over
the page to provide additional, or more
| | 03:01 | specific, information.
| | 03:03 | But not all patterns are good or optimal;
| | 03:05 | Anti-patterns are commonly reinvented
but ineffective or bad solutions to design problems.
| | 03:13 | On the surface, they may seem like
a good solution, but in practice they
| | 03:17 | interfere, obstruct, or hinder the experience.
| | 03:21 | Some anti-patterns exist because
design solutions are copied from one problem,
| | 03:25 | and applied to a different problem
without being properly modified, or worse yet,
| | 03:30 | without even being the
correct solution to the problem.
| | 03:33 | Other anti-patterns exist because
a potentially relevant solution is
| | 03:37 | improperly applied.
| | 03:38 | For example, pogo sticking refers to
navigation that requires the user to drill
| | 03:45 | down, perform an action, navigate back
up, proceed to the next step or item,
| | 03:50 | then drill back down to
perform the action again.
| | 03:53 | Repeated up and down navigation to
perform the same repetitive task is common,
| | 03:59 | but very inefficient.
| | 04:01 | Idiot boxes interrupt the experience to
either ask people if they are sure they
| | 04:06 | want to perform an action that has
already been done, or to confirm an action
| | 04:10 | that has already provided
feedback in another way.
| | 04:13 | For example, after we drag and drop
something into a new location, we see it in
| | 04:18 | the new position, but then a dialog
| | 04:22 | pops up over the screen to tell us that we
| | 04:22 | successfully dragged
something to a new position.
| | 04:26 | Some patterns are even malicious. Dark
patterns are not mistakes, or poor design.
| | 04:32 | They are intentional design decisions
created to mislead, and misdirect us, to
| | 04:37 | make choices or perform
actions we would not otherwise do.
| | 04:42 | They take advantage of ambiguity,
inattention, forgetfulness, distraction,
| | 04:47 | and even fear and anxiety to direct us
toward actions that are not in our best interests.
| | 04:52 | Unfortunately, most of us
have encountered dark patterns.
| | 04:56 | Here are a few examples: warning
messages that tell us our computer has been
| | 05:01 | infected when it hasn't,
| | 05:03 | but clicking on the Fix It Now
button actually installs the malware that
| | 05:07 | infects our system.
| | 05:09 | Ambiguously written text that makes it
impossible to determine what will happen
| | 05:13 | when we take action.
| | 05:14 | For example, am I subscribing or
unsubscribing to these e-mail announcements?
| | 05:21 | And ads that make it look like the
legitimate functionality of the Web site or
| | 05:25 | application, but trick people into
clicking on the ad, because they think they're
| | 05:29 | interacting with the site.
| | 05:32 | Many of us work in fields where we
encounter the same or similar design
| | 05:36 | problems regularly, but the solutions
are far more specific and specialized than
| | 05:40 | accordion panes or tool tips,
| | 05:42 | so we need to create our own pattern libraries.
| | 05:45 | Your design pattern library may be as
large and well-defined as you need, but
| | 05:50 | every pattern definition has
a few common characteristics:
| | 05:54 | the name of the pattern, a description
of the pattern, the context in which the
| | 06:00 | problem occurs, and the solution to the problem.
| | 06:04 | Some libraries also include additional
information to better define the design
| | 06:09 | pattern, such as identifying the
principles that underlie the solution, an
| | 06:13 | explanation of why the solution works,
and examples of the pattern in use.
| | 06:19 | Design pattern libraries can also
include reusable graphics, symbols, assets, and
| | 06:23 | styles, in addition to the
description of the solutions.
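One way to record those characteristics is as a structured entry. This TypeScript sketch is just one possible shape for a pattern library record, with the optional extras mentioned above modeled as optional properties; the field names are assumptions, not a formal schema.

```
// Sketch of a design pattern library entry; field names are one reasonable
// interpretation of the characteristics described above.
interface DesignPattern {
  name: string;         // the name of the pattern
  description: string;  // what the pattern is
  context: string;      // the context in which the problem occurs
  solution: string;     // the recommended solution to the problem
  // Optional extras some libraries include:
  principles?: string[]; // principles that underlie the solution
  rationale?: string;    // why the solution works
  examples?: string[];   // references to the pattern in use
  assets?: string[];     // reusable graphics, symbols, or styles
}

const accordion: DesignPattern = {
  name: "Accordion panes",
  description: "Show a lot of content in limited space by dividing it into panes.",
  context: "Long content must fit in a constrained area without overwhelming the person.",
  solution: "Split content into labeled sections and show only one pane open at a time.",
};
```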
| | 06:26 | Just as developers have code snippets
and libraries to copy from, designers can
| | 06:31 | create and maintain design
libraries with reusable design elements.
| | 06:36 | Nearly all design software tools have
the ability to create symbols and styles,
| | 06:40 | and most designers maintain libraries
of assets and images, such as tabs and
| | 06:44 | buttons that can be leveraged.
| | 06:46 | Take advantage of your past work by
creating your own design pattern libraries,
| | 06:51 | and collections of assets, symbols, and
styles to facilitate the application of
| | 06:56 | those design patterns.
7. Navigation Best Practices
Effective navigation| 00:00 | Effective navigation, or moving around
within a Web site, application, or game, is
| | 00:05 | essential for findability,
and successful interactions.
| | 00:08 | Only the smallest Web sites and
applications consist of a single page or screen.
| | 00:12 | The vast majority have much more content
and functionality, spread across many pages.
| | 00:17 | People move among pages or screens in three
main ways: the navigational system, the
| | 00:23 | search system, and through
contextual links in the body of the page.
| | 00:27 | All three of these must work together,
and they all rely on having an effective
| | 00:31 | organizational system.
| | 00:33 | The most common way to represent the
structure of a Web site is through a sitemap,
| | 00:37 | sometimes also called the site architecture.
| | 00:39 | This diagram shows how pages are
organized in the overall hierarchical
| | 00:43 | structure of the site.
| | 00:44 | Although there are many ways to get
to pages in a Web site, the sitemap only
| | 00:48 | illustrates the
navigational paths among the pages.
| | 00:52 | Sitemaps do not show all of the possible
links among all pages, because it would
| | 00:56 | become a very complex, and probably
illegible, diagram if we tried to show that.
| | 01:01 | The sitemap is best for capturing the
breadth, depth, and structure of the site.
| | 01:05 | But good structure and
organization are more than just a sitemap.
| | 01:10 | We also need to deeply understand
the content and information on the site,
| | 01:13 | and how it's organized.
| | 01:15 | Classifying and categorizing
information has traditionally been a more
| | 01:19 | specialized information architecture task, and
on very large Web sites, this is still the case.
| | 01:25 | Categorization is identifying the
groups or categories of information, and
| | 01:29 | classification is assigning the
information to the appropriate categories, and
| | 01:34 | then adding descriptive details.
| | 01:37 | We use what is called a taxonomy to
record the hierarchical categories.
| | 01:41 | A great example comes from biology,
where we all learn how to sort living things
| | 01:45 | into kingdom, phylum, class,
order, family, genus, and species.
| | 01:51 | Just as every creature on earth can
be classified with this system, all
| | 01:55 | information on a Web site can be
classified into structured categories,
| | 01:58 | and we represent that
information structure with a taxonomy.
| | 02:02 | For example, when shopping for shoes,
there is a hierarchical structure to the
| | 02:06 | information, and this is reflected in both
the sitemap, and the navigation of the Web site.
| | 02:11 | From the homepage of the store, we can
choose Shoes, Men's, Athletic, Running,
| | 02:16 | and we see an assortment of
shoes from which to select.
| | 02:19 | This is a taxonomic system for shoes.
| | 02:22 | Additional descriptive details,
called metadata, can help us find what we
| | 02:26 | want more effectively.
| | 02:27 | These details are actually not part of
the hierarchical categories, but they are
| | 02:32 | important for helping us
navigate and search more effectively.
| | 02:35 | Metadata include tags, labels,
descriptors, and attributes.
| | 02:39 | These additional data help us narrow
down large sets of information to smaller
| | 02:44 | sets that are more relevant.
| | 02:46 | For example, once I have arrived at
men's running shoes, I may be faced with
| | 02:49 | hundreds or thousands of choices.
| | 02:52 | How will I find the best pair of shoes for me?
| | 02:55 | I can narrow down my choices with filters,
such as size, color, brand, weight, and price.
| | 03:01 | Thousands of possible shoes can be
quickly reduced to just a dozen or two, which
| | 03:05 | is much easier for me to review,
| | 03:07 | and I know that I am more likely to
find something that matches what I need.
| | 03:11 | The navigational system of a
Web site or application, and the ability to
| | 03:15 | search, are global features; they're
always present and available, no matter
| | 03:19 | where we are on the site.
| | 03:21 | Contextual links vary from page to page,
but they are always identifiable, even
| | 03:26 | though they usually appear
within the content and functionality.
| | 03:29 | Although the appearance of the
navigation system and contextual links may
| | 03:33 | differ, there are some underlying
techniques to ensure they are findable,
| | 03:37 | usable, and effective.
| | 03:39 | The navigation system itself is placed
in a location on the site that is stable
| | 03:43 | and consistent throughout.
| | 03:45 | We often see navigation bars running
horizontally across the page in the header,
| | 03:50 | or a navigation column running
vertically down the left side.
| | 03:54 | But on mobile sites, the navigation may
be rolled up into a menu button, or placed
| | 03:58 | at the bottom of the page, below the content.
| | 04:01 | Where we choose to place the navigation,
and how much we choose to make visible,
| | 04:05 | depends on the depth and complexity of the site,
| | 04:08 | the expertise of the people using it, and
the device on which it is most often viewed.
| | 04:13 | We use a horizontal navigation bar when
a large number of categories is unlikely,
| | 04:18 | and when the persistent visibility of
the navigation is helpful to people.
| | 04:22 | Horizontal navigation is constrained by
the width of the screen on which it is
| | 04:26 | viewed, and it often has drawers or
dropdowns that open to expose the lower
| | 04:31 | levels of navigation.
| | 04:33 | We use a vertical navigation column
when there may be a large number of
| | 04:36 | categories, and when being able to see
the navigational hierarchy is helpful.
| | 04:41 | Vertical navigation can include and
show many categories, but a long list
| | 04:46 | may disappear below the bottom edge
of the browser window, which we call
| | 04:50 | falling below the fold.
| | 04:52 | Vertical navigation often opens up
like a file tree to expose the lower
| | 04:56 | levels of navigation.
| | 04:59 | We may also collapse the navigation into a
menu that must be opened with a click or tap.
| | 05:04 | We do this for small screens with
limited space, or when people are experts
| | 05:08 | with the site, and do not need the navigation
visible at all times in order to be efficient.
| | 05:13 | Navigation systems can be complex, and
we often see hybrids that combine both
| | 05:18 | horizontal and vertical location,
but all navigation should help people
| | 05:22 | understand where they are, where
they can go, and the organizational
| | 05:26 | structure of the site.
| | 05:28 | The most common way to present
contextual links in the body of a page or screen
| | 05:32 | is with text links, buttons, and tabs.
| | 05:34 | Text links were traditionally blue and
underlined, and in the early days of the
| | 05:40 | Internet, people were explicitly instructed
to click here, to teach them how to use links.
| | 05:45 | Now people understand text links,
and they have learned that just about any
| | 05:49 | text that looks different from the
text around it may be a link.
| | 05:53 | We no longer need to adhere to the
blue and underlined rule of the early
| | 05:56 | days, but we still need to ensure
that text links are distinct from the
| | 06:00 | surrounding content.
| | 06:02 | Buttons and tabs are common graphical
links, and they should have hover, or over,
| | 06:07 | and down, or pushed, states to provide
feedback when they are clicked or tapped.
| | 06:11 | We will go into this in more detail later.
| | 06:15 | When content is presented as a
sequence of pages or screens, such as a photo
| | 06:19 | gallery, or a set of product reviews, we
often navigate through them using both
| | 06:24 | an index page, and with next and
previous buttons, or pagination controls.
| | 06:29 | The index page uses links to jump
directly to any page or screen. For example, a
| | 06:34 | grid of images lets us choose which
image we want to view, and a list of
| | 06:38 | articles lets us choose
which we want to read first.
| | 06:41 | We don't have to start at
the first item in the list.
| | 06:44 | Next and previous buttons are also
contextual links that help us move forward or
| | 06:48 | backward, one item or page at a time.
| | 06:51 | These buttons may be placed near the relevant
content to make it clear what will be changing.
| | 06:57 | For many years in Web design, new
content was simply added to the bottom of an
| | 07:01 | existing page or list,
| | 07:03 | so we thought of the top as being the
oldest, and the bottom as the most recent,
| | 07:07 | such as comments on blogs.
| | 07:09 | But social networking sites have
changed our expectations of time when
| | 07:13 | information is presented vertically,
because the newest posts and comments
| | 07:17 | appear at the top, and the
older content is lower on the page.
| | 07:21 | We still see both of these
arrangements in use today, so make certain you
| | 07:25 | understand the expectations of your
audience when you choose where to put
| | 07:28 | the newest content.
| | 07:29 | One of the most common concerns
about the navigation system is depth.
| | 07:35 | Site owners often ask, how deep is this
site? How many clicks or taps does it
| | 07:39 | take to get somewhere?
| | 07:41 | You may hear generalized statements,
such as, more than three clicks is too
| | 07:45 | many, but rather than concern ourselves
with counting clicks or taps, we should
| | 07:50 | focus on the person's perception of progress.
| | 07:53 | In other words, if every click or tap
makes sense, feels like it moves the
| | 07:58 | person closer to their goal, and if
they never ask, why am I not there yet?
| | 08:02 | Then the site is not too deep.
| | 08:04 | If it takes eight clicks or taps to get
somewhere, and every click or tap felt right
| | 08:08 | to the person, then it's okay.
| | 08:11 | However, when people
start asking, am I there yet?
| | 08:14 | How much farther? And why is this taking so long?
| | 08:18 | Then there is probably a problem with
the depth and organization of the site, or
| | 08:21 | the sequence of steps to complete a task.
| | 08:24 | An organized navigation system helps
people find what they need. As long as we
| | 08:29 | feel like we're moving forward toward
our goal, and each click or tap makes
| | 08:33 | sense, then the structure and depth is good.
| Searching and filtering| 00:00 | Although navigation is visible, and
persistently available, search is also important.
| | 00:05 | Search is an expected feature of
nearly all Web sites and applications.
| | 00:09 | People have been trained by search
engines to expect to be able to find and jump
| | 00:13 | directly to the information they need.
| | 00:15 | Poor search experiences are just as
likely to lead to abandonment as poor
| | 00:19 | navigational systems.
| | 00:21 | Keyword search is the most common type;
we only need to enter the important words, or
| | 00:23 | keywords, to get a set of matching results.
| | 00:28 | However, natural language search is
becoming more common, especially as more
| | 00:32 | people use voice recognition.
| | 00:34 | Natural language search allows people
to speak or type in sentences, the same
| | 00:38 | way they would speak to other people,
| | 00:40 | and the search system processes that
search request for meaning, identifies the
| | 00:44 | keywords, and returns relevant results.
| | 00:47 | For traditional keyword search to
find the capital of France, we might just
| | 00:51 | enter capital France,
| | 00:52 | but for natural language search, we
might speak, what is the capital of France?
| | 00:57 | Both of those search queries
would return Paris as the answer.
| | 01:01 | Natural language search
feels more, well, natural to use,
| | 01:05 | but it requires more computing power
to identify the keywords in the context
| | 01:09 | of the search query.
| | 01:10 | This is a great example of designing
smarter experiences that make the computer
| | 01:15 | do all the work, rather than pushing the
cognitive effort back on the person, and
| | 01:19 | making them decide, what are
the most important keywords?
| | 01:23 | Search can also make real-time
recommendations, suggestions, and matches based on
| | 01:28 | what we are typing into the search box.
| | 01:30 | Using the partial keyword entry,
context, and even past searching behavior, the
| | 01:35 | search engines can identify likely
matches, and present them as we type.
| | 01:39 | It is even possible to present the
suggestions not just as a list of likely
| | 01:44 | matches, but in groups that reflect
the taxonomy, metadata, and even social
| | 01:49 | sharing patterns of the
information being searched.
| | 01:51 | For example, on rdio, in my account,
I have the artist Annie Lennox,
| | 01:57 | so when I began typing Annie, the top
hit is an artist the system knows I
| | 02:01 | like: Annie Lennox.
| | 02:03 | But it also suggests other artists
named Annie, albums, and songs with Annie in
| | 02:07 | the title, playlists that have been
named Annie, and even users named Annie.
| | 02:14 | Search queries often return too many
matching results, so we need a way to
| | 02:18 | narrow down and arrange the
results into a more manageable set.
| | 02:22 | We use sorting to rearrange the
list into different sequences, such as
| | 02:26 | alphabetical, newest to
oldest, or lowest to highest price.
| | 02:31 | Sorting does not reduce the size
of the list; it just reorders it.
| | 02:35 | We use filtering to reduce the number
of matching results in the list, such as
| | 02:40 | choosing to see only shirts that
are large, blue, and made of cotton.
| | 02:44 | When we reduce the size of a set of search
results, we are doing what is called winnowing;
| | 02:49 | we are removing the information
that does not match what we need.
| | 02:53 | Although we may change the keywords to be
more specific, we often rely on the filters.
| | 02:58 | Most people find it easier to narrow
down a large set of results with filters,
| | 03:02 | than to think of more effective keywords.
| | 03:05 | However, filters may actually work
in two different ways. Subtractive filters
| | 03:09 | always reduce the size of the result set.
| | 03:12 | Additive filters sometimes
increase the size of the result set.
| | 03:17 | Let's take a look at a few examples.
| | 03:20 | If I go shopping on the Kohl's site for
men's shirts, you can see that there are
| | 03:24 | more than a thousand matches, but I'm
interested in large shirts that are blue,
| | 03:29 | so I begin to filter.
| | 03:31 | When I add the large filter, you can see that
the matching set reduces to just over 600,
| | 03:36 | and when I add the color blue, that set
gets smaller again; now only 238 matches.
| | 03:44 | This is subtractive filtering,
because the list is getting smaller.
| | 03:48 | Now let's take a look at additive filtering.
| | 03:51 | If I am shopping on Zappos for
men's shirts, you can see there's more than
| | 03:56 | 6000 matching items,
| | 03:58 | but as I begin to add filters,
once again that set gets smaller.
| | 04:01 | When I add the size large, now there are
only 3900 matches, and if I add the color
| | 04:06 | blue, we will see that
it gets smaller yet again.
| | 04:11 | Now, large and blue, there are 555
matching shirts, but if I decide that I think
| | 04:17 | maybe medium or large is my size, I
can add the size medium to the query.
| | 04:23 | Now that number has gone up to 674,
because I'm seeing shirts that are large,
| | 04:29 | and medium, and blue.
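A small sketch, using made-up product data, shows why subtractive filtering can only shrink the result set while additive filtering can grow it again when a second value is added within the same facet:

```
// Sketch contrasting subtractive and additive filtering; the product data and
// field names are invented for illustration.
interface Shirt { size: "S" | "M" | "L"; color: string; }

const shirts: Shirt[] = [
  { size: "L", color: "blue" },
  { size: "M", color: "blue" },
  { size: "L", color: "red" },
  { size: "S", color: "blue" },
];

// Subtractive: each added facet value further narrows the set (size AND color).
const subtractive = shirts.filter(s => s.size === "L" && s.color === "blue");

// Additive: values within a facet are ORed (size L OR M), then ANDed with color,
// so adding "M" to an existing "L" filter can make the result set grow.
const additive = shirts.filter(s => (s.size === "L" || s.size === "M") && s.color === "blue");

console.log(subtractive.length); // 1
console.log(additive.length);    // 2
```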
| | 04:32 | People often cannot accurately
predict how filters will work, but once they
| | 04:36 | begin interacting with them, they
quickly learn and understand, because the
| | 04:40 | matching results provide effective feedback.
| | 04:43 | We also need to think about how we
are going to present the search results.
| | 04:47 | Different types of information can be
presented more effectively with different layouts.
| | 04:52 | News and information sites typically
show their search results as a list of
| | 04:56 | matching pages or articles.
| | 04:59 | Shopping and photography sites show
their search results as a grid of images.
| | 05:04 | Airline and travel sites show their
search results as tables, where each row is
| | 05:08 | a match, and each column contains
different relevant information.
| | 05:12 | Search engines often display matching
results in different or hybrid formats
| | 05:16 | based on the information.
| | 05:18 | We may see matching images included in
a list, preview images of Web sites, and
| | 05:22 | tabular data for detailed information.
| | 05:25 | Search results are often a very
large set of information, even after being
| | 05:29 | filtered and sorted,
| | 05:31 | so we need a clear and simple way to
show the results in manageable sets.
| | 05:35 | We also need to consider bandwidth or
connection speed, because people on slow
| | 05:39 | connections do not want to
wait for everything to load,
| | 05:42 | so we should show the results in smaller sets.
| | 05:45 | Pagination is the most common solution;
| | 05:47 | we present a specific number of
results per page, and allow people to easily
| | 05:51 | move from page to page.
| | 05:53 | We might also give them the ability to
change the number of results per page, or
| | 05:57 | even the option to view all if they want.
| | 06:00 | Pagination controls should always
indicate the page you're currently on, the
| | 06:04 | total number of pages, and have the
ability to go to the next or previous page.
| | 06:09 | We also typically show at least a limited range
of numbers on either side of the current page.
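Here is a brief sketch of the arithmetic behind those controls; the page size, result count, and window radius are illustrative assumptions:

```
// Sketch of pagination math: total pages plus a limited window of page numbers
// around the current page. The window radius of 2 is an arbitrary example.
function totalPages(totalItems: number, perPage: number): number {
  return Math.max(1, Math.ceil(totalItems / perPage));
}

function pageWindow(current: number, total: number, radius = 2): number[] {
  const start = Math.max(1, current - radius);
  const end = Math.min(total, current + radius);
  const pages: number[] = [];
  for (let p = start; p <= end; p++) pages.push(p);
  return pages;
}

const pages = totalPages(238, 24); // e.g. 238 results at 24 per page
console.log(pages);                // 10
console.log(pageWindow(6, pages)); // [4, 5, 6, 7, 8]
```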
| | 06:14 | A more recent design pattern for
displaying large sets of information is
| | 06:18 | called infinite scrolling. Let's take a look.
| | 06:22 | On this site, when it first loads, it
preloads a few screens of information,
| | 06:27 | but as I scroll down -- watch the scroll
bar on the right hand side -- and I approach
| | 06:32 | the bottom of the content that was
initially loaded, you can see the Web site
| | 06:36 | loads more, and the page just gets longer.
| | 06:39 | I never quite reach the bottom of the
page, but as I approach it, the Web site
| | 06:43 | loads more, and it gets longer.
| | 06:47 | This is infinite scrolling.
| | 06:50 | The site manages bandwidth requirements
by only loading more information as we
| | 06:54 | approach the end of the current set.
| | 06:56 | If we do not scroll down, then
no more information is loaded.
| | 07:00 | An alternate form of infinite
scrolling does not load more information
| | 07:04 | automatically; it waits for people to
reach the bottom of the page, gives them
| | 07:07 | the option to load more,
which then makes the page longer.
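A minimal browser sketch of the automatic variant might look like this; loadNextBatch is a hypothetical function that fetches and appends the next set of results, and the 300-pixel threshold is an arbitrary choice:

```
// Browser sketch of infinite scrolling: load the next batch as the person
// nears the bottom of the currently loaded content.
declare function loadNextBatch(): Promise<void>; // hypothetical fetch-and-append

let loading = false;

window.addEventListener("scroll", async () => {
  const distanceFromBottom =
    document.documentElement.scrollHeight - (window.scrollY + window.innerHeight);
  if (!loading && distanceFromBottom < 300) {
    loading = true;        // avoid firing several loads for one scroll burst
    await loadNextBatch(); // append the next set of results to the page
    loading = false;
  }
});
```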
| | 07:11 | Two important topics when discussing
search are search engine optimization, SEO,
| | 07:17 | and search engine marketing, SEM.
| | 07:18 | SEO focuses on improving the rank and
placement of a Web site in the search
| | 07:24 | results for natural or organic search based
on keywords, not paid advertising and placement.
| | 07:30 | SEM focuses on improving the rank and
placement of a Web site in the search
| | 07:34 | results through paid placement and advertising.
| | 07:37 | This is a deep and complex field beyond
our discussion here today, but if you're
| | 07:42 | interested in SEO, there is more
information available here on Lynda.com.
| | 07:47 | Many people arrive at Web pages via
search, which means that they may not have
| | 07:51 | started on the homepage,
and navigated in more deeply.
| | 07:54 | This is why it's essential that a
site's navigational system be clear,
| | 07:58 | meaningful, and predictive.
| | 08:00 | If people start at the middle, they
need to be able to identify where they are,
| | 08:04 | and where they can go from here.
| | 08:05 | More about this in a moment, when
we discuss having a sense of place.
| Contextual relevance| 00:00 | The traditional approach to navigation
has been to help people locate and go to
| | 00:04 | the information or functionality they
need, through a series of steps or pages,
| | 00:08 | much like a dictionary or telephone book.
| | 00:11 | We flip through the pages arranged by
the structure of the information until we
| | 00:14 | get to the content we seek.
| | 00:16 | Search has changed the way we find
information; now we tell a Web site or
| | 00:20 | application what we want, and
it brings the content to us.
| | 00:23 | Links operate parallel to the
navigation and search, and they connect
| | 00:27 | related pieces of content.
| | 00:29 | They are direct paths based
on relevance and meaningfulness.
| | 00:33 | The original concept of the hyperlink
was that all information could be
| | 00:36 | interconnected to all
other related information.
| | 00:40 | This is still how we browse, and this is
how we are able to casually spend hours
| | 00:44 | watching funny videos of cats.
| | 00:46 | But when we know what we need, we
don't want to browse or navigate along long
| | 00:50 | paths to get to it; we want shortcuts,
or better yet, we want the Web site or
| | 00:55 | application to make that information
available to us right where we are.
| | 01:00 | There are three ways that Web sites
and applications can provide people with
| | 01:03 | contextually relevant
content at the moment they need it.
| | 01:07 | Shortcut links go
directly to another related page.
| | 01:10 | Contextual menus, accessed with a click
or tap, present options relevant to the
| | 01:15 | information selected.
| | 01:17 | Dynamic layers deliver content or
functionality above the current page, and can
| | 01:22 | be easily closed when we're finished.
| | 01:24 | For example, a calculator tool can be
accessed when needed on a banking Web site.
| | 01:29 | Although we've had layers of information for
decades, they haven't always been effective.
| | 01:34 | Remember when we might have dozens
of separate browser windows open at the
| | 01:37 | same time, and pop-up ads that
couldn't be closed fast enough? All of these
| | 01:42 | early terrible experiences taught people to
avoid layers of content in multiple windows.
| | 01:47 | A few years ago, our Web browsers gave
us tabs instead of multiple windows, and
| | 01:52 | made it possible to display content
that appears above the Web page, but is
| | 01:56 | actually still part of the page.
| | 01:58 | We've become more familiar and
comfortable with layers of information, and they
| | 02:02 | are now being used more effectively.
| | 02:04 | We no longer think only about moving
forward and going back on Web sites; we also
| | 02:09 | think about moving up and down
through layers or stacks of information.
| | 02:14 | It doesn't confuse us to have new
layers appear on top of our current layer. We
| | 02:19 | know we can close the layer, or move it
out of the way to go back down to where
| | 02:23 | we were. Down is the new back.
| | 02:26 | When using layers, we don't always
cover all of the underlying page. We often
| | 02:30 | allow some of the information
underneath to peek out and remain visible, as a
| | 02:34 | reminder of where the person was when
the layer appeared, and where they'll
| | 02:38 | return when the layer is closed.
| | 02:40 | There are a few key considerations
to ensure that the design maintains
| | 02:44 | context for the person.
| | 02:46 | Can the layer be dragged around to expose
different parts of the underlying page or screen?
| | 02:50 | Sometimes people need to be able to
see information from the underlying page
| | 02:55 | in order to complete a task in a layer, so
being able to move the layer can be helpful.
| | 03:00 | Is it clear how to close and exit the layer?
| | 03:04 | There should be clear icons and links
for closing, and often people can simply
| | 03:08 | click or tap outside of the layer to close it.
| | 03:11 | Will there be an obfuscation layer?
That is, a semitransparent layer that
| | 03:15 | partially conceals the underlying page.
| | 03:18 | We use obfuscation layers when we want
people to be focused on the content layer,
| | 03:23 | and when they do not need to see the
information below it, yet we still want to
| | 03:26 | maintain the context of the original page.
| | 03:30 | If done correctly,
multiple layers can be stacked;
| | 03:33 | it just needs to make sense to us.
| | 03:35 | We need to maintain context, and have a
clear way of understanding that the top
| | 03:39 | layer is associated with the content and
functionality below it, that there is a
| | 03:43 | way to close the current layer to go
back, and there is a way to move the layer
| | 03:48 | aside if we need to refer
to the information below it.
| | 03:51 | Let's take a look at how
multiple layers can work.
| | 03:56 | On my Google+ account, in my photo
albums, at the lowest level, you see the
| | 04:01 | images, but when I mouse over one, it
expands in a content layer to show me a
| | 04:06 | full preview of that image. And when
I select it, it loads it into a layer
| | 04:11 | above my photo album to show me the
image, and the comments that are
| | 04:15 | associated with it.
| | 04:17 | And if I choose to edit this photo, I
can open the Creative Kit, and it starts
| | 04:23 | another layer on top of the
original comments and photo layer.
| | 04:28 | And now, at this point, I'm actually
three layers deep in content: the editing
| | 04:33 | layer, the photo and comments layer, and the
original photo album layer beneath it all.
| | 04:41 | When using layers,
| | 04:42 | make certain that all of the information
and functionality is presented in that layer.
| | 04:46 | We would not rely on people remembering
information from one page to another, so
| | 04:51 | we should not rely on them remembering
information from one layer to another.
| | 04:56 | We should be looking for ways to
design smarter interactive experiences.
| | 05:00 | Stop the scavenger hunt.
| | 05:02 | Don't force people to go
looking for what they need.
| | 05:05 | Instead, design Web sites and applications
that bring content and functionality
| | 05:09 | to the person when they
need it, such as using layers.
| | 05:14 | Allow them to interact with it, where
they are, rather than forcing them to
| | 05:18 | leave where they are, and what they're
doing, and navigate somewhere else, only
| | 05:22 | to return to this place when they're finished.
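To make these layer guidelines concrete, here is a minimal TypeScript sketch of a content layer with a semitransparent obfuscation layer behind it. It only illustrates the ideas above; the markup, class names, and styling are assumptions, not taken from any particular site or framework.

```typescript
// Minimal sketch of a dismissible content layer. Class names and inline
// styles are illustrative assumptions for this example only.
function openLayer(contentHtml: string): void {
  // Semitransparent obfuscation layer that dims, but does not hide,
  // the underlying page, preserving context.
  const scrim = document.createElement("div");
  scrim.className = "scrim";
  scrim.style.cssText = "position:fixed;inset:0;background:rgba(0,0,0,0.4);";

  // The content layer itself, stacked above the scrim.
  const layer = document.createElement("div");
  layer.className = "content-layer";
  layer.style.cssText =
    "position:fixed;top:10%;left:50%;transform:translateX(-50%);" +
    "background:#fff;padding:1.5rem;max-width:40rem;";
  layer.innerHTML = contentHtml;

  // A clearly labeled close control, plus two other ways back "down":
  // clicking outside the layer, or pressing Escape.
  const closeButton = document.createElement("button");
  closeButton.textContent = "Close";
  layer.appendChild(closeButton);

  const close = () => {
    scrim.remove();
    layer.remove();
    document.removeEventListener("keydown", onKey);
  };
  const onKey = (e: KeyboardEvent) => {
    if (e.key === "Escape") close();
  };

  closeButton.addEventListener("click", close);
  scrim.addEventListener("click", close); // click or tap outside closes
  document.addEventListener("keydown", onKey);

  document.body.append(scrim, layer);
}
```

Because the scrim only dims the underlying page, the person keeps their context, and because the Close button, a click on the scrim, and the Escape key all dismiss the layer, there is always a clear way back down.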
| Sense of place| 00:00 | As we discuss navigation, we
stress the importance of context, and knowing
| | 00:04 | where we are in a Web site or application.
| | 00:07 | When we move through a site or
application, we develop a sense of place.
| | 00:11 | We know where we are, how we got there, and
we have expectations about where we can go.
| | 00:15 | We even develop a mental model of the
overall structure and organization of
| | 00:19 | the site in our minds.
| | 00:20 | It's like building a sitemap in
our heads by using our experience.
| | 00:24 | This ability to develop a sense of
place, and to predict where we might find
| | 00:28 | things, is called the scent of
information, and it is based on the research of
| | 00:32 | Peter Pirolli, who studied
information foraging theory.
| | 00:36 | Essentially, we identify a trail of
information. One piece leads to another,
| | 00:41 | through meaningful connections, and
when that trail is broken through a bad
| | 00:45 | link, inaccurate search, or poor
navigation, we need to go back, pick up the
| | 00:50 | trail, and start again.
| | 00:52 | But when we have good context and
meaning, and a strong sense of place, people do
| | 00:56 | not need to go back and start over.
| | 00:59 | When we see people going back again and
again, we know that we have a poor scent
| | 01:03 | of information, and the
opportunity to make it better.
| | 01:07 | We can establish context and a sense of
place with just a few simple techniques.
| | 01:12 | First, use the navigation system
itself as an indicator to show the location.
| | 01:17 | Highlight the labels that
led to the current page.
| | 01:20 | People can look at the
navigation to identify their section.
| | 01:24 | Second, breadcrumbs, often placed just
below the navigation, show the specific path
| | 01:30 | in the site map that leads to the
current page, and each entry in the breadcrumb
| | 01:34 | is a shortcut link to that
hierarchically higher page.
| | 01:38 | Third, page headers, subheaders, and
content cues, such as bullet lists, and
| | 01:44 | images, help identify the current location.
| | 01:47 | Although the content of the current
page or screen may not reveal the actual
| | 01:51 | location in the sitemap, the content
can help clarify the relationship with
| | 01:56 | prior pages or screens.
| | 01:58 | And on a more technical note, even
the address, or URL, of the page can help
| | 02:02 | communicate a sense of place for some people.
| | 02:05 | Human readable URLs often expose
the structure of the sitemap, so savvy
| | 02:10 | people who lose their sense of place
may be able to look at the URL to gain a
| | 02:14 | better understanding.
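As a rough sketch of how a breadcrumb trail can expose the path to the current page, the TypeScript below derives crumbs from a human-readable URL. The path segments and the label lookup are hypothetical examples standing in for a real sitemap.

```typescript
// Sketch: derive a breadcrumb trail from a human-readable URL path.
// The label lookup is a stand-in for a real sitemap (an assumption
// for illustration only).
const labels: Record<string, string> = {
  products: "Products",
  cameras: "Cameras",
  "digital-slr": "Digital SLR",
};

function breadcrumbsFor(path: string): { label: string; href: string }[] {
  const segments = path.split("/").filter(Boolean);
  const crumbs = [{ label: "Home", href: "/" }];
  let href = "";
  for (const segment of segments) {
    href += `/${segment}`;
    // Each entry is a shortcut link to the hierarchically higher page.
    crumbs.push({ label: labels[segment] ?? segment, href });
  }
  return crumbs;
}

// Example: "/products/cameras/digital-slr" yields
// Home > Products > Cameras > Digital SLR, each linked to its own page.
console.log(breadcrumbsFor("/products/cameras/digital-slr"));
```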
| | 02:16 | We can often determine if a person has
a good sense of place on a Web site or
| | 02:20 | application by observing
their navigational behavior.
| | 02:23 | If they make relatively quick, deliberate
navigational choices, and they rarely
| | 02:28 | back up, then the system is communicating
where they are, and setting appropriate
| | 02:32 | expectations about where each link,
button, icon, or tab will take them when
| | 02:36 | they click or tap on it.
| | 02:38 | If a person frequently uses the Back
button, and returns to a page or screen
| | 02:42 | repeatedly, or if they appear to
randomly choose links in a trial and error
| | 02:47 | fashion, then they do not
have a strong sense of place.
| | 02:49 | People often go back multiple steps, or
they go to the homepage or a landing page,
| | 02:54 | in order to find a
recognizable or familiar place.
| | 02:57 | They reorient themselves, then navigate
more deeply again, often on a different path.
| | 03:02 | Since they do not know where they
are, they go back until they do.
| | 03:07 | When people seem to be choosing links
randomly, or are clicking systematically on
| | 03:11 | every link in a regular pattern, they
don't know where they are, and they're
| | 03:15 | simply clicking links to
see where they might lead.
| | 03:18 | This is not casual browsing; this is
a person who has given up on having a
| | 03:22 | meaningful experience, and is now
simply hoping that they'll stumble into
| | 03:25 | something recognizable, meaningful, or useful.
| | 03:29 | As we mentioned just a few minutes ago,
many people arrive at a Web site via a
| | 03:33 | search engine, and they land
somewhere deep within the site,
| | 03:36 | so it's essential that they are able to
quickly identify where they are, and why
| | 03:40 | this location is relevant.
| | 03:42 | In fact, one way to test the accuracy
and relevance of a navigational system
| | 03:47 | is to simply drop someone onto a
random page in the site or application, and
| | 03:51 | ask, where are you?
| | 03:53 | How do you think you might have gotten or
navigated here, if you had started on the homepage?
| | 03:57 | Where do you think you can
go, or navigate to, from here?
| | 04:01 | If people are able to accurately
answer all three questions, then your
| | 04:05 | navigational system and all of its cues have
contributed to a strong scent of information.
| | 04:10 | When people have a strong sense of place,
they are more likely to complete their
| | 04:14 | task, and have a positive experience.
8. How People Respond to Images and MediaDefining sensation and perception| 00:00 | Sensation and perception are stages of
processing information from our sensory systems.
| | 00:05 | Sensation occurs when an external
event, called a stimulus, causes a biochemical
| | 00:10 | reaction in a sensory organ.
| | 00:12 | This information passes from the
sensory organs, such as the eyes, ears, and skin,
| | 00:17 | to the brain, where perception occurs.
| | 00:20 | Perception is the process of becoming aware
of, and assigning meaning to, sensory stimuli.
| | 00:25 | For example, an object, such
as a bicycle, reflects light.
| | 00:30 | That light enters the eye, and
causes a reaction in the retina.
| | 00:33 | The light becomes the stimulus.
| | 00:35 | The stimulus travels to the brain, where
it is analyzed and processed, and when we
| | 00:39 | find matching information in our
memory, we recognize the object, and assign
| | 00:43 | meaning to the perception: bicycle.
| | 00:46 | But there is a blurry line
between sensation and perception.
| | 00:50 | We can only become aware of
stimuli that we're able to sense.
| | 00:53 | For example, as humans, we cannot see
infrared or ultraviolet light with our
| | 00:58 | eyes, and not all of the stimuli that
generate a signal in our nervous system
| | 01:03 | actually get processed
for awareness and meaning.
| | 01:05 | A tremendous amount of sensory
information around us actually gets filtered out
| | 01:10 | before it even reaches the parts of the
brain that would process it for meaning.
| | 01:14 | Of all the information in the world
around us, we pay attention to a very
| | 01:17 | small, select amount.
| | 01:19 | As designers, we need to help
people focus their attention on what's
| | 01:23 | important, and we need to make it easier for
them to understand the meaning of that information.
| | 01:28 | Our senses are vision, hearing, taste,
smell, touch, and proprioception, which
| | 01:34 | is our internal sense of relative
position and movement; we use this to track
| | 01:39 | our own body parts.
| | 01:41 | But some of these are not yet
relevant to interface design.
| | 01:44 | We tend to focus on just three:
| | 01:46 | vision, because we look at interfaces;
| | 01:49 | hearing, because we listen to
audio feedback and spoken text;
| | 01:53 | and touch, because we feel vibrations in
the surface when we interact with
| | 01:57 | interfaces and devices.
| | 02:00 | Proprioception is becoming
increasingly important as a design consideration
| | 02:04 | because we're beginning to create
interfaces that leverage spatial gestures.
| | 02:09 | Many devices now have sensors that
detect motion, direction, rotation,
| | 02:14 | acceleration, and deceleration.
| | 02:16 | Computers can receive data about our
movement in much the same way our brains do,
| | 02:22 | and this gives us another way to
interact with information and the devices in
| | 02:26 | the world around us:
through gestures and motion.
| | 02:30 | Perception seems to be an effortless
process, because we do not feel like we're
| | 02:34 | thinking about everything we
sense in order to understand it.
| | 02:37 | The meaning is just present and
available to us, because most of the perceptual
| | 02:41 | processing occurs outside
of our conscious awareness,
| | 02:45 | but perception is not an
automatic, passive process.
| | 02:49 | It's influenced by our learning, experience,
memory, and expectations based on context.
| | 02:55 | Sensation and perception are both
bottom-up and top-down processes.
| | 03:01 | Sensory information is used to
build up or assemble meaningful objects.
| | 03:06 | This is a bottom-up process, where
meaning is constructed from sensory stimuli.
| | 03:11 | Perceptual processes find meaning by
disassembling and examining the parts of whole objects.
| | 03:17 | This is a top-down process, where
meaning is found by matching things from
| | 03:21 | our past experience.
| | 03:22 | For example, if I smell lemon, taste
sweetness, and feel creamy frosting on my
| | 03:28 | tongue, I may conclude that I'm
eating cake; a bottom-up process, where I
| | 03:32 | assemble individual sensory
experiences into a single meaning: cake.
| | 03:37 | And if I taste cake, hear singing, feel
happy, and see festive decorations, I may
| | 03:43 | conclude that I am at a birthday party;
a top-down process, where I examine each
| | 03:47 | object in context, and derive meaning
based on my past experience with parties.
| | 03:53 | How people respond to a device or
interface depends upon more than just how
| | 03:57 | it looks or sounds.
| | 03:59 | Their response is influenced by their
prior experiences and their expectations,
| | 04:03 | which is why we need to consider both;
how they sense the interface, as well as
| | 04:07 | the context and expectations.
| | 04:10 | What we want and expect to experience
can influence what we actually experience.
| | 04:15 | Still, there are very consistent ways
that people perceive the world, and we can
| | 04:20 | leverage these in our designs.
| | 04:22 | The Gestalt Laws -- remember proximity,
similarity, closure, and the others --
| | 04:27 | are perceptual processes we all
share, and which help us organize our
| | 04:31 | sensory experiences.
| | 04:33 | The world is a highly variable and
changing place, yet we perceive it with
| | 04:37 | remarkable stability and constancy.
| | 04:40 | Changes in lighting, distance, and
angle of vision do not change our ability
| | 04:44 | to recognize objects.
| | 04:45 | We have perceptual constancy for
shape, whiteness, color, distance, size,
| | 04:52 | location, and timbre, or the quality of sound.
| | 04:56 | We see the same color or
brightness even when the lighting changes.
| | 04:59 | For example, if the lighting changes
to red, and I hold this card, what color
| | 05:07 | does it look like to you?
| | 05:08 | But now if the lighting changes back to
white, you know that this card is white,
| | 05:15 | and you knew that even when you saw it
under red light, and even though it was
| | 05:20 | reflecting red light to your eye.
| | 05:23 | The perceptual constancy told
you that this color was white.
| | 05:30 | We understand that size and scale do not
change, even when distance to the object
| | 05:35 | changes, and we recognize the same
voices and sounds, even when there are other
| | 05:39 | sounds and noise present.
| | 05:41 | As long as we have enough sensory data
to assemble a mental representation of
| | 05:44 | the object, we can recognize it.
| | 05:47 | We take advantage of perceptual
constancy for shape and size when we use
| | 05:51 | different typefaces.
| | 05:52 | The shapes and sizes of the letters
vary greatly, yet we're still able to
| | 05:56 | quickly and accurately recognize them,
and understand the words they represent.
| | 06:01 | This is a top-down process, because we
take what we have learned about letter
| | 06:04 | shapes in our past experience,
and apply that to new objects.
| | 06:08 | The context of our perceptions involves
our current situation, and expectations,
| | 06:14 | but it also involves our
current sensory experience.
| | 06:16 | Our perceptions can be enhanced or
diminished based on the sensory information.
| | 06:21 | The same color can look different
when surrounded by different colors.
| | 06:25 | Do the smaller squares in these images
have the same or different colors in each?
| | 06:31 | This is an example of simultaneous
contrast, which affects how we perceive color.
| | 06:36 | Finally, when we interfere with
sensation and perception, we can generate
| | 06:41 | perceptual illusions.
| | 06:43 | Sometimes we can take advantage of this,
such as using camouflage to obscure the
| | 06:47 | boundaries of objects, but we usually
want to avoid creating illusions, because
| | 06:51 | it introduces inconsistency, unpredictability,
and ambiguity into the experience.
| | 06:57 | Although we cannot control the way
someone senses and perceives their world, we
| | 07:01 | can craft designs that make it easier
for people to understand information, and
| | 07:06 | how to interact with interfaces and devices.
| How people respond to color| 00:00 | Have you ever wondered why there are so
many yellow buttons, or why we often use
| | 00:05 | red, orange, and yellow to attract attention?
| | 00:07 | Even though there are sociocultural
meanings associated with colors, such as
| | 00:11 | red means danger or stop, there are
also physiological reasons why yellow and
| | 00:16 | red attract our attention.
| | 00:18 | Our eyes are actually more sensitive to
yellow, orange, and red than to any other colors.
| | 00:23 | We all know that computer monitors
create colors by mixing different ratios of
| | 00:27 | red, blue, and green light.
| | 00:29 | Well, the human eye is sensitive to color
with a similar system of color receptors.
| | 00:34 | This is described by the trichromatic theory.
| | 00:38 | We have light-sensitive cells in
our eyes that detect red, green, and blue
| | 00:42 | light, and our brain combines the
different ratios of incoming light into all of
| | 00:47 | the colors we see in the world around us.
| | 00:50 | But why are we more sensitive to yellows
and reds? Because the numbers of red, green,
| | 00:55 | and blue cells are not equal.
| | 00:57 | About 64% of the cells are
sensitive to red, about 26% of the cells are
| | 01:03 | sensitive to green, and about 10%,
sometimes even less, are sensitive to blue.
| | 01:09 | Since nearly two thirds of the color
sensitive cells in our eyes are sensitive
| | 01:13 | to red, it should be no surprise that
we tend to notice red things, but it's
| | 01:17 | not just the number of color sensitive
cells that determines what we pay attention to.
| | 01:22 | Each type of cell is actually
sensitive to a range of colors wider than pure
| | 01:26 | red, pure green, and pure blue.
| | 01:29 | If we look at the range of colors, and
the peak sensitivity for each type of
| | 01:33 | cell, we quickly notice that cells
sensitive to blue light are not only the
| | 01:37 | smallest percentage of cells, but
they also have the least overlap in
| | 01:41 | sensitivity with the other cells.
| | 01:44 | We're physically less sensitive to blue,
purple, and indigo than to any other colors.
| | 01:49 | The green and red sensitive cells
actually have peak sensitivities very close to
| | 01:55 | one another, and they overlap
significantly in the yellow and orange range.
| | 02:00 | This means that 90% of the color
sensitive cells in our eyes are sensitive to a
| | 02:05 | common range of colors: yellows, and oranges.
| | 02:10 | Our attention is drawn to yellow and
orange, because those colors produce more
| | 02:14 | activity in color sensitive cells in
the eye and brain than any other colors.
| | 02:19 | Since we are most sensitive to yellows,
oranges, and reds, and since red has
| | 02:23 | sociocultural meaning in the West for
danger or to stop, it makes sense that we
| | 02:27 | would use these colors to attract
attention for warning and error messages.
| | 02:32 | However, we need to be careful when
using bright yellows and oranges, because
| | 02:36 | they can also cause visual fatigue.
| | 02:38 | And when everything is yellow or orange,
it can be more difficult to direct a
| | 02:42 | person's attention where we want them to look.
| | 02:45 | We can use yellow, orange, or red buttons
or icons on fields of weaker colors to
| | 02:50 | focus attention on key interaction
opportunities, but we don't have to use
| | 02:55 | yellow. We can also rely on color
differences to stand out, so a lone green
| | 02:59 | button can also draw attention.
| | 03:02 | On monochromatic sites with little color
variation, we need to actively scan and
| | 03:07 | process what we see before we
understand where we can interact.
| | 03:11 | Color theory from graphic design
can help us identify colors and color
| | 03:15 | combinations that can
effectively guide and focus attention.
| | 03:20 | For more on this topic, check out the
design courses on color here at Lynda.com.
| | 03:25 | We also need to remember that as
many as 10% of people -- far more men than
| | 03:30 | women -- have some form of color
deficient vision, and have difficulty
| | 03:34 | distinguishing colors.
| | 03:36 | The most common form of color
deficient vision involves the inability to
| | 03:40 | accurately distinguish between red and green.
| | 03:43 | A less common form of color
deficient vision involves the inability to
| | 03:46 | distinguish between yellow and blue, and
a very rare form involves the inability
| | 03:51 | to perceive any color.
| | 03:53 | When designing for color deficient
vision, we need to make certain that we
| | 03:56 | are not relying solely on color cues, and
that important colors can be differentiated.
| | 04:02 | For example, do not just use red and
green fill colors to convey information;
| | 04:07 | also use meaningful text labels, or even use
distinct shapes to make it easier to understand.
| | 04:14 | Also, choose colors that are not pure.
| | 04:16 | For example, rather than a pure red,
add a little yellow, and rather than pure
| | 04:21 | green, add a little blue.
| | 04:23 | Even minor color adjustments can make
a perceptual difference for people with
| | 04:28 | color deficient vision.
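Here is a small TypeScript sketch of that advice: each status pairs a slightly impure color with a distinct shape and a text label, so the meaning survives even when the colors cannot be told apart. The specific statuses, hex values, and shapes are illustrative assumptions.

```typescript
// Sketch: a status indicator that never relies on color alone.
// Status names, colors, and shapes are assumptions for illustration.
type Status = "ok" | "warning" | "error";

const statusStyles: Record<Status, { color: string; shape: string; label: string }> = {
  ok:      { color: "#2e7d32", shape: "●", label: "OK" },      // green with a little blue
  warning: { color: "#f9a825", shape: "▲", label: "Warning" }, // yellow-orange
  error:   { color: "#c62828", shape: "■", label: "Error" },   // red with a little yellow
};

function renderStatus(status: Status): HTMLSpanElement {
  const { color, shape, label } = statusStyles[status];
  const span = document.createElement("span");
  span.style.color = color;
  // The distinct shape and the text label carry the meaning even if
  // the colors cannot be distinguished.
  span.textContent = `${shape} ${label}`;
  return span;
}

document.body.appendChild(renderStatus("warning"));
```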
| | 04:30 | Finally, we need to be careful about
using certain color combinations, not
| | 04:34 | because they're aesthetically unpleasant,
although they may be, but because they
| | 04:39 | can lead to chromostereopsis, a
visual illusion of depth caused by specific
| | 04:44 | adjacent colors, usually
red and blue, or red and green.
| | 04:48 | When these colors are side by side, the
edges or boundaries between them can
| | 04:52 | appear to vibrate or oscillate
between foreground and background.
| | 04:56 | Chromostereopsis makes it difficult
to discern a clear edge, and it can also
| | 05:01 | lead to visual fatigue.
| | 05:03 | Choosing colors is an important
aesthetic decision for designers, but we should
| | 05:07 | also choose colors that help make it
easier for people to direct their attention,
| | 05:11 | and which support the information.
| How people respond to motion| 00:00 | One of the most effective ways to
attract someone's attention is with motion.
| | 00:05 | Movement captures our attention very
effectively, and can be used to quickly
| | 00:09 | direct the eye's gaze to a specific location.
| | 00:12 | Our brains are wired to detect change.
Think about the number of times that a
| | 00:16 | small movement in your peripheral vision
caused you to reflexively turn and look.
| | 00:21 | Nearly all organisms have this
sensitivity to motion, because it's advantageous
| | 00:25 | to pay attention to things
that move in our environment.
| | 00:29 | Is that predator coming to eat me?
| | 00:31 | We can use motion when designing
interfaces to do three things:
| | 00:35 | motion can direct attention. When we
want people to pay attention to something,
| | 00:40 | we can use movements, such as rotating or
shaking, pulsing, or zooming in and out.
| | 00:45 | Even color or brightness that fades in
or out can draw the eye to that part of
| | 00:50 | the screen, where there is important information.
| | 00:53 | Motion can be important when hiding
and revealing information. When we want
| | 00:58 | to communicate what information
currently in view is being hidden, but is
| | 01:02 | still readily available,
| | 01:03 | we can use sliding or zooming motion to
show where the information may be found
| | 01:08 | when it is needed again.
| | 01:10 | Hide and show functionality typically
uses the same type of motion, but in
| | 01:14 | opposite directions.
| | 01:15 | Information may collapse upward when
hidden, and expand downward when revealed.
| | 01:20 | Motion can make connections or associations.
| | 01:23 | When we want to show that information
is related, grouped, or being moved to a
| | 01:27 | new place or category, we can use
motion to show that information moved from
| | 01:32 | one place to another.
| | 01:34 | Drag and drop is a form of
movement performed by the person.
| | 01:37 | If we want to change the location of
something like a photo, we can drag it into
| | 01:41 | a new position or album.
| | 01:43 | We can also use sliding, zoom, and
fade transitions to show where a piece of
| | 01:48 | information has been moved, and
with what it's now associated.
| | 01:52 | Unfortunately, movement
can also lead to distraction.
| | 01:56 | Ads with animation or video are
probably the most common example of motion
| | 02:00 | causing distraction.
| | 02:02 | Most of us have had the experience of
trying to read an article online, and
| | 02:06 | having our gaze reflexively redirected
by ads elsewhere on the screen, even when
| | 02:11 | we're trying very hard to
maintain our focus and attention.
| | 02:15 | If we are distracted often
enough, we become inefficient, and
| | 02:18 | possibly frustrated.
| | 02:20 | We may even abandon what we are doing.
| | 02:22 | Banner ad creators may like this
technique to draw attention, but from the
| | 02:26 | perspective of the person viewing
the screen or page, this is often an
| | 02:30 | unwanted distraction.
| | 02:32 | Use motion carefully, because it's
easy to distract and detract from the
| | 02:36 | experience rather than
attract, focus, and enhance.
| | 02:41 | One of the reasons why motion can be
effective to direct attention is because
| | 02:45 | the movement occurs all
on the same visible screen.
| | 02:48 | We never lose view of the current information.
| | 02:51 | Items on the screen simply move to a new
location, or move out of view to a new place.
| | 02:56 | The persistent context of the screen
makes it easier to understand what the
| | 03:00 | motion communicates.
| | 03:02 | However, when information changes or
moves between subsequent views of the same
| | 03:06 | page or screen, we may not notice it.
| | 03:09 | In other words, if we don't see the movement,
or see the change happen, we don't notice it.
| | 03:14 | This is called change blindness, and it
happens when we see two successive views
| | 03:19 | of the same screen, with a slight pause
and a blank screen between them, which is
| | 03:24 | common on the Web when we
have a full page refresh.
| | 03:28 | In this example, we see two versions of
the same screen, with a brief pause and
| | 03:33 | blank screen between them, and it's
very difficult to notice what changes.
| | 03:38 | Many Web sites have this problem when
they refresh a page and change a small
| | 03:41 | part of the content.
| | 03:43 | The first view of the page and the
second view of the page are separated by a
| | 03:47 | brief pause, and a blank browser canvas.
| | 03:50 | If the second view of the page
doesn't obviously highlight the changed
| | 03:54 | information for us, we must spend more
time and effort reviewing the updated
| | 03:58 | page, and comparing the new version to
our memory of the previous version to try
| | 04:03 | and identify what's changed.
| | 04:06 | We may not even notice the difference,
yet we've expended a large amount of cognitive effort.
| | 04:11 | If we can avoid a full screen refresh
with a blank pause between the views, then
| | 04:16 | any changes to the content
will be much easier to detect.
| | 04:21 | This is a strength of rich Internet
applications that update only the content on
| | 04:25 | the screen that needs to be changed.
| | 04:27 | When everything else remains the same,
the information or objects that change
| | 04:31 | will draw attention, the same
way motion attracts attention.
| | 04:37 | If we must refresh the entire screen,
and risk showing a blank pause between the
| | 04:41 | views, then we need to add extra
cues to the design to ensure that people
| | 04:45 | will notice that change.
| | 04:47 | Use a different color, or other visual
indicators, such as sticky tool tips, or
| | 04:51 | overlays, to highlight what has changed,
and use a timed fade to remove these
| | 04:56 | cues and indicators after a few seconds,
so that they don't distract the person
| | 05:01 | as they continue to interact.
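As a brief sketch of that technique: after the new view loads, the changed element gets a highlight that fades away after a few seconds. The selector, the color, and the timing below are assumptions for illustration.

```typescript
// Sketch: after a full-page refresh, highlight what changed, then remove
// the cue with a timed fade so it does not keep distracting the person.
function highlightChange(element: HTMLElement, holdMs = 3000): void {
  element.style.transition = "background-color 1s ease";
  element.style.backgroundColor = "#fff3b0"; // soft yellow draws the eye

  // After a short pause, fade the cue away.
  window.setTimeout(() => {
    element.style.backgroundColor = "";
  }, holdMs);
}

// Hypothetical element that was updated by the page refresh.
const updated = document.querySelector<HTMLElement>(".updated-total");
if (updated) {
  highlightChange(updated);
}
```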
| | 05:03 | Finally, we can't talk about
motion without mentioning video.
| | 05:07 | Video has become an important part
of the experiences on the Web, as we
| | 05:10 | watch more news, television, movies, and
user generated video on our connected devices.
| | 05:16 | Video on a page or screen attracts our
attention like any other motion, but it
| | 05:21 | also conveys a tremendous amount of
information, and it's often the most
| | 05:25 | important content on the page.
| | 05:27 | Delivering video involves both
design and technology considerations.
| | 05:32 | On the design side, we need to include
usable media player controls, and avoid
| | 05:36 | autoplay as much as possible, because
people like to have an internal locus of
| | 05:41 | control, and want to be in
charge of their media experience.
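As a tiny sketch of that guidance, assuming a page with a single video element:

```typescript
// Sketch: give the person control over playback instead of autoplaying.
const video = document.querySelector<HTMLVideoElement>("video");
if (video) {
  video.controls = true;  // show usable player controls
  video.autoplay = false; // do not start playback automatically
  video.removeAttribute("autoplay");
}
```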
| | 05:45 | Although motion and video may present
a few additional design and technology
| | 05:49 | challenges, they are important types
of information, and we can design usable,
| | 05:53 | effective, and pleasing experiences by
applying the same design principles and
| | 05:58 | best practices that
apply to any type of content.
| Establishing visual hierarchy| 00:00 | We know we can influence where
people direct their attention, and how they
| | 00:04 | perceive a Web site or application.
| | 00:06 | We're more sensitive to certain
colors, and motion, and we're susceptible to
| | 00:09 | perceptual illusions, and ambiguity.
| | 00:12 | But we can take what we know about
sensation and perception, and use that to
| | 00:16 | establish a visual
hierarchy on the page or screen.
| | 00:19 | Although this really isn't a course
about visual design, many of these
| | 00:23 | techniques also apply to interaction design.
| | 00:26 | Value and contrast can attract
attention, and establish importance.
| | 00:31 | Content that is darker, and has
higher contrast with what surrounds it,
| | 00:35 | will attract attention.
| | 00:36 | Large, bold, dark font faces for headers
are effective, because they combine size
| | 00:42 | and value to draw our attention.
| | 00:44 | Color can also draw attention; orange
and yellow buttons attract attention,
| | 00:48 | because we're very sensitive to those colors.
| | 00:51 | Text links with a different color
from the text around them stand out and
| | 00:54 | attract attention, because they're different.
| | 00:57 | Whitespace, or negative space,
can help highlight information.
| | 01:01 | A bit of content surrounded by
generous whitespace will stand out and draw
| | 01:05 | attention, because it's clearly
separate from the remaining content.
| | 01:10 | Motion draws attention reflexively,
because we're sensitive to change.
| | 01:14 | When something moves in our
environment, it's difficult to not look at it.
| | 01:19 | Finally, images, especially photos
with faces, and graphics, such as charts,
| | 01:24 | icons, and illustrations, draw
attention before blocks of text.
| | 01:29 | Our eyes are nearly always
drawn to pictures before words.
| | 01:33 | It's possible to guide attention, and
the eye, by carefully selecting, positioning,
| | 01:38 | and displaying information.
| | 01:40 | We can make it more likely that people
will sense, perceive, and understand the
| | 01:44 | content on a page or screen with good design.
9. Shaping Thinking and Decisions through DesignDefining cognition| 00:00 | Cognition refers to a wide range of
our mental processes, and what we would
| | 00:04 | often refer to as thinking.
| | 00:06 | It's a broad field, and includes many
topics, such as attention and memory,
| | 00:10 | reasoning and logic, and language.
| | 00:12 | Much of what is studied in the field
of cognition is relevant to our work as
| | 00:15 | designers, because it helps us
understand how we make sense and meaning of
| | 00:19 | the world around us,
| | 00:20 | solve problems, make
decisions, and even communicate.
| | 00:23 | We've already discussed perception and
attention, showing how color, motion, and
| | 00:27 | other characteristics can be used to
direct or attract attention, and why they
| | 00:31 | can also be distracting.
| | 00:33 | Memory is a deep topic.
| | 00:35 | Psychologists study how we remember
visual and verbal information, how long
| | 00:39 | we're able to remember it, what memory
cues help us remember, and why we forget.
| | 00:45 | Memory is something we often take for granted.
| | 00:47 | We expect it to work for us, and
we only notice it when it fails.
| | 00:51 | We increasingly depend upon our
computers and smart phones to help us store
| | 00:55 | information, and make it
available to us when we need it.
| | 00:58 | Our devices have become extensions of
our own memory, so it's important that we
| | 01:03 | design interfaces that make it easy
for people to enter, find, and retrieve
| | 01:07 | important information when they need it.
| | 01:10 | Language is another complex topic.
| | 01:12 | It's more than an interface being
presented in English, Spanish, or Chinese.
| | 01:16 | Language is central to
nearly all of our communication.
| | 01:19 | It makes it possible for us to transfer
information from one person to another.
| | 01:25 | The words we choose influence what
people understand, predict, and expect.
| | 01:29 | Written and verbal communication is
at the center of nearly all social
| | 01:33 | interactions. When designing, we often
need to craft interfaces that can adjust
| | 01:38 | to be displayed in different languages,
but we also need to carefully choose the
| | 01:41 | labels and create content that
will convey information accurately.
| | 01:46 | Not all of the information we remember
and think about is in the form of words;
| | 01:50 | we also use mental imagery, or
visualizations in our mind's eye.
| | 01:54 | We can see things in our mind, and think
about them as objects, and even manipulate
| | 01:59 | them, such as rotating, changing the
angle of view, or even changing color.
| | 02:04 | For example, imagine a capital
letter D, rotated 90 degrees to the left, or
| | 02:10 | counterclockwise, and place the
capital J centered just below it.
| | 02:15 | What do you see in your mind? An umbrella.
| | 02:21 | We can create completely new mental images of
things we've never before seen or experienced.
| | 02:26 | Spaceships, and monsters, and the world's
largest pizza, but we also have mental
| | 02:30 | images of things we've
seen, and which are familiar.
| | 02:33 | Our memory for images is excellent;
actually better than our memory for words.
| | 02:38 | Images can be used to represent
information, to make it easier to understand, and
| | 02:42 | to make it easier to remember.
| | 02:44 | We don't just perceive and remember
the world around us; we also strive to
| | 02:48 | understand it, and make sense of how it works.
| | 02:51 | Mental models are our thoughts and
expectations about how things work in the
| | 02:55 | real world, and they influence how we
behave, solve problems, perform tasks, and
| | 03:00 | they help us decide what to do.
| | 03:02 | Concept formation is
closely related to mental models;
| | 03:06 | we take our specific experiences,
break them down into meaningful pieces, and
| | 03:10 | sort them into general rules and groups.
| | 03:12 | For example, we have concepts about what a
car is, how it works, and how to drive it.
| | 03:19 | We can get behind the wheel of almost
any type of car, and successfully operate
| | 03:22 | it, even though we may have
never driven that type of car before.
| | 03:26 | When we encounter a new situation,
such as an electric car, we look for
| | 03:30 | similarities to our previous
experiences, and we make decisions about how to
| | 03:34 | operate it based on our mental models.
| | 03:37 | The car has no gas engine, and may not
even have a key for the ignition, but it
| | 03:41 | still has a steering wheel,
accelerator, and brakes.
| | 03:44 | So with a little exploration, and
experimentation, we're able to drive this new
| | 03:48 | type of car successfully too.
| | 03:50 | We should consider people's mental
models and existing concepts when designing
| | 03:55 | new interfaces and devices to make it
easier for them to understand it, use it,
| | 04:00 | and gain benefit from it.
| | 04:02 | An essential part of human cognition
is our ability to identify patterns and
| | 04:07 | make connections or associations.
| | 04:10 | Each experience is not an isolated incident;
| | 04:12 | we continuously look for similarities
that relate our current experiences to
| | 04:17 | our past experiences.
| | 04:19 | We are excellent at finding trends and
patterns in the information around us,
| | 04:23 | and recognizing when we have seen a
pattern or similar pattern before.
| | 04:27 | We even see patterns and attribute
meaning when there may actually be none.
| | 04:33 | Associations are the connections between
pieces of information, ideas, and experiences.
| | 04:38 | Meaning arises from the connections we
make, and our understanding of something
| | 04:42 | gets better as the
associations grow in number and strength.
| | 04:46 | Like patterns, we
continuously look for these connections.
| | 04:49 | We build a library of memories, mental
images, mental models, patterns, and the
| | 04:54 | connections between them through our
entire life, and this is how we find
| | 04:58 | meaning, and understand the world around us.
| | 05:01 | And we use all of this for our
powers of reasoning and logic.
| | 05:04 | We make decisions and solve
problems based on what we know,
| | 05:08 | our understanding of the world, the
connections among information and ideas, and
| | 05:13 | our expectations of what's likely to
happen as a consequence of our actions.
| | 05:17 | Everything we've ever done, seen,
heard, experienced, learned, and even
| | 05:21 | imagined, contributes to our
future behavior; what will we do next?
| | 05:26 | So we're not just designing for the
specific moment of interaction; we need to
| | 05:31 | consider what information and
experience people bring with them, their
| | 05:35 | understanding of their past
experiences, and of their current situation, their
| | 05:39 | needs, and their
expectations of what will happen.
| | 05:42 | When we design for this larger context,
we can craft interfaces and experiences
| | 05:47 | that fit their concepts, match their
mental models, and meet their needs.
| Cognitive biases| 00:00 | Unfortunately, cognition is not
perfect, and we do not always arrive at the
| | 00:04 | correct meaning or understanding.
| | 00:06 | We actually have cognitive biases
that affect our decision making and
| | 00:10 | problem solving, so we don't always choose
the ideal outcome, or act in the optimal way.
| | 00:16 | A cognitive bias is a pattern
of poor analysis and judgment.
| | 00:19 | They're innate; simply part of our
mental processes, and even when we know
| | 00:24 | they exist, they're difficult to avoid.
| | 00:27 | These biases may occur because we're
using a good cognitive process, but at the
| | 00:32 | wrong time, or in the wrong situation.
| | 00:35 | There may be survival benefits to acting
quickly rather than making certain we're correct.
| | 00:40 | There may be perceptual distortions,
such as strong emotions that change how
| | 00:44 | we allocate attention,
| | 00:46 | and we may simply lack the time and
ability to understand and act appropriately.
| | 00:50 | Just as there are many aspects to
cognition, there are also many cognitive
| | 00:55 | biases, but we'll focus our discussion
on only a few that are most relevant to
| | 00:59 | interaction design.
| | 01:00 | Some of the cognitive biases that are
most relevant to our work include the
| | 01:04 | framing effect, which explains why
the way information is presented
| | 01:08 | influences how we understand it, and
the decisions we make. We'll talk more
| | 01:12 | about this effect later.
| | 01:14 | The isolation effect tells us that
things which are unique or stand out from
| | 01:19 | others are more likely to be
remembered, and may be considered more important.
| | 01:23 | The mere exposure effect explains
our tendency to like things simply
| | 01:28 | because they are familiar.
| | 01:30 | There are dozens of cognitive biases
and effects. Although we only have time
| | 01:34 | to mention a few here,
| | 01:35 | many are relevant to interaction
design, and there are books and resources
| | 01:38 | available from which you can
learn more if you're interested.
| | 01:42 | We can interfere with accurate
cognition, and manipulate attention, memory,
| | 01:47 | understanding, and decision making.
| | 01:49 | We've already discussed how we can
direct attention with color, how dark
| | 01:53 | patterns can maliciously mislead
people, and influence them to make bad
| | 01:57 | decisions, and how ambiguous labels and
content can lead to confusion and poor choices.
| | 02:03 | We should know about cognitive biases,
not because we want to manipulate people,
| | 02:07 | but because we can identify situations
where the biases may occur, and attempt to
| | 02:12 | create designs that minimize them.
| | 02:15 | If we know that people tend to fixate
on a single piece of information, and that
| | 02:19 | more recent information is considered
more important, then we should be able to
| | 02:23 | design interfaces that make it
easier to make better decisions.
| | 02:27 | Although we can make connections
between interaction design, and the many
| | 02:31 | facets of cognitive psychology, there
are a few topics that are particularly
| | 02:34 | relevant to our work.
| | 02:36 | Let's start by focusing on
representation of information, framing, mental models,
| | 02:42 | and the concept of cognitive load.
| Communicating with labels and icons| 00:00 | With digital interfaces, we need to
communicate efficiently, because we have a
| | 00:04 | limited amount of space available on
the screen, so we need to carefully choose
| | 00:08 | labels, and icons, and where they're placed.
| | 00:10 | Labels include navigation links, button text,
image captions, and headers, and subheaders.
| | 00:16 | They communicate factual information,
reflect the classification of that
| | 00:20 | information, and reveal the
organization of the Web site or application.
| | 00:24 | Images and icons resemble, or portray the
information and functionality they represent.
| | 00:30 | We rely on our memory, language,
pattern recognition, and associations to
| | 00:34 | understand what we see and perceive.
| | 00:37 | The navigation on a Web site is often
hierarchical, and how the links and labels
are presented influences how
we understand that hierarchy.
| | 00:45 | Two common techniques for representing
this structure are indenting, and grouping.
| | 00:52 | Grouping related items, and simply
indenting them visually communicates the hierarchy.
| | 00:57 | Labels and links that are indented below
another are perceived as subordinate to it.
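To illustrate grouping and indenting in code, here is a hedged TypeScript sketch that renders a nested navigation list, where indentation signals which labels are subordinate. The labels, URLs, and indentation amount are assumptions.

```typescript
// Sketch: render hierarchical navigation where grouping and indenting
// communicate subordination. The data below is illustrative only.
interface NavItem { label: string; href: string; children?: NavItem[]; }

const nav: NavItem[] = [
  {
    label: "Products",
    href: "/products",
    children: [
      { label: "Cameras", href: "/products/cameras" },
      { label: "Lenses", href: "/products/lenses" },
    ],
  },
  { label: "Support", href: "/support" },
];

function renderNav(items: NavItem[], depth = 0): HTMLUListElement {
  const list = document.createElement("ul");
  for (const item of items) {
    const li = document.createElement("li");
    li.style.marginLeft = `${depth}rem`; // indentation shows subordination
    const link = document.createElement("a");
    link.href = item.href;
    link.textContent = item.label;
    li.appendChild(link);
    if (item.children) {
      li.appendChild(renderNav(item.children, depth + 1)); // grouped under parent
    }
    list.appendChild(li);
  }
  return list;
}

document.body.appendChild(renderNav(nav));
```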
| | 01:02 | The text labels we use for the
navigation, links, and on tabs and buttons
| | 01:07 | should be meaningful and unambiguous,
and we should use active, direct
| | 01:11 | verbs when possible.
| | 01:13 | They should help people accurately
predict what will happen, or where they will
| | 01:17 | go, if they were to click or tap on it.
| | 01:19 | For example, OK and Cancel
are notoriously ambiguous.
| | 01:24 | So use labels that directly
correspond to what people are accepting or
| | 01:29 | rejecting, such as "Yes, cancel my
account" and "No, I want to keep my account."
| | 01:36 | The labels should also be consistent
throughout the site or application.
| | 01:40 | Do not change the labels from page to
page when they go to the same place, or
| | 01:44 | perform the same function, because when
labels change, people think that the link
| | 01:49 | or function has changed.
| | 01:51 | One of the challenges when
creating a navigation system is choosing
| | 01:55 | categories with minimal overlap, and then
assigning clear, unambiguous, and predictive labels.
| | 02:01 | If people cannot determine which
section of a Web site contains the information
| | 02:06 | they seek, then there is likely to be a
problem with either the underlying
| | 02:10 | categories, such as too much overlap in
the content, because they're not exclusive,
| | 02:15 | or there's a problem with the labels;
ambiguous or vague labels that make it
| | 02:19 | difficult to predict where
information may be found.
| | 02:22 | Not all navigation relies on text labels;
| | 02:25 | we also use small images, called thumbnails,
and icons as links to content and functionality.
| | 02:32 | Like text labels, icons and images
need to be recognizable, and meaningful.
| | 02:37 | One way to help people learn what icons
represent is to use labels and tool
| | 02:41 | tips in conjunction with them.
| | 02:43 | Some Web applications give people the
choice of displaying labels and tool tips.
| | 02:48 | This is helpful for novices who are
learning the tool, but expert users no longer
| | 02:52 | need that assistance, and may be able
to focus, and work more effectively, by
| | 02:56 | turning off the labels and tool tips.
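Here is a short sketch of offering that choice, assuming icon labels carry a hypothetical icon-label class and the preference is stored under a made-up localStorage key.

```typescript
// Sketch: let people choose whether icon labels are shown, so novices
// can keep them on while experts can turn them off. The class name and
// storage key are illustrative assumptions.
function setIconLabelsVisible(visible: boolean): void {
  document.querySelectorAll<HTMLElement>(".icon-label").forEach((label) => {
    label.hidden = !visible;
  });
  // Remember the preference between visits.
  localStorage.setItem("showIconLabels", String(visible));
}

// Restore the saved preference, defaulting to visible for new users.
const saved = localStorage.getItem("showIconLabels");
setIconLabelsVisible(saved === null ? true : saved === "true");
```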
| | 02:59 | While images, which are often photographs or
detailed illustrations, tend to be larger,
| | 03:04 | icons tend to be smaller and more symbolic.
| | 03:07 | Icons need to be identifiable, and
unambiguously represent their content or function.
| | 03:13 | Icons are metaphors;
| | 03:15 | the image represents the information or action.
| | 03:18 | Some icons are easily understood,
because they correspond directly to what we
| | 03:22 | know, and have experienced.
| | 03:24 | For example, roadway signs visually warn
us of falling rocks, slippery surfaces,
| | 03:29 | and twisty corners, but some icons
have learned associations, because they are
| | 03:34 | abstract, arbitrary, or have lost
their association with real experiences.
| | 03:39 | For example, biohazard and
radioactive symbols are abstract and arbitrary.
| | 03:44 | We must actively learn what they mean,
and make the association between the
| | 03:48 | symbol, and what it represents.
| | 03:50 | The Save icon in many software tools
still looks like a floppy disk, even though
| | 03:55 | floppy disks have mostly
disappeared, and we increasingly rely on saving
| | 03:59 | information to the cloud.
| | 04:01 | This icon has lost its connection to
the real world for many people, and so it
| | 04:05 | must be learned as if it
were a more abstract symbol.
| | 04:09 | When an icon is abstract or arbitrary,
it's important to use labels, and provide
| | 04:14 | context to facilitate
understanding, and set the correct expectations.
| | 04:18 | Text labels for icons also need to
be clear and unambiguous to avoid any
| | 04:22 | confusion, misunderstanding, and
incorrect choices or decisions.
| | 04:27 | So, like text labels, images and
icons are easiest to learn when they're
| | 04:31 | meaningful and recognizable, and once
they're learned, they provide a quick and
| | 04:35 | efficient way to access
content and functionality.
| Framing choices| 00:00 | How information is presented influences
how we process it, how we make meaning
| | 00:05 | of it, and how we use it to
make decisions and solve problems.
| | 00:08 | This is called the framing effect,
and it's one of the cognitive biases we
| | 00:12 | mentioned just a moment ago.
| | 00:14 | A famous example of the framing effect
simulated how people on juries understand
| | 00:19 | and make decisions about
testimony they hear in court.
| | 00:22 | Two groups of people were shown the
same photo of a car accident, but given
| | 00:26 | different verbal descriptions.
| | 00:28 | The red car bumped into the blue car,
versus the red car smashed into the blue car.
| | 00:33 | People came to different conclusions
about the severity and fault of the
| | 00:37 | accident, based on the description they heard.
| | 00:40 | The words bumped and smashed were
used to frame the same photo in two
| | 00:45 | very different ways, which led
people to think about, and remember the
| | 00:49 | photos differently.
| | 00:50 | People who heard bumped described
the accident as minor, but people who heard
| | 00:55 | smashed described the accident as severe.
| | 00:59 | Remember, they saw the same photo,
and all other details were identical.
| | 01:04 | One word changed how they thought
about, and remembered the accident.
| | 01:09 | The way we label things, the
location and placement of content and
| | 01:12 | functionality, and the images and
information we include, all influence how
| | 01:17 | people understand it, and the choices they make.
| | 01:20 | Edward Tufte famously explained how
the data and charts about the temperature
| | 01:25 | and the safety of the fuel systems
of the space shuttle Challenger were
| | 01:28 | presented in a way that led expert
rocket scientists to make the wrong decision,
| | 01:33 | and launch when they should not have.
| | 01:36 | Although most Web sites and mobile
applications are not as mission critical as
| | 01:41 | an interface for NASA, we still
need to ensure that information and
| | 01:44 | functionality are presented in ways
that do not result in misunderstanding, and
| | 01:49 | incorrect expectations or predictions.
| | 01:52 | Clear, unambiguous labels, active verbs,
and information in logical groups all
| | 01:57 | enhance understanding, and improve
decision making, but framing effects involve
| | 02:01 | more than just labels.
| | 02:02 | They occur from the combination of the
situation, information, and the choices available.
| | 02:08 | These effects are not simple, and they
don't depend on a single design element, so
| | 02:12 | they can be hard to detect.
| | 02:14 | For example, if I've logged in to my
brokerage to check on my retirement
| | 02:18 | account, and I see that the stock
market is down that day, that one of my
| | 02:22 | mutual funds lost value, and the first
action I see is to sell that fund, I may
| | 02:27 | be more likely to sell it.
| | 02:29 | If I don't see or realize that the
stock market is only down slightly for the
| | 02:33 | day, but is up for the year, and that
my mutual fund has still gained value
| | 02:37 | overall, then I may act based on what I see.
| | 02:41 | My decision to sell is framed by the
negative information about losses, and
| | 02:45 | seeing the option to sell.
| | 02:47 | This choice makes sense
given the context and content.
| | 02:51 | However, I may make a different
decision if I had been shown the market
| | 02:55 | trend over time, the mutual fund
value over time, and if there were
| | 02:59 | multiple actions available.
| | 03:02 | We cannot always foresee when framing
effects may occur, because context and
| | 03:06 | content change over time, and
from person to person, but as long as we
| | 03:10 | remember that our understanding of
information, and the decisions we make
| | 03:14 | are relative, then we can strive
to design for more clarity, and more
| | 03:18 | objective meaning.
| Mental models| 00:00 | A mental model is a person's
internal representation of external reality;
| | 00:05 | our understanding of how things
work in the real world, and the
| | 00:08 | relationships among them.
| | 00:10 | We all have expectations based on our
learning and experience, and we bring
| | 00:13 | these mental models with us to every
situation, problem, and interaction.
| | 00:19 | Mental models have a structure that
closely matches what they represent.
| | 00:22 | They help us make predictions about
what will happen next, and they're simpler
| | 00:26 | than the real thing they represent,
because they model the ordinary or typical
| | 00:31 | thing, not every possible variation.
| | 00:34 | You have in your mind a concept of a
car, but it doesn't include detail about
| | 00:38 | every car you've ever seen.
| | 00:40 | This generic concept is a mental model.
| | 00:43 | People plan their actions and make
decisions based on their mental models.
| | 00:47 | If their mental model closely matches
the actual behavior of the device or
| | 00:51 | interface, then people make accurate
predictions, and correct decisions, and
| | 00:56 | choose appropriate actions.
| | 00:57 | But mental models are only part of the story.
| | 01:00 | There are two other types
of models we need to discuss.
| | 01:03 | The mental model is in the mind of the
person using the interface or device, but
| | 01:08 | a conceptual model describes
how the interface actually works.
| | 01:13 | As designers, our mental models can
influence the design process, so the
| | 01:18 | conceptual model is a combination of
design, and the technology of the device.
| | 01:23 | The system model is how the interface
or device actually works on the inside.
| | 01:28 | For example, you have a mental
model for how an address book works.
| | 01:32 | You enter names alphabetically, and
write down addresses and telephone numbers.
| | 01:36 | If we design a digital address book,
we could create an interface that allows
| | 01:40 | people to enter information, search it,
sort it, and maybe even share copies of it.
| | 01:45 | Some of these actions could never be
done with an old-fashioned paper address
| | 01:49 | book, but the conceptual model
for the digital version allows it.
| | 01:53 | The system model describes the
structure of the database, the search and sort
| | 01:58 | algorithms, and the protocols for
transmitting and receiving information.
| | 02:02 | The system model should
be invisible to the person.
| | 02:05 | We rarely need to know how something
works on the inside in order to use it.
| | 02:10 | Most of us drive very effectively, yet we
couldn't personally repair our car engines.
| | 02:15 | As designers, we need to know about the
system model, but the end user does not.
| | 02:21 | Ideally, the conceptual model and the
mental model should be very similar.
| | 02:26 | When they match, people are able to
quickly and easily learn how to use an
| | 02:29 | interface or device.
| | 02:31 | However, sometimes there's a difference
between the designer's conceptual model,
| | 02:36 | and the person's mental model.
| | 02:38 | This may be due to different experiences,
or simply because the designer knows
| | 02:42 | more about the device or
interface and the system model.
| | 02:45 | In other words, the act of designing
a device or interface helps us better
| | 02:51 | understand it, which makes it
seem easier and more obvious to us.
| | 02:56 | This is experience and insight that
other people don't have, so the device or
| | 03:00 | interface seems more
difficult, and less meaningful to them.
| | 03:04 | Since we make design assumptions and
choices based on our own experiences and
| | 03:09 | understanding, it's important to
test designs with other people.
| | 03:14 | Many of us have had the experience
of observing people in usability or
| | 03:17 | prototype testing, and wanting to yell out,
click on the big button right in front of you!
| | 03:22 | It's the only big button on the screen, and
it even has a perfectly clear label on it!
| | 03:27 | Yet we watch people repeatedly look over
it, and look right at it without acting.
| | 03:33 | It's not because the button is hard
to see, but because the person has a
| | 03:36 | different mental model of how the process works,
| | 03:40 | what they should be doing at that moment,
and even what the action should be called.
| | 03:44 | There are four main components to modeling
interactions with devices and interfaces.
| | 03:50 | Our designs need to make certain that
information is presented in a way that's
| | 03:53 | familiar and meaningful.
| | 03:55 | We need to understand where people
expect to find information on the device or
| | 04:00 | interface, and we need to understand
the sequence of events that will logically
| | 04:04 | take them through the experience,
without causing confusion, and which will
| | 04:07 | produce the desired results.
| | 04:09 | For example, if we're designing an
e-commerce site, people have mental models
| | 04:13 | about the cart, and the checkout process.
| | 04:16 | This example closely matches our mental
model; we expect to see a shopping cart
| | 04:21 | or bag icon at the top of the page,
and we expect to be able to start the
| | 04:26 | checkout process from there.
| | 04:29 | However, if we were to use an obscure
icon, or unfamiliar label, omit important
| | 04:34 | information, or place the feature in
an unexpected location, then we would
| | 04:38 | violate the mental model,
slow people down, and cause confusion.
| | 04:43 | Once we start the checkout process, we
expect to tell the store where to send
| | 04:47 | the items, and we expect to enter our
payment information at the end, just
| | 04:51 | before sending our order.
| | 04:54 | If we ask for information in the
wrong order, or ask for additional
| | 04:58 | information that is perceived as
unnecessary, such as gender, and birthday,
| | 05:02 | then we violate the mental model.
| | 05:05 | As designers, we strive to create experiences
that closely match people's mental models.
| | 05:10 | When a device or interface behaves
the way people expect, it's easier
| | 05:14 | to understand and use.
| Understanding cognitive load| 00:00 | Thinking is hard, and people
seek ease, and minimal effort.
| | 00:04 | We need to craft designs that make
it easier and simpler for people to understand
| | 00:07 | information and complete their tasks.
| | 00:10 | We use the terms cognitive load, and
cognitive friction when we're discussing how
| | 00:14 | much effort people have to put into
understanding information, making decisions,
| | 00:19 | and solving problems.
| | 00:20 | Not all cognitive load is bad; sometimes
people really do need to think hard
| | 00:25 | to understand something, and make good
decisions, such as choosing a physician, or a college.
| | 00:30 | But when we unnecessarily increase the
load, and slow people down when something
| | 00:35 | could be easier, then we've
caused cognitive friction.
| | 00:39 | Cognitive load is the level of effort
associated with thinking, reasoning, and remembering.
| | 00:44 | There's only so much information we
can pay attention to, think about,
| | 00:48 | and remember at a time.
| | 00:50 | If we exceed a person's level of
ability to process information, then some
| | 00:55 | information gets overlooked,
forgotten, or misunderstood.
| | 00:58 | We have even coined the phrase
information overload to describe those situations
| | 01:03 | when there's just too much to think about.
| | 01:05 | With this in mind, there are some
things we can do to reduce cognitive load, by
| | 01:11 | carefully managing the demands on
attention, memory, and thinking.
| | 01:15 | Attention is a limited resource.
| | 01:18 | We cannot pay attention to
everything around us all the time.
| | 01:22 | We focus our attention on what we think
is the most important, and we expend our
| | 01:26 | cognitive effort on that.
| | 01:27 | We get frustrated when something
distracts us, because it steals our attention.
| | 01:32 | When there's too much information,
when it's disorganized, or unstructured,
| | 01:36 | and when we cannot identify the priorities,
or importance, then we have trouble focusing.
| | 01:41 | This slows us down, and makes it more
likely we'll overlook or omit something,
| | 01:45 | make incorrect choices, or have
incorrect expectations about the outcomes.
| | 01:50 | Our short-term, or working memory, which
is the memory for what we are actively
| | 01:54 | thinking about at any given
moment, also has a limited capacity.
| | 01:59 | The amount of information we can keep in our
minds, and think about at once, is constrained.
| | 02:04 | Different studies have
described these memory limits.
| | 02:07 | Some have identified the limits in terms of
the number of pieces or chunks of information.
| | 02:12 | In 1956, George Miller concluded
that we can keep about seven pieces of
| | 02:16 | information in short-term memory.
| | 02:18 | And in 2001, Nelson Cowan concluded
that we can keep four chunks of information
| | 02:23 | in short-term memory.
| | 02:25 | Research by Alan Baddeley in 1992
concluded that working memory is limited to
| | 02:30 | the amount of information we
can rehearse in about two seconds.
| | 02:35 | Despite the differences in research and
theory, we need to acknowledge that our
| | 02:39 | short-term or working memory is limited,
| | 02:41 | and when we expect people to remember
too much, we overload their memory, and
| | 02:46 | some information will be lost.
| | 02:49 | In order to make it easier for
people to remember information, we need to
| | 02:53 | understand the difference
between recall, and recognition.
| | 02:57 | Recall is what we traditionally
think of when we describe memory.
| | 03:01 | We store information in memory,
and retrieve it when necessary.
| | 03:05 | For example, what's your best friend's
birthday? Who was your second grade teacher?
| | 03:11 | You were not just thinking about those
things, so you have to dig into your
| | 03:15 | memory to find them.
| | 03:17 | If you were able to access that
information, then you successfully recalled it.
| | 03:22 | Recognition is when we are asked to make
a correct selection from a set of choices.
| | 03:27 | We don't need to dig deep into our
memory to find the information; we simply
| | 03:31 | need to review some
options, and decide which is best.
| | 03:35 | For example, given a list of addresses,
you could identify those where you have
| | 03:39 | lived from those where you have not.
| | 03:42 | You don't need to recall
all of your past addresses;
| | 03:45 | you only need to recognize them.
| | 03:48 | Recognition is cognitively easier than recall,
| | 03:51 | and luckily, many of our experiences
with interfaces and devices involve
| | 03:55 | selecting from choices, rather
than actively recalling information.
| | 04:00 | Still, we should design
with memory limits in mind.
| | 04:04 | Show people information and
functionality when they need it.
| | 04:07 | Don't make them remember it.
| | 04:09 | Put information and functionality
where people expect to find it.
| | 04:13 | Don't hide it or make
them go looking for it.
| | 04:16 | Present information in meaningful
ways, using good labels, and good icons.
| | 04:21 | Don't make them stop and
think, what does this mean?
| | 04:24 | There are also a few things we can do to
reduce cognitive load with smart defaults.
| | 04:29 | Identify smart defaults based on
context, usage patterns, and past behavior.
| | 04:35 | Present smart defaults when they're
likely to be correct, and make default
| | 04:39 | choices easy to change when they're not.
| | 04:43 | In this example, I've done a search for
pizza on Google, and it has detected my
| | 04:48 | location, and it automatically selects
that, and it ranks the search results based
| | 04:53 | on my location, yet it's easy to
change my location, in case it's wrong.
| | 04:58 | And it also understands that I'm likely
to want a map to a location, and so it
| | 05:03 | predicts my needs, and it
presents this information automatically.
| | 05:07 | This is very smart.
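As a rough illustration of the smart-defaults idea (this sketch is not from the course; the field names and fallback value are assumptions), a default can be chosen from past behavior first, then from context, and always left easy to change:

```typescript
// Hypothetical sketch: choosing a smart default for a country field.
// Preference order: the person's last explicit choice (past behavior),
// then a value inferred from context (browser locale), then a safe fallback.
type DefaultContext = { lastUsedCountry?: string; browserLocale: string };

function smartDefaultCountry(ctx: DefaultContext): string {
  if (ctx.lastUsedCountry) return ctx.lastUsedCountry;   // past behavior wins
  const region = ctx.browserLocale.split("-")[1];        // context, e.g. "en-GB" -> "GB"
  return region ?? "US";                                  // fallback when nothing is known
}

// The default is only pre-filled; the control stays enabled so it is easy to change.
const countryField = {
  value: smartDefaultCountry({ browserLocale: "en-GB" }),  // "GB"
  disabled: false,
};
```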
| | 05:09 | Cognitive friction slows us down.
| | 05:12 | It occurs when our thinking, reasoning,
and memory encounter things that change
| | 05:16 | over time, location, or context.
| | 05:19 | When content and functionality
change without purpose across a site or
| | 05:23 | application, it slows us down.
| | 05:26 | When content and functionality appear the same,
and actually work differently, it slows us down.
| | 05:33 | If the interaction and meaning were
consistent, we would not need to slow
| | 05:37 | down and think about it.
| | 05:39 | Common interactions would be easier to
learn, and would be more efficient, if they
| | 05:43 | used the same mental models, and
relied on the same design patterns.
| | 05:47 | For example, search filters
do basically the same thing.
| | 05:51 | They narrow down a set of results,
but they often work a little differently
| | 05:55 | on different sites.
| | 05:57 | So we experience some cognitive
friction when we have to slow down, think about
| | 06:01 | it, and learn how to use
the filters from site to site.
| | 06:04 | However, different contexts, different
problems, and even different information
| | 06:09 | may require different interactions.
| | 06:12 | For example, buying a mutual fund is
different than buying a pair of jeans.
| | 06:17 | Even though these are both purchasing
activities, we recognize that they have
| | 06:21 | some differences, but we still
expect the overall process to be similar.
| | 06:26 | Choose a product, pay for the
product, receive the product.
| | 06:30 | We don't expect buying mutual funds to
be just like buying jeans, so we're not
| | 06:34 | surprised by the differences.
| | 06:37 | However, there may be situations that
appear similar on the surface, but have
| | 06:41 | differences we do not notice, and
then we encounter cognitive friction.
| | 06:46 | We expect the interaction to happen
one way, but when it differs, we need to
| | 06:50 | slow down, think, and take the time
to understand why it was different.
| | 06:54 | For example, it may seem that paying
bills online and making a bank transfer are
| | 06:59 | basically the same thing, moving
money from one account to another.
| | 07:04 | But they're actually different
processes, using different banking systems.
| | 07:08 | Our mental model tells us it's the same,
but the system models are different, and
| | 07:13 | this can lead to different
experiences that cause cognitive friction.
| | 07:17 | This is contrary to our expectations,
because the tasks appear similar, but they're not.
| | 07:22 | There are a few things we can do
to reduce cognitive friction;
| | 07:27 | understand people's mental models,
then make certain their expectations are
| | 07:31 | aligned with the behavior and output
of the interface; the conceptual model.
| | 07:36 | Provide meaningful information
to explain context and outcomes.
| | 07:40 | Concise content, examples, and
illustrations can help answer common questions.
| | 07:46 | Help keep people focused on
the end goal of the interaction;
| | 07:49 | whether they're playing a game,
listening to music, or working on a blog post,
| | 07:54 | be clear about how to keep moving
forward to the next level, the next song, or
| | 07:58 | to the review and publish screen.
| | 08:01 | Look for opportunities to simplify the
interaction, and reduce cognitive load.
| | 08:06 | Avoid presenting too much
information at a time.
| | 08:10 | Ask for only the information necessary,
and break large tasks into smaller,
| | 08:15 | more manageable steps.
| | 08:17 | Provide access to help and instructions
for new and complex interactions.
| | 08:22 | Many applications on touch devices use
instructional overlays the first time
| | 08:27 | the application is run.
| | 08:28 | This helps people understand where
to touch, and what gestures to use;
| | 08:32 | something that would be very difficult
to communicate with written instructions.
| | 08:37 | In this example, the Pulse newsreader
shows a simple overlay the first time
| | 08:41 | it starts, and it tells you how to navigate
and select stories to read, tap, and swipe.
| | 08:48 | And as soon as you begin
interacting, the overlay goes away.
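A minimal sketch of the first-run overlay pattern, assuming a browser environment; the storage key and element id are hypothetical, and the overlay is dismissed by the first interaction, as described above:

```typescript
// Sketch: show an instructional overlay only on first run, then dismiss it as
// soon as the person begins interacting. Storage key and element id are hypothetical.
const FIRST_RUN_KEY = "gestureOverlayShown";

function maybeShowGestureOverlay(): void {
  if (localStorage.getItem(FIRST_RUN_KEY)) return;        // not the first run
  const overlay = document.getElementById("gesture-overlay");
  if (!overlay) return;
  overlay.hidden = false;                                  // show the tap/swipe hints
  localStorage.setItem(FIRST_RUN_KEY, "true");
  // The first interaction anywhere dismisses the overlay so it never blocks the task.
  document.addEventListener("pointerdown", () => { overlay.hidden = true; }, { once: true });
}
```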
| | 08:51 | Thinking and memory require effort, but
we can design devices and interfaces that
| | 08:56 | do much of the work for us.
| | 08:58 | By understanding what increases
cognitive load, and what causes cognitive
| | 09:02 | friction, we can create conceptual
models that match the mental models, and then
| | 09:07 | only ask people to do the work that's necessary.
| | Collapse this transcript |
|
|
10. Designing for Behavior and InteractionDefining behavior for interaction design| 00:00 | Interaction design isn't about the
behavior of the interface or a device;
| | 00:04 | it's about the behavior of people.
| | 00:06 | It's not just about motions, transitions,
and animation. These are helpful for
| | 00:11 | attracting attention, making
associations, and providing structure, but once
| | 00:15 | we have their attention, how do we help them
complete their tasks, or solve their problems?
| | 00:20 | What should they do?
| | 00:22 | First, let's define what
we mean by interactions.
| | 00:26 | We're referring to what people are
doing while using a device or interface.
| | 00:30 | We need to use an operational definition,
that is, a definition based on what
| | 00:35 | we're able to objectively observe and measure.
| | 00:39 | We can operationally define
interactions in terms of what people do, such as
| | 00:43 | click, tap, and gesture.
| | 00:46 | Our tools are devices like phones,
tablets, and gaming devices, and since these
| | 00:50 | are physical devices, our work may
include ergonomics, human factors,
| | 00:55 | industrial design, and technology to
expand and broaden the design process
| | 00:59 | beyond the digital interface.
| | 01:02 | However, digital interfaces are
really about interacting with information.
| | 01:06 | Creating, modifying, and
understanding are all cognitive processes,
| | 01:11 | but these are difficult to observe and
measure the way we could count clicks or taps.
| | 01:16 | It's more difficult for us to define
thinking about information, or extracting
| | 01:21 | meaning from data as an observable
behavior, because we cannot see these mental
| | 01:26 | processes, but we can
talk to people about them.
| | 01:31 | We can observe physical behaviors,
and we can ask about cognitive processes,
| | 01:36 | or sometimes infer those
processes based on our observations.
| | 01:39 | For example, when we see someone using
the Back button repeatedly, we can infer
| | 01:45 | that they're lost or uncertain,
but we should confirm that by asking.
| | 01:50 | Designing for behavior is more than
just knowing where to place a button, or
| | 01:54 | how to make it look clickable, and
since we've already discussed many of these
| | 01:57 | mental processes, let's focus on crafting
interfaces that facilitate interactions.
| | 02:03 | Let's design for behavior.
| | Collapse this transcript |
| Perceived affordances| 00:00 | How do we know where to interact?
| | 00:02 | Interfaces include both static
information, and opportunities for interaction.
| | 00:07 | Not everything is interactive,
| | 00:08 | though in the digital world, it could be.
| | 00:11 | When using this travel site, hipmunk,
when we first look at it, we see some clear
| | 00:16 | opportunities for interaction,
because they look like buttons and tabs.
| | 00:20 | But as we mouse around, we find
additional interaction opportunities, because on
| | 00:24 | rollover, information appears.
| | 00:27 | In fact, we can even filter our
search by dragging these bars on the sides.
| | 00:33 | What we may think is just plain, read-only
information is actually part of the
| | 00:38 | interactive experience.
| | 00:40 | We need to provide perceivable and noticeable
cues for people to invite them to interact.
| | 00:46 | Our understanding of how much of the
physical world works seems to be innate.
| | 00:51 | James Gibson used the term affordances
to describe action possibilities that are
| | 00:56 | latent in the environment.
| | 00:58 | In other words, the physical characteristics
of objects make possible what we're
| | 01:03 | able to do, and we recognize when we're
able to interact with that opportunity.
| | 01:08 | Affordances are always relative to the
individual, and their ability to interact.
| | 01:12 | For example, on a sound mixer, the
knobs look like they can be turned,
| | 01:17 | buttons look like they can be pushed, and
glides look like they can be moved along tracks.
| | 01:22 | Affordances don't require conscious
attention and thought, and we might not even
| | 01:26 | recognize when our behavior
is based on an affordance.
| | 01:29 | Affordances seem to arise out of an
intrinsic understanding of our physical
| | 01:34 | relationship with the world around us.
| | 01:36 | For example, when hiking, we may choose
an uphill path that minimizes the effort
| | 01:41 | necessary, without even realizing that
we have chosen a path that optimizes
| | 01:46 | distance, time, and energy.
| | 01:48 | Digital interfaces, however,
are not real physical objects;
| | 01:52 | they're images, symbols, or
representations of reality.
| | 01:55 | A button on a Web page only resembles
a physical button; it can't really be
| | 02:00 | pressed, it doesn't really move, and
it doesn't provide the same tactile
| | 02:04 | and auditory feedback.
| | 02:06 | A button on a Web page is just a
projection; it's light on a screen,
| | 02:11 | yet we still understand that this
digital button presents an opportunity to
| | 02:15 | interact, because the appearance
of the button relies on a perceived
| | 02:19 | affordance: it looks like a button, so we
interact with it as if it were a real, physical button.
| | 02:26 | Donald Norman introduced the
concept of the perceived affordance.
| | 02:29 | When people perceive a similarity
between a digital representation, and an
| | 02:33 | actual physical object, they
understand that they may interact with this
| | 02:37 | digital image in a similar way.
| | 02:40 | This is a perceived affordance.
| | 02:42 | A button on a Web site often has the
appearance of depth, and looks like it can be
| | 02:46 | pushed down, just as a physical button can.
| | 02:49 | The physical button has an affordance,
and the Web button, which is just an image
| | 02:53 | made of light, has a perceived affordance.
| | 02:56 | We can take advantage of affordances
by creating interface objects that look
| | 03:00 | like something with which we can interact.
| | 03:02 | It might be an obvious and analogous
relationship, like buttons and sliders, but
| | 03:07 | it can also be more subtle, like textures.
| | 03:10 | Since real world objects are
three-dimensional, we can use common depth cues
| | 03:14 | such as shadow, perspective, overlap, and blur.
| | 03:18 | Textures invite interaction, because
they look like something we could feel, like
| | 03:22 | water, or grass, or sand.
| | 03:25 | We treat the mouse cursor like an
extension of our hand or finger, and we will
| | 03:29 | often click or tap on things
that have a textured appearance.
| | 03:33 | For example, the thumb on a scrollbar
typically has a few marks to indicate a grip zone;
| | 03:38 | a texture we could feel with our
fingertip on a real sliding button.
| | 03:43 | The Microsoft Surface Lagoon is a
successful demonstration, where a tabletop
| | 03:48 | looks like water in a stream
running over smooth rocks and pebbles.
| | 03:52 | The image of the water has a
perceived affordance for touch.
| | 03:57 | This perceived affordance invites
interaction, and people freely dip their
| | 04:01 | fingers into the stream, and the water
reacts as real water would, with ripples
| | 04:05 | that flow around their fingers.
| | 04:07 | It's just an image of water, a
projection of light, and yet people behave the
| | 04:11 | same way they would if it were
a real stream: they touch it.
| | 04:16 | So affordances arise from physical
properties of real world objects, and
| | 04:21 | perceived affordances arise when
images have a similar appearance.
| | 04:25 | If it looks like something we could
interact with, if we have a mental model for
| | 04:29 | the interaction, if the context is
correct for interaction, and if we are able to
| | 04:34 | interact with it, then we do.
| | 04:37 | For example, if I'm listening to
music, my context, and want to lower the
| | 04:42 | volume, my need, I will look for a
knob, or a sliding control: an expectation
| | 04:47 | from my mental model.
| | 04:49 | But if the digital interface controls
volume with checkboxes, the conceptual
| | 04:53 | model, I may fail at my task, or at
least will be slowed down by cognitive
| | 04:57 | friction while I try to find and
understand how to control the volume.
| | 05:02 | The experience will be
confusing, and less enjoyable.
| | 05:06 | But why do people press buttons on
interfaces, even when those buttons lack
| | 05:10 | depth cues, or barely resemble buttons?
| | 05:13 | Because we learn from our experience. We
transfer what we learn in one situation
| | 05:18 | to other similar situations.
| | 05:20 | We generalize what we've learned.
| | 05:23 | If it looks enough like a button, we'll
interact with it as if it were a button,
| | 05:27 | even if the perceived affordances are weak.
| | 05:31 | If you're designing an interaction
that's new and unfamiliar, it would be
| | 05:35 | more important to leverage perceived
affordances, and maybe even explicit instructions.
| | 05:40 | When drag-and-drop interfaces first
appeared on Web sites, we had to use
| | 05:44 | explicit instructions, and provide cues
about what could be dragged, and where
| | 05:48 | it could be dropped.
| | 05:49 | But now that drag-and-drop is more
familiar, and people have learned, we have
| | 05:53 | simplified the experience
with fewer instructions and cues.
| | 05:57 | On touch devices, people assume that
drag-and-drop is available, because moving
| | 06:02 | icons with our fingers is
even more like the real world.
| | 06:06 | This is direct action,
and we'll discuss it soon.
| | 06:10 | If you're designing an interaction
that is very familiar and common, you have
| | 06:14 | more flexibility, and may be able to
design an interface with fewer perceived
| | 06:18 | affordances, because people will
already understand the interface, and apply
| | 06:23 | their past experiences to it.
| | Collapse this transcript |
| Inputs and sensors| 00:00 | In the early days of computers,
information was input with physical switches,
| | 00:04 | paper tape, and punch cards.
| | 00:06 | Eventually, we moved onto keyboards
and mice, which are still some of the most
| | 00:10 | common ways we interact with interfaces today.
| | 00:12 | Modern devices have moved beyond
keyboards, mice, and touchpads to include
| | 00:17 | touchscreens, spatial gestures, and voice.
| | 00:21 | Data entry may be explicit -- that is,
we actively enter the information, and
| | 00:25 | interact with the interface -- or
implicit -- that is, sensors in the devices
| | 00:30 | automatically detect and record
information, often with little or no
| | 00:33 | interaction on our part.
| | 00:36 | Explicit interactions include
typing, gesturing, and speaking.
| | 00:40 | We choose or identify the information we
want to add or modify, and then we take
| | 00:44 | the time to enter it.
| | 00:46 | We type e-mails, dictate text messages,
take photos to upload or share, and even
| | 00:51 | use gestures to play games.
| | 00:53 | Implicit, or automatic data entry
includes light sensors, GPS, and compasses,
| | 00:59 | microphones, accelerometers, and other
ways to detect and record information in
| | 01:03 | the world around us.
| | 01:05 | Light sensors are used to automatically
adjust the brightness of a screen, GPS
| | 01:10 | can identify our location and direction,
and accelerometers detect the movement
| | 01:14 | and rotation of the device.
| | 01:17 | We don't need to tell our tablet
computer to rotate the view of the screen;
| | 01:21 | it does so automatically when we
rotate the device, because it senses the
| | 01:25 | movement and direction.
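For a web interface, implicit data entry might look like the following hedged sketch, which uses the browser's Geolocation API to gather location automatically and only falls back to asking the person when the sensor is unavailable or denied:

```typescript
// Sketch of implicit data entry on the web: gather location automatically with
// the Geolocation API (after the browser's permission prompt) and only ask the
// person to type it when the sensor cannot provide it.
function fillLocation(onLocation: (lat: number, lon: number) => void,
                      askExplicitly: () => void): void {
  if (!("geolocation" in navigator)) { askExplicitly(); return; }
  navigator.geolocation.getCurrentPosition(
    pos => onLocation(pos.coords.latitude, pos.coords.longitude),  // implicit entry
    () => askExplicitly()                                           // explicit fallback
  );
}
```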
| | 01:28 | Many interactions exist in a gray
area between explicit and implicit.
| | 01:33 | Digital cameras use face detection
algorithms to identify people in the
| | 01:37 | frame, and focus on them.
| | 01:39 | We aim the camera, and it
knows where to focus.
| | 01:43 | The camera is the sensor.
| | 01:44 | The explicit interaction relies on
the person to choose the photographic
| | 01:48 | subject, and say when to take the
picture, and the implicit action uses
| | 01:53 | algorithms to focus on it.
| | 01:56 | Digital devices can make our
interactions easier by automatically performing
| | 02:00 | some of the steps in the task.
| | 02:02 | The device becomes an extension of us.
| | 02:05 | We need to understand what people
want to control, and what they want to
| | 02:09 | have done for them.
| | 02:10 | But we also need to identify
opportunities to do things for people by
| | 02:15 | anticipating their needs, knowing what
information is necessary to meet those
| | 02:19 | needs, and being able to automatically
collect some of that information,
| | 02:23 | rather than ask for it.
| | 02:24 | We can design smarter, more enjoyable
experiences by doing the work that people
| | 02:30 | don't need or want to do.
| | 02:32 | For example, GPS navigation devices are
very popular, because all a person needs
| | 02:38 | to do is tell it where they want to go.
| | 02:40 | The device detects their current location,
and their direction of movement, then
| | 02:44 | calculates the best route to their
desired destination, and provides
| | 02:48 | step by step directions.
| | 02:50 | If the person misses a turn, the device
automatically recalculates the route, and
| | 02:55 | provides new directions.
| | 02:57 | The person never has to request the
next step, or a correction; once the device
| | 03:02 | knows the destination,
everything else is automatic.
| | 03:05 | This reduces the cognitive load of the
navigation task, provides structure and
| | 03:10 | predictability, and tolerates errors.
| | 03:14 | We can use mental models to help guide
our decisions about how to reduce effort
| | 03:18 | and cognitive load by reducing the
amount of information we need to ask for.
| | 03:24 | What information, or inputs, must be
provided by the person, and how will they enter it?
| | 03:30 | What information could
be automatically gathered?
| | 03:33 | How and where will information and
feedback be displayed for the person?
| | 03:38 | We can then design for specific
behaviors by leveraging device sensors,
| | 03:42 | contexts, and needs.
| | 03:44 | A popular music service for mobile
phones listens to songs, and tells the person
| | 03:49 | the song artist, and title, and
even offers to help them buy it.
| | 03:53 | The interaction is simple;
| | 03:55 | the person activates the application on
their phone, and presses a Listen button.
| | 03:59 | The application uses the phone's
microphone to sample sound from the
| | 04:03 | environment, and sends it to a server
where it's analyzed and compared against a
| | 04:07 | huge catalogue of music, and when a
match is found, the information is sent
| | 04:12 | back to the phone, and displayed as
text and images; the song title, artist
| | 04:16 | name, and the CD cover.
| | 04:18 | The application uses implicit, or
automatic sensing to detect sound.
| | 04:22 | The person doesn't even need to tell
the application when to stop listening.
| | 04:26 | When enough sound has been sampled,
the application automatically stops.
| | 04:31 | The entire process closely matches
our mental model for identifying
| | 04:35 | and purchasing a song.
| | 04:36 | Listen, think, recognize, and buy.
| | 04:41 | The application performs a complex
task for us, and the entire interaction is
| | 04:46 | simple, and nearly effortless.
| | 04:49 | So it's not necessary to
ask people for everything.
| | 04:53 | We can reduce effort by using sensors
to automatically gather information that
| | 04:57 | can help us design smarter experiences
to do more of the work. Then we only
| | 05:02 | need to stop and ask people for
information when we can't gather or find it
| | 05:07 | another way.
| | Collapse this transcript |
| Designing for clicks and taps| 00:00 | Although touchscreens are popular, and
have become quite familiar, the mouse and
| | 00:04 | touchpad are still some of the most
common ways to interact with an interface,
| | 00:07 | especially for desktop and laptop computers.
| | 00:10 | Even though they are different input
methods, they do share some similarities.
| | 00:14 | There are best practices that apply
to both touch and mouse, with just a few
| | 00:18 | adjustments for each.
| | 00:20 | Using a mouse to control a cursor,
or tapping on a screen, is called a
| | 00:23 | ballistic movement.
| | 00:25 | A ballistic movement starts with a
general direction, accelerates toward the
| | 00:29 | target, slows down and refines the aim
as it approaches the target, and finally,
| | 00:34 | acquires, or hits the target if it's accurate.
| | 00:38 | If we approach a target too quickly,
we're likely to miss and overshoot.
| | 00:42 | If we don't accelerate enough, it may
take too long to reach the target, or we
| | 00:46 | may fall short of it.
| | 00:48 | This type of movement has been
studied and described mathematically.
| | 00:51 | Both Fitts' Law and Meyer's Law can be
used to calculate the time to reach a
| | 00:55 | target, as a function of the distance to
the target, and the size of the target.
| | 01:01 | Simply put, it's easier to successfully
hit targets that are close and large
| | 01:06 | than to hit targets that are far and small.
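For reference, one widely used form of Fitts' Law, the Shannon formulation (the course does not quote a specific equation, so this is offered only as a common version), is:

```latex
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

Here MT is the movement time, D is the distance to the center of the target, W is the target's width along the axis of motion, and a and b are constants fitted from observed data. Increasing D or shrinking W increases MT, which is the "close and large is easier than far and small" rule in mathematical form.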
| | 01:09 | If you're concerned about efficiency --
that is, reducing time -- and accuracy -- that
| | 01:14 | is, reducing errors -- then the
distance to, and the size of a target are
| | 01:19 | important considerations.
| | 01:21 | For both mouse and touch, the farther
we have to move to reach the target, the
| | 01:26 | longer it will take.
| | 01:28 | The easiest target is the one
already under the cursor or finger;
| | 01:32 | we don't need to move at all to hit it.
| | 01:35 | When using a mouse, there are a few other
places where targets are easy to acquire;
| | 01:40 | the corners and edges of the screen.
| | 01:42 | Most screens have bounded edges, that is,
the cursor stops when it reaches them.
| | 01:47 | This means that we can rapidly move or
fling the cursor toward a target in a
| | 01:52 | corner, or along an edge, and not worry
as much about slowing down and refining
| | 01:56 | our aim, because the edges will either
funnel the cursor into the corner for us,
| | 02:01 | or the cursor will stop
automatically when it reaches the edge.
| | 02:05 | As long as our initial direction is
accurate, we can very quickly reach the target.
| | 02:10 | Edges and corners are not as
beneficial for touchscreens, though, because our
| | 02:14 | fingers can move beyond and off the screen,
| | 02:17 | we can still miss those targets.
| | 02:20 | So how do we take advantage
of the best target location?
| | 02:24 | Although we can't move buttons around
the screen to match the cursor location,
| | 02:28 | we can use contextual menus, tool
tips, pop-ups, and heads-up displays to
| | 02:34 | present information and
functionality right where the cursor is.
| | 02:38 | We can open menus and dialogs near the
cursor, so that the person doesn't have
| | 02:43 | to move the cursor, or their finger,
very far in order to make a choice, and we
| | 02:47 | can make these layers draggable, so people
can move them out of the way when necessary.
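As a small, hypothetical TypeScript sketch of bringing functionality to the point of interaction, a custom menu can be positioned at the pointer so the next click is a short movement (the element id and the use of the contextmenu event are assumptions, and the menu is assumed to be absolutely positioned in CSS):

```typescript
// Sketch: open a custom menu at the cursor so the next click is a short,
// easy ballistic movement. Assumes the menu element uses absolute/fixed positioning.
function openMenuAtCursor(e: MouseEvent): void {
  const menu = document.getElementById("context-menu");
  if (!menu) return;
  menu.style.left = `${e.clientX}px`;   // position the layer at the pointer
  menu.style.top = `${e.clientY}px`;
  menu.hidden = false;
}

document.addEventListener("contextmenu", e => {
  e.preventDefault();                   // replace the browser menu in this sketch
  openMenuAtCursor(e);
});
```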
| | 02:53 | Size is also a factor;
| | 02:55 | larger targets are easier to acquire,
but since we have limited space on the
| | 02:59 | screen, we need to balance the size
of the interactive elements with other
| | 03:03 | content and images on the screen.
| | 03:05 | Operating systems and electronic
devices all have human interface guidelines
| | 03:11 | to set the standards for things like the
size and placement of buttons, tabs, and icons.
| | 03:16 | These standards ensure consistency and
usability, and if you're designing for
| | 03:20 | native applications, such as iOS,
Windows, or Android, make certain you've read
| | 03:26 | the guidelines for that system.
| | 03:29 | Calculating target size is
complicated by two things;
| | 03:32 | pixel density, and physical dimensions.
| | 03:35 | As screens become smaller, and higher
resolution, the number of pixels increases.
| | 03:41 | So a button that is 30 pixels square
on a 1024 by 768 resolution monitor will
| | 03:48 | need to be 44 pixels square on a 1600 by 1200
resolution monitor just to look the same size.
| | 03:56 | And for touchscreens, we need to
consider the physical size of the human finger;
| | 04:01 | it's not measured in pixels,
| | 04:02 | so we need to be able to convert the
physical size of a finger into the pixel
| | 04:06 | dimensions of a touch target on the screen.
| | 04:09 | When the screen resolution and size
change, we may need to recalculate the
| | 04:13 | size of the touch target.
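A hedged sketch of that physical-to-pixel conversion, assuming the screen's pixel density is known or estimated (browsers do not reliably expose true physical DPI, so the dpi value would come from a device profile):

```typescript
// Sketch: convert a physical touch-target size into device pixels for a known
// (or estimated) pixel density.
function mmToPixels(sizeMm: number, dpi: number): number {
  const sizeInches = sizeMm / 25.4;     // 25.4 mm per inch
  return Math.round(sizeInches * dpi);  // physical size -> device pixels
}

mmToPixels(9, 326);  // a ~9 mm fingertip target on a 326 dpi phone: ~116 pixels
mmToPixels(9, 96);   // the same physical size on a 96 dpi desktop screen: ~34 pixels
```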
| | 04:16 | Just hitting a target isn't always enough.
| | 04:19 | When using a mouse, some navigational
systems on Web sites, and some toolbars on
| | 04:23 | applications, use nested flyout menus.
| | 04:27 | As long as the cursor remains over
the flyout menu, it remains visible.
| | 04:32 | But if you need to move the cursor
through a narrow channel into a submenu, it
| | 04:36 | can be difficult to keep
the correct menu, or any menu at all, open.
| | 04:40 | Keeping a menu or a layer visible based
on the cursor location can be challenging.
| | 04:46 | The more time and effort it takes
just to move the cursor slowly for
| | 04:49 | accuracy, the less time and cognitive
effort people have available to think
| | 04:54 | about the information, make correct
choices, make correct decisions, and
| | 04:58 | enjoy the interaction.
| | 05:00 | Finally, we need to consider timing
for the appearance of things that appear
| | 05:04 | when the cursor moves over a target.
| | 05:07 | Just because the cursor is near or over
a target does not mean that the person
| | 05:12 | wants or needs to see a new layer appear.
| | 05:14 | They may simply be moving their
cursor to another part of the screen, and
| | 05:18 | unwanted layers, motions, and
transitions that occur while they are moving the
| | 05:23 | cursor are an unwanted distraction.
| | 05:25 | Let's take a look at how a simple
delay can prevent a navigation menu from
| | 05:30 | opening when it's not necessary.
| | 05:34 | As we move the cursor quickly across
this navigation, it doesn't react to us.
| | 05:39 | However, if I want to see what's in the
navigation, I need only pause for a moment.
| | 05:44 | This is convenient because if I'm moving
the cursor from the body of the page to
| | 05:49 | a link at the top of the page, the
navigation system doesn't react, but when I
| | 05:54 | need it, it appears quickly.
| | 05:57 | If the delay before displaying something
is 0 milliseconds, then any time the
cursor enters the hot zone, the
interface will react instantly.
| | 06:07 | The simplest way to reduce the number of
unwanted displays is to use a brief delay.
| | 06:11 | For example, a delay of just 150 to 250
milliseconds before the interface reacts
| | 06:18 | will help people moving the cursor over
the hot zone to get to another part of
| | 06:23 | the screen without triggering the interaction.
| | 06:25 | But if someone wants to hit that target,
then they will pause the cursor in the hot zone.
| | 06:30 | The delay will elapse quickly,
and the reaction will occur.
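A minimal sketch of the brief-delay technique, assuming a hot-zone element and a menu element (names are illustrative); the menu only opens if the cursor rests in the hot zone for roughly 200 milliseconds:

```typescript
// Sketch: only open the menu if the cursor rests in the hot zone for ~200 ms,
// so quick pass-throughs do not trigger it.
function attachDelayedMenu(hotZone: HTMLElement, menu: HTMLElement, delayMs = 200): void {
  let timer: number | undefined;
  hotZone.addEventListener("mouseenter", () => {
    timer = window.setTimeout(() => { menu.hidden = false; }, delayMs);
  });
  hotZone.addEventListener("mouseleave", () => {
    window.clearTimeout(timer);         // the cursor left before the delay elapsed
    menu.hidden = true;
  });
}
```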
| | 06:34 | We could also use a more elaborate
system called hover intent, where we measure
| | 06:39 | the velocity, acceleration,
and deceleration of the cursor.
| | 06:43 | If the velocity is constant, or the
cursor speed is accelerating, then we can
| | 06:49 | infer that the person is moving
over the hot zone to go somewhere else.
| | 06:54 | However, if the cursor is decelerating,
and velocity approaches 0 over the hot
| | 06:59 | zone, then we can infer that the
person wants to hit the target,
| | 07:03 | so we activate the display.
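And a rough sketch of the hover-intent idea: sample the cursor position as it moves and only open the menu when its speed over the hot zone falls toward zero. The speed threshold is an illustrative value, not a recommendation from the course:

```typescript
// Sketch: estimate cursor speed over the hot zone and open the menu only when
// the movement is slowing toward a stop. The 0.1 px/ms threshold is illustrative.
function attachHoverIntent(hotZone: HTMLElement, open: () => void, maxSpeed = 0.1): void {
  let last: { x: number; y: number; t: number } | null = null;
  hotZone.addEventListener("mousemove", (e: MouseEvent) => {
    const now = performance.now();
    if (last) {
      const distance = Math.hypot(e.clientX - last.x, e.clientY - last.y);
      const speed = distance / (now - last.t);   // pixels per millisecond
      if (speed < maxSpeed) open();              // decelerating: infer intent to hover
    }
    last = { x: e.clientX, y: e.clientY, t: now };
  });
}
```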
| | 07:05 | Finally, remember that there
is no hover on touchscreens,
| | 07:09 | so touch interfaces need to be designed to
rely more on explicit tapping and gestures.
| | 07:16 | When designing for taps and clicks, we
have the opportunity to reduce effort by
| | 07:20 | making certain that things are
easy to reach, by bringing content and
| | 07:24 | functionality closer to the point of
interaction, and by ensuring that the
| | 07:28 | interface only reacts when
people expect it, and need it.
| | Collapse this transcript |
| Providing opportunity for direct action| 00:00 | As computer technology has evolved,
we've moved from a very remote and symbolic
| | 00:05 | form of interaction, to more direct action.
| | 00:07 | In the early days, flipping switches to
set binary values, and punching holes in
| | 00:12 | cards, required making decisions, and
coding data, and then waiting hours, or even
| | 00:16 | days, to get the results.
| | 00:18 | We were very far removed from the
information we were trying to understand.
| | 00:23 | The command-line interface, monitors, and
terminals made things faster, and easier
| | 00:27 | to correct, but we were still
working with abstract representations of
| | 00:31 | information and processes.
| | 00:33 | The arrival of the graphical user
interface, also called GUI, and the mouse
| | 00:38 | suddenly made it possible to
interact more directly with meaningful
| | 00:41 | representations of information.
| | 00:43 | We could drag pictures into photo
albums, and documents to the printer.
| | 00:48 | This was more like the real world, but
we were still using a mouse out here to
| | 00:53 | click a button over there to make
something else happen somewhere else.
| | 00:57 | Touch screens are getting us closer to
direct action. We can now directly touch
| | 01:02 | the article we want to read, the photo
we want to see, the song we want to hear,
| | 01:08 | and move solitaire cards with a gesture
very similar to playing with real cards.
| | 01:13 | Touchscreens and gestures are more
direct than a mouse, because a mouse is a
| | 01:17 | translated movement and
action, but touch is direct.
| | 01:22 | We can tap what we want, instead of
moving a mouse to move a cursor, then
| | 01:26 | clicking on what we want.
| | 01:28 | Drag-and-drop for both touch and mouse
is more direct; we don't need to select
| | 01:33 | an object, then select an action for it.
| | 01:36 | For example, when I rearrange photos
in my Flickr account, I can simply click,
| | 01:43 | and drag that photo to its new
location, and as I move it, you can see that the
| | 01:48 | grid opens up a place for me to drop that photo.
| | 01:52 | Every time I move a photo, the grid
rearranges itself, and it clearly communicates
| | 01:57 | to me, this is an open and valid
position; an excellent form of feedback.
| | 02:04 | Layers of content are more direct,
because they bring information and
| | 02:08 | functionality to the person at the
moment they need it, without disrupting
| | 02:12 | context or flow, and they can give people a
greater sense of control over the interaction.
| | 02:18 | Sometimes the layer is displayed
on demand, and sometimes the layer is
| | 02:21 | automatically displayed, based on context.
| | 02:25 | There are several advantages of direct
action; the interface is often visually
| | 02:29 | simpler, because the interactive
objects are represented directly, and other
| | 02:33 | content or functionality is
presented only when needed.
| | 02:38 | The interaction is easier to learn,
because there are no layers of abstraction
| | 02:42 | between the person and the information.
| | 02:45 | The interaction is easier to remember,
because it often corresponds more
| | 02:49 | closely to our real world experiences.
| | 02:52 | People make fewer errors, because the
interaction is a closer fit to our mental
| | 02:57 | models, and expectations.
| | 02:59 | It encourages exploration; when people
can interact with almost everything, and
| | 03:05 | have relevant content and functionality
brought to them at that moment, then
| | 03:10 | they become discoverers, and they ask, oh,
what can I do here? What will happen
| | 03:15 | if I do this? Or that?
| | 03:17 | The experience is more pleasing and
satisfying, because it's easy, makes sense,
| | 03:22 | and is more engaging.
| | 03:24 | Although direct action isn't always
possible, we should strive to not separate
| | 03:29 | the action from the object.
| | 03:31 | If you need to select an object in one
place, and choose an action in another,
| | 03:36 | or if the interaction does not map
closely to our mental models, and feels like
| | 03:40 | there are extra steps, then there are
opportunities to simplify, and to be more direct.
| | 03:47 | Take the time to critically review the
interaction design, and the flow of the experience,
| | 03:52 | because we sometimes make design choices
based on the technology, rather than the
| | 03:56 | person, and when we're designing for
the efficiency of the machine, we fail to
| | 04:01 | consider the expectations
and experiences of the person.
| | 04:05 | Direct action is becoming more
important, and more available, because
| | 04:08 | touchscreens and spatial gestures are
changing the way people interact with
| | 04:12 | information on their devices.
| | Collapse this transcript |
|
|
11. Best Practices for Providing FeedbackDefining feedback for interaction design| 00:00 | Feedback is the information we get
about the results of our actions.
| | 00:04 | It helps us understand if we
are progressing toward our goal,
| | 00:08 | if there have been any errors, where
we are, and how to modify our future
| | 00:12 | behavior to keep ourselves on track.
| | 00:15 | Feedback may be obvious and attention
grabbing, such as warning messages that
| | 00:19 | cover the entire screen, or it may be subtle
and nuanced, such as background color, or
| | 00:24 | icons to communicate status.
| | 00:27 | Feedback is actually a way to provide
either reinforcement, keep doing that, or
| | 00:32 | discouragement, stop doing that.
It's essential to helping people understand
| | 00:37 | the results of their actions, as well
as helping them learn new interactions.
| | 00:42 | Feedback provides information about
place, to help us understand where we are;
| | 00:48 | time, to help us understand what is
currently happening, and what will or may happen;
| | 00:53 | meaning, to help us understand the
results or outcomes of our actions, and it
| | 00:58 | helps us to establish context.
| | 01:00 | We'll discuss each of these more in
depth soon, but here's a quick introduction.
| | 01:06 | Just as landmarks and signs give us a
sense of place in the physical world,
| | 01:11 | feedback and content cues give us a
sense of place in the digital world.
| | 01:15 | There is a scent of information that
helps us establish a trail back to where
| | 01:20 | we've been, and points us
forward to where we may go.
| | 01:24 | In the midst of interacting, we seek
feedback about our progress, and what
| | 01:28 | our devices are doing.
| | 01:30 | It helps answer common questions, like,
should I wait a little longer? Am I almost
| | 01:34 | finished? What's going to happen next?
| | 01:38 | When we've completed an action, we want
to know if we have been successful; is
| | 01:43 | the outcome what we expected?
| | 01:45 | We look for connections
between what we did, and the results.
| | 01:49 | Feedback also helps decide if the
outcomes are valuable, or worthwhile.
| | 01:53 | Desirable and expected results
increase the likelihood that we will perform
| | 01:57 | those actions again in the future.
| | 01:59 | We learn new behaviors when the results of
our actions are rewarding, and benefit us.
| | 02:04 | On the other hand, if the outcomes are
undesirable, unexpected, or incorrect, then
| | 02:10 | these negative results decrease the
likelihood that we will perform those
| | 02:13 | actions again in the future.
| | 02:15 | We learn to stop, or avoid behaviors when the
results of our actions cost us time and effort.
| | 02:22 | Interaction design is a bit like physics:
| | 02:24 | for every action, there should be a reaction.
| | 02:28 | For most interactions, there should be
some form of feedback: a noticeable and
| | 02:32 | understandable reaction
from the interface or device.
| | 02:36 | We need to design ways to
meaningfully acknowledge interactions.
| | 02:41 | Failing to acknowledge an interaction
can lead to unnecessary repetition of
| | 02:45 | actions, and possibly even errors or mistakes.
| | 02:49 | For feedback to be effective, it must
be prompt, meaningful, and perceivable.
| | 02:54 | We need to present feedback soon after
the interaction has occurred, otherwise
| | 02:58 | people will think their
actions were not detected.
| | 03:02 | Feedback must be meaningfully related
to a person's actions, otherwise they may
| | 03:06 | not understand why the
interaction produced the results.
| | 03:10 | And we need to draw or direct attention
to the feedback, because people may not
| | 03:14 | be focused on the interface, or the
device, at the time feedback is presented.
| | 03:19 | Finally, feedback should not
interrupt the person's experience unless it is
| | 03:23 | necessary to prevent errors.
| | 03:26 | Feedback should complement the
experience, not complicate it.
| | Collapse this transcript |
| Deciding on a feedback format| 00:01 | Feedback that's not noticed is not useful.
| | 00:04 | We need to deliver feedback in a
manner that's both noticeable, and in
| | 00:08 | the appropriate format.
| | 00:09 | The vast majority of interfaces
provide feedback visually, but we can also use
| | 00:13 | sound, and tactile, or haptic, feedback.
| | 00:17 | Location is important for visual feedback.
| | 00:20 | We can get attention more quickly
by placing feedback where the person
| | 00:23 | is already looking.
| | 00:25 | Remember, we are sensitive to change,
and to certain colors, so presenting
| | 00:29 | feedback in a layer with strong
visual cues can easily attract attention.
| | 00:34 | Color, icons, motion, and text can all
be used to provide feedback visually, and
| | 00:39 | this feedback may be direct, such as a
message, ambient, such as color, or both.
| | 00:45 | For example, a confirmation
message that profile information has been
| | 00:50 | updated uses all three.
| | 00:52 | When Save is clicked, a new
message appears near the top of the page.
| | 00:57 | It has a different fill color to draw
attention, and green communicates success.
| | 01:02 | It contains an icon that illustrates
success, a brief text message explains what
| | 01:06 | was completed, and the entire message
area may be timed to fade and collapse
| | 01:11 | after a specific interval,
to reduce interference.
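An illustrative sketch of that kind of timed confirmation message; the element id, styling hook, and four-second interval are assumptions:

```typescript
// Sketch: a success message that appears near the top of the page, then hides
// itself after a few seconds so it does not interfere.
function showSaveConfirmation(text: string, visibleMs = 4000): void {
  const msg = document.getElementById("save-confirmation");
  if (!msg) return;
  msg.textContent = text;
  msg.classList.add("success");     // green fill and icon come from the stylesheet
  msg.hidden = false;
  window.setTimeout(() => { msg.hidden = true; }, visibleMs);
}

showSaveConfirmation("Your profile has been updated.");
```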
| | 01:15 | As we design for an increasing number
of touchscreens, our fingers become an
| | 01:19 | important way of receiving feedback.
| | 01:22 | Vibration is the most common
form of tactile, or haptic, feedback.
| | 01:27 | Our mobile phones have been vibrating
for years to alert us to phone calls, text
| | 01:31 | messages, and status updates, but they
also use vibration to provide feedback
| | 01:35 | when typing, and pressing buttons.
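Where the web Vibration API is available, a tap can be acknowledged with a brief haptic pulse, as in this small sketch (the 20 millisecond duration is an illustrative value):

```typescript
// Sketch: acknowledge a tap with a short vibration where the Vibration API exists.
function hapticAcknowledge(button: HTMLElement): void {
  button.addEventListener("click", () => {
    if ("vibrate" in navigator) {
      navigator.vibrate(20);          // a brief pulse confirms the tap was received
    }
  });
}
```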
| | 01:38 | Another type of tactile feedback is
force feedback, though we probably only
| | 01:43 | encounter this with games,
and training simulators.
| | 01:46 | Force feedback is when a device or
controller provides resistance to our
| | 01:50 | actions, and that
resistance can vary in intensity.
| | 01:54 | For example, if a lever would be
difficult to move in real life, then the
| | 01:59 | joystick or lever controller in the game or
simulator would also be difficult to move.
| | 02:05 | This type of feedback is usually tied
to real world physics, so it provides
| | 02:09 | realistic feedback about how
much effort we should expend.
| | 02:13 | Sound can also be used to provide
feedback. Beeps and bells often tell us that
| | 02:19 | we need to pay attention to something,
or that a process has started, ended, or
| | 02:23 | encountered an error.
| | 02:25 | Simple auditory feedback is used to
acknowledge button presses and taps on
| | 02:29 | touchscreens and kiosks.
| | 02:31 | These sounds are often brief and subtle.
They barely attract our attention, yet
| | 02:36 | they provide important information.
| | 02:39 | We can even mark progress with sound.
The ticking of a mechanical timer tells us
| | 02:44 | that the cake is still baking.
| | 02:46 | Then it buzzes when the cake is finished.
| | 02:49 | Elevators beep as they pass each floor,
then they make a different sound when
| | 02:54 | they arrive, and the doors open.
| | 02:56 | Auditory feedback can be important
when designing for accessibility;
| | 03:01 | when sound must supplement what would
typically be only visual feedback.
| | 03:06 | The sound itself may also convey meaning.
| | 03:08 | Happy sounds indicate success, but
unpleasant sounds indicate problems, or failure.
| | 03:14 | Part of the positive or negative
meaning of the sound comes from the quality or
| | 03:18 | timbre of the sound itself.
| | 03:21 | Irritating noises are naturally
unpleasant and negative, like the wrong answer
| | 03:26 | sound on game shows, but part of the
meaning we attribute to sound comes from
| | 03:31 | experience and learning.
| | 03:33 | Desktop operating systems expose people to
subtle sound cues and feedback every day.
| | 03:39 | We learn what those sounds mean, and
often we learn the meaning so well that we
| | 03:43 | do not even need to look at
the accompanying text message.
| | 03:47 | And of course, games rely extensively on
sound to add excitement and interest, as
| | 03:51 | well as provide feedback about the game play.
| | 03:55 | The topic of sound design is becoming
increasingly important as our devices do
| | 03:59 | more and more for us.
| | 04:02 | One of the most challenging aspects of
providing feedback is that touchscreen
| | 04:06 | gestures and spatial gestures leave no trace.
| | 04:10 | Once we have touched or gestured,
it's up to us to remember what we did.
| | 04:15 | Accidental touches and gestures may
cause changes in the device or interface, and
| | 04:19 | we may not know how or why it happened.
| | 04:22 | The change is feedback that an
interaction was processed and completed, but
| | 04:27 | if the touch or gesture was unintentional,
people may not know how to undo or repeat it.
| | 04:33 | How do we tell people what they did?
| | 04:36 | When typing, I see the
letters appear for each keypress.
| | 04:40 | When dragging, I see the new position
of the object, but with many multitouch
| | 04:44 | or spatial gestures, there is no visible
or remaining record of my interactions.
| | 04:50 | These are interaction
design challenges we still face.
| | 04:54 | Not only how to communicate which
gestures will cause what actions before the
| | 04:59 | interaction occurs, but how to represent
which gestures were performed after the
| | 05:04 | interaction occurred.
| | 05:07 | One possible approach is to use overlays and
animation to visually represent the gestures.
| | 05:13 | Applications for touchscreen devices
often use an overlay when teaching people
| | 05:17 | how to use an application, and
communicating which gestures are available.
| | 05:23 | Feedback comes in many formats, and
even subtle or ambient information is
| | 05:28 | valuable to meaningful interactions,
as long as it's prompt and relevant.
| | Collapse this transcript |
| Place, time, and meaning| 00:01 | Some feedback is vitally important, like
the warning that you're about to delete
| | 00:04 | your entire music collection, or
transfer all of your money out of your bank
| | 00:08 | account, but most
feedback is casually informative.
| | 00:12 | It's simply keeping us aware of our
place, and time, our progress, and helping us
| | 00:17 | understand our context.
| | 00:19 | We previously discussed effective
navigation, and many of the techniques we
| | 00:23 | described are based on delivering
relevant and timely feedback to help
| | 00:27 | establish a sense of place.
| | 00:29 | One important, but often overlooked
aspect of feedback for navigation is the
| | 00:34 | active, hover, visited,
down, and disabled states.
| | 00:38 | These quickly and efficiently communicate
both the opportunity for interaction,
| | 00:42 | as well as confirmation of the action.
| | 00:46 | The active state is the appearance of
a link, button, tab, or icon before we
| | 00:50 | interact with it; its appearance must
communicate the opportunity to interact.
| | 00:55 | The hover state changes
appearance when we approach the element.
| | 00:59 | The down state acknowledges
that our click or tap was received.
| | 01:04 | The visited state serves as a visual
reminder that we have interacted with this
| | 01:09 | element already, and the disabled state
communicates that the interaction is not
| | 01:13 | currently available, so that we avoid
an interaction that will have no result.
| | 01:19 | When people are engaged in interacting
with a process or application, they
| | 01:23 | want to know its status.
| | 01:25 | Is it ready for me to interact?
| | 01:27 | Graphics, text, and motion can easily
communicate readiness, such as a blinking
| | 01:32 | cursor, or a microphone
icon labeled with, speak now.
| | 01:36 | Is the process still going?
| | 01:39 | Progress bars, spinning graphics,
and text labels communicate progress.
| | 01:44 | Has it frozen, or crashed?
| | 01:45 | Should I be patient and wait, or worry and panic?
| | 01:49 | The absence of motion, or prolonged
waiting with no visible progress,
| | 01:53 | typically causes concern.
| | 01:55 | If a process will take a long time, that
is, longer than people might expect, then
| | 02:01 | warn them in advance, with an
estimate of the amount of time.
| | 02:05 | Has it finished? Can I continue?
| | 02:08 | Confirmation messages, updated information,
and returning the interface to a state
| | 02:13 | of readiness for interaction, all
signal that the process is complete, and the
| | 02:17 | interface is ready to proceed.
| | 02:20 | For some processes, we're able to
calculate, or at least estimate, the time it
| | 02:24 | will take, so we're able to
communicate the progress, and time remaining,
| | 02:28 | relatively accurately.
| | 02:29 | For example, if we know the amount of
data, and the transfer rate, we can provide
| | 02:34 | an informative estimate of the time to
complete the transfer, or the percent of
| | 02:39 | the transfer that's complete.
| | 02:41 | For these processes where we know time
and amount, we should display a visual
| | 02:45 | indicator of progress, and
information about the process, such as percent
| | 02:50 | complete or remaining.
| | 02:52 | However, some processes are indefinite;
we're not able to accurately estimate
| | 02:57 | how long it will take, or
how much has been completed.
| | 03:00 | When we are unable to estimate duration,
or communicate specific progress, we need
| | 03:05 | to clearly communicate that the
process is still active, and underway.
| | 03:10 | For indefinite processes, we are
often only able to show a repeating
| | 03:14 | animation, and a message, such as a
spinning indicator, with please wait while
| | 03:19 | we complete your request;
| | 03:20 | this may take a few moments.
| | 03:23 | If it's possible to estimate the
average range of time to complete a process,
| | 03:27 | then include this information, so
that people will know what to expect.
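A short sketch of choosing between determinate and indeterminate progress feedback, assuming we sometimes know the total amount of work; the element and function names are illustrative:

```typescript
// Sketch: show percent complete when the total is known, otherwise switch to an
// indeterminate indicator with a waiting message.
function reportProgress(bar: HTMLProgressElement, label: HTMLElement,
                        done: number, total?: number): void {
  if (total && total > 0) {
    bar.max = total;
    bar.value = done;
    label.textContent = `${Math.round((done / total) * 100)}% complete`;
  } else {
    bar.removeAttribute("value");     // a <progress> with no value renders as indeterminate
    label.textContent = "Please wait while we complete your request. This may take a few moments.";
  }
}
```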
| | 03:32 | Provide the ability to cancel a process
without cost or loss whenever possible.
| | 03:37 | Having an option to stop and recover
can be reassuring if people know they have
| | 03:42 | a way to try again safely.
| | 03:45 | For long or complex tasks, indicate
the successful completion of each step.
| | 03:51 | In most cases, we simply need to
acknowledge that the step has been completed, and
| | 03:55 | not interfere with proceeding efficiently.
| | 03:57 | We use inline messages, placed where
people will see them; color, motion, and
| | 04:02 | fading in and out can all direct and
focus attention on the subtle feedback.
| | 04:07 | However, sometimes we need the
person to acknowledge the completion of an
| | 04:11 | individual step, so we give them an
informative message, and wait for them to
| | 04:16 | dismiss it, or to act and
move on to the next step.
| | 04:20 | This may be important for unfamiliar
or difficult tasks to help people
| | 04:24 | understand what's been done, and why.
| | 04:27 | This type of interim status message
can also be used to reduce errors, such as
| | 04:32 | the confirmation requests for actions
that may be difficult to undo or correct.
| | 04:39 | For example, are you sure you want to
remove all eBooks from your Kindle? As with
| | 04:43 | all feedback, it's really only useful
when people notice it, and understand it.
| | 04:49 | So don't hide confirmation messages
behind clicks or taps, and don't remove them
| | 04:53 | automatically without sufficient
time for people to notice, and read them.
| | 04:58 | When a process is complete, people
expect a confirmation message, or action.
| | 05:03 | It may be either a success, or a failure.
Simply allowing a progress indicator to
| | 05:08 | disappear may not be sufficient.
| | 05:12 | Failures, or errors, often mean that the
person needs to make a correction, and try again.
| | 05:17 | We'll talk more about error handling,
and error messages in a moment, because
| | 05:21 | they are very important to eventual success.
| | Collapse this transcript |
| Error handling and messages| 00:00 | We may not always understand why we
get a desirable outcome; we're just happy
| | 00:05 | with the positive results, but when
something goes wrong, we are eager to know
| | 00:09 | what happened, why it
happened, and what to do about it.
| | 00:13 | Error messages are a particularly
important type of feedback, because they help
| | 00:17 | us understand what went wrong, and how
to improve our actions in the future.
| | 00:22 | Error messages can also be
difficult to design well.
| | 00:25 | When errors occur, people become
frustrated, and this interferes with their
| | 00:30 | ability to focus on,
understand, and resolve the problem.
| | 00:34 | The best type of error
handling is error prevention.
| | 00:38 | If we can design to prevent
errors, then we have a better design.
| | 00:42 | However, no device or interface is
perfect, so we still need to design effective
| | 00:48 | feedback for those times when something fails.
| | 00:51 | Let's start with the error message itself.
| | 00:54 | Effective error messages
have four characteristics.
| | 00:57 | Use natural language, and never blame
the person. Clearly describe what went
| | 01:03 | wrong. Briefly explain why it went wrong,
and recommend how to fix or resolve the error.
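Purely as an illustration of those four characteristics (the wording and field names are assumptions, not from the course), an error message might be assembled like this:

```typescript
// Illustration only: structuring a message so it says what went wrong, why,
// and how to fix it, in natural language that does not blame the person.
const paymentError = {
  what: "We couldn't process your payment.",
  why: "The card number appears to be one digit short.",
  fix: "Please check the number on your card and re-enter it.",
};

const errorText = `${paymentError.what} ${paymentError.why} ${paymentError.fix}`;
```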
| | 01:12 | Place error messages as
near the error as possible.
| | 01:16 | Many sites and applications use two
types of error message: a generic message
| | 01:21 | that errors have occurred, often placed
near the top of the page or browser, and a
| | 01:25 | specific message near the error itself.
| | 01:29 | Use color, such as yellow, orange, or red,
to draw attention, and indicate urgency.
| | 01:36 | Subtle transitions, such as a fade, or
changing the color from yellow to green, can
| | 01:41 | be used to remove error messages
after a correction has been made.
| | 01:46 | Provide error messages as soon as possible.
| | 01:50 | It's a better experience to correct an
error immediately after it's occurred, and
| | 01:55 | it's also easier to understand
the error if it's caught right away.
| | 01:59 | Form fields and inputs may be validated or
checked for errors on a per-field or per-page basis.
| | 02:07 | Validation per field means that each
form field or input is checked for errors
| | 02:13 | as soon as information is entered.
| | 02:15 | This is more immediate,
| | 02:17 | displays only one error at a
time, and allows people to correct errors
| | 02:21 | before they proceed too far.
| | 02:23 | Validation per page means that multiple
form fields are checked for errors only
| | 02:29 | when the entire set is complete.
| | 02:31 | This is less immediate, may display
more than one error at a time, and may occur
| | 02:37 | only after people have
entered a lot of information.
| | 02:40 | If there are multiple errors on a
page, display them all, and allow people to
| | 02:45 | make multiple corrections at once.
| | 02:48 | It is a bad experience to display one
error at a time when there are multiple
| | 02:52 | errors on a page, because people will
be returned to the same page repeatedly
| | 02:57 | until all the errors have been fixed.
| | 02:59 | This is frustrating, and often causes confusion.
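The contrast between the two approaches can be sketched in TypeScript. The form fields and validation rules below are invented for illustration: the same check runs per field as soon as a value is entered, or per page at submit time, where every error is collected and reported together so all corrections can be made in one pass.

```typescript
// Hypothetical form data, used to contrast per-field and per-page validation.
interface Registration {
  email: string;
  password: string;
}

// A single field check, shared by both approaches.
function validateField(name: keyof Registration, value: string): string | null {
  if (name === "email" && !value.includes("@")) {
    return "The email address is missing an @ sign.";
  }
  if (name === "password" && value.length < 8) {
    return "The password needs at least 8 characters.";
  }
  return null; // no error
}

// Per-field validation: check one input as soon as it is entered
// (for example, from a blur or change handler), so feedback is immediate.
function onFieldEntered(name: keyof Registration, value: string): void {
  const error = validateField(name, value);
  if (error) {
    console.log(`${name}: ${error}`);
  }
}

// Per-page validation: check the whole set when the form is submitted,
// and return every error together rather than only the first one found.
function onPageSubmitted(data: Registration): string[] {
  const errors: string[] = [];
  for (const name of Object.keys(data) as (keyof Registration)[]) {
    const error = validateField(name, data[name]);
    if (error) errors.push(`${name}: ${error}`);
  }
  return errors; // display all of these at once
}
```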
| | 03:03 | We also need to make a distinction
between errors and mistakes.
| | 03:07 | An error occurs when an action or
process cannot be completed correctly, such as
| | 03:12 | the inability to complete a
transaction, because the credit card number was
| | 03:16 | entered incorrectly.
| | 03:18 | A mistake is an incorrect choice or
piece of information that does not prevent the
| | 03:23 | action or process from being completed, but
the end result is not what was anticipated or desired.
| | 03:28 | For example, I may have intended to
transfer $100 from my savings to my checking
| | 03:35 | account, but I typed 1000.
| | 03:38 | If my savings account has at least $1000
in it, then no error will occur.
| | 03:42 | I'll have made a mistake, but
the transfer will happen.
| | 03:47 | Feedback should confirm the
interaction, and the outcome, and when
| | 03:51 | interactions are important, such as
transferring money from your bank
| | 03:55 | account, the action should be verified,
and we should have the opportunity to
| | 03:59 | correct or undo mistakes.
| | 04:02 | When we make an error or mistake, the
ability to undo can be very important.
| | 04:07 | We may not understand why the mistake
occurred, and we may simply want to go back
| | 04:11 | to where we were, and try again.
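As a minimal sketch of that undo idea, the TypeScript below keeps a snapshot of the previous state before each transfer, so a mistaken amount can simply be taken back. The account shape and transfer function are hypothetical, and real money movement would of course involve far more than an in-memory snapshot.

```typescript
interface Accounts {
  savings: number;
  checking: number;
}

const history: Accounts[] = []; // snapshots taken before each change
let accounts: Accounts = { savings: 1000, checking: 250 };

function transfer(amount: number): void {
  history.push({ ...accounts }); // remember the state before acting
  accounts = {
    savings: accounts.savings - amount,
    checking: accounts.checking + amount,
  };
}

function undo(): void {
  const previous = history.pop(); // go back to where we were
  if (previous) {
    accounts = previous;
  }
}

// I meant to move $100 but typed 1000: no error occurs, just a mistake...
transfer(1000);
// ...so being able to undo lets me get back to where I was and try again.
undo();
transfer(100);
```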
| | 04:14 | Remember, the best type of error
handling is preventing errors in the first
| | 04:19 | place, but errors do happen sometimes,
so it's important to help people
| | 04:23 | understand what happened, and how to fix
it, without causing worry or frustration.
| Feedback cycle| 00:00 | Feedback is a key component of the
principles of interaction design, because
| | 00:04 | it's central to how we gain
experience, and learn from our interactions.
| | 00:09 | Our interactions should lead to
perceivable, noticeable feedback, and when we
| | 00:13 | understand it, we learn from it.
| | 00:15 | We use this information to learn,
practice, and get better, and we take what we
| | 00:20 | learn and apply it in other
situations, for other interfaces and devices.
| | 00:24 | The cycle continues, and before long,
we have learned behaviors, and acquired
| | 00:29 | expectations about how to
interact with the digital world.
| | 00:33 | Feedback provides information that
helps us understand our experiences.
| | 00:37 | It provides reinforcement that
strengthens learning, and shapes our behavior.
| | 00:42 | It motivates us to keep interacting,
keep trying, and keep seeking new and
| | 00:46 | different experiences, and it can even
lead to moments of surprise and delight
| | 00:50 | that bring us pleasure, and a sense of
fun and play with unexpected successes, or
| | 00:55 | confirmation of achievement.
Conclusion
Reviewing the big picture| 00:00 | We've covered many topics, and seen
many examples in this course, but there is
| | 00:04 | still so much more we can discuss.
| | 00:06 | Interaction design is an ever-changing field.
| | 00:09 | Devices, technology, and expectations
evolve, and there will always be a need to
| | 00:13 | craft new interactions, and create
new solutions, and best practices.
| | 00:18 | Even though our designs may change, and
technology advances, our brains still
| | 00:23 | think, process, and
remember information the same way.
| | 00:26 | We learn new behaviors, and we have new
expectations, but we can still rely on
| | 00:31 | the five principles of interaction
design to help us craft better designs, and
| | 00:35 | solutions that make it easy for people
to interact with interfaces and devices,
| | 00:40 | no matter how complex the technology may become.
| | 00:43 | Remember, these five essential
principles all work together in a system.
| | 00:48 | When we perceive the opportunity to
interact, and predict that the outcome is
| | 00:52 | desirable, we will interact.
| | 00:55 | Meaningful feedback helps us
understand and learn from our actions.
| | 00:59 | As we practice and
observe, our understanding grows.
| | 01:03 | When we encounter similar, consistent
interfaces and devices, we are able to
| | 01:07 | transfer what we have
learned, and interact more easily.
| | 01:11 | If we understand how people think, what
they need and expect, how they learn and
| | 01:16 | remember, what motivates them, and how
they react and feel, then we can identify
| | 01:21 | and deliver effective solutions.
| | 01:24 | We still need to remain current and
knowledgeable about technology and design
| | 01:28 | practices, so stay connected and
active in the interaction design community.
| | 01:33 | Keep your skills polished and honed
with books, videos, tutorials, seminars,
| | 01:38 | conferences, and memberships in
professional organizations. And remember,
| | 01:43 | interaction design is not about the
behavior of the interface; it's about
| | 01:48 | the behavior of people.