- Understanding why forms matter
- Deciding on the form length and structure
- Adding tabs to a form
- Creating required fields
- Adding input masks
- Creating selection-dependent inputs and actions
- Displaying success and error messages
- Adding inline validation
- Understanding gradual engagement
- Enabling touch and audio input on devices
Skill level: appropriate for all
Not all devices are created equal. The different capabilities of this emerging class of devices give us new ways to capture input from people. If you search for mobile input, you'll see that the general rule of thumb is to avoid it, that is, to limit the use of text input and forms on mobile devices. Across the board, you see the same story: this can be hard, time-consuming, and frustrating for people, so we shouldn't do it. The data, however, tells a different story. Looking at SMS use, we see that people are using pretty rudimentary devices to send a lot of information.
In fact, in the United States, four billion text messages are sent every day. More surprisingly, one in every three teenagers sends over 100 text messages a day, and it's not just the teens: 72% of adults are sending and receiving texts, and feature phone and smartphone users alike are involved. So there is a lot of sharing and a lot of content being created on phones despite their limitations. Why, then, should we limit people just because they are on a smaller screen or a less capable device? In fact, since people have mobile devices with them all the time, a new opportunity is created for capturing input.
These devices are with us wherever we are, so any time inspiration strikes, we can capture input. Looking at where people actually use their smartphones illustrates this opportunity: 84% of smartphone users use them at home, at miscellaneous times during the day, while waiting in line, and at work. All of these places are great new opportunities for capturing input. So let's turn the conventional wisdom on its head: rather than limiting the use of input on mobile devices, let's encourage it.
Because people can access their mobile phone anytime and anywhere inspiration strikes, new ways to collaborate, create, share, and comment become possible. Using our designs to encourage that instead of limiting it is, I think, the way to go. So how do we do that, and what are some of the new capabilities at our disposal? First of all, touch. Touch is a capability making its way into most portable devices. Instead of using just a hardware keyboard or a cursor to interact with information, we can use our fingers to take care of lots of things.
Here is an example from Yahoo! Search: to find a restaurant near me, I can type things in, or I can just draw a circle on a map. Once that circle is drawn with my finger, the app comes back with the restaurants in that area. I can also draw a line and get similar information. So the only capability I am using to provide input here is my fingers. Similarly, on Google's Android, I can draw a letter to run a search, and change that search with another letter. The touch gestures I am inputting drive the search instead of typing on a virtual keyboard.
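None of the apps above publish their gesture-recognition code, but the basic idea behind distinguishing, say, a tap from a directional swipe is simple enough to sketch. The function name and the pixel thresholds below are my own illustrative choices, not anything from Yahoo! or Google; real apps tune these per device and screen density.

```javascript
// Classify a single-finger gesture from its start and end coordinates.
// The 10-pixel threshold is illustrative, not a platform standard.
function classifyGesture(startX, startY, endX, endY) {
  const dx = endX - startX;
  const dy = endY - startY;
  const distance = Math.sqrt(dx * dx + dy * dy);

  if (distance < 10) return "tap"; // barely moved: treat as a tap

  // The dominant axis decides the swipe direction.
  if (Math.abs(dx) > Math.abs(dy)) {
    return dx > 0 ? "swipe-right" : "swipe-left";
  }
  return dy > 0 ? "swipe-down" : "swipe-up";
}
```

In a browser, the start and end coordinates would come from `touchstart` and `touchend` events; recognizing a drawn circle or letter takes a full path of intermediate points rather than just two, but the same classify-from-coordinates approach applies.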
Touch gestures offer a rich palette for providing input. I am not going to get into all the specifics of what's available to you, but I will reference the Touch Gesture Reference Guide that I created a while ago with a few collaborators. You can jump over to lukew.com/touch and find all the touch gestures that are available to you and how they're traditionally used. Next up is voice. While we can use our fingers, our keyboards, or a number of other indirect device capabilities, voice lets us simply speak our input.
What do I mean by that? Well, let's look at the Google Nexus One phone: anywhere there is an input field, you can simply tap the microphone and speak, in this case composing a message. Google also allows you to simply swipe your finger across the keyboard to activate voice mode. So flick across, and speak. This is great for the one-handed use we find ourselves in many times throughout the day: you're walking down the street or in a crowded environment, so you just swipe your thumb and move into voice-activated mode.
You can also see this working on Amazon. Here, to run a search, once again we just talk. In fact, the possibilities are endless: anywhere there is an input field, our voices can provide the kind of data we traditionally reserve for keyboards. You want to update your status on Facebook? Same deal, just talk. Touch and voice are two capabilities we have at our disposal. Another one is location. Traditionally, on desktops or laptops, if we're dealing with a location-based task, we turn to our keyboard; that is, we type in the fact that we're looking for restaurants near San Jose.
The local review service Yelp will give us a listing of what's in San Jose. That can span a pretty big area, and as a result there are a number of filters for cities, distance, features, pricing, and category. In other words, we've got to do a lot of work. Contrast this experience with using Yelp on an iPhone or iPad. Here, the location of the device is known, so all I need to do is tap Restaurants and I instantly get the places that are closest to me. Now I can determine where to eat within a radius of a mile or less, very different from the location information I was given on the desktop.
Similarly, location can be used to give us a whole bunch of different data points, again with no real input involved. Here, using the app Where, I can tell that the cheapest gas near me is $2.96, what the weather is where I am, and see local news, reviews of restaurants nearby, traffic information, local events, movies, and more. Again, this data is coming to me: the device knows my location and can simply use it as a filter for all the information it has. Knowing a bit about how location is detected on these devices can help us.
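Using the device's position as a filter, as these apps do, boils down to computing the distance from the user to each candidate item and keeping only the close ones. A minimal sketch, with a standard haversine great-circle distance (the place list and function names are mine, not from Yelp or Where):

```javascript
// Great-circle (haversine) distance in kilometers between two lat/lng points.
function distanceKm(lat1, lng1, lat2, lng2) {
  const toRad = (d) => (d * Math.PI) / 180;
  const R = 6371; // mean Earth radius in km
  const dLat = toRad(lat2 - lat1);
  const dLng = toRad(lng2 - lng1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Keep only the places within radiusKm of the device, nearest first.
function nearby(places, lat, lng, radiusKm) {
  return places
    .map((p) => ({ ...p, km: distanceKm(lat, lng, p.lat, p.lng) }))
    .filter((p) => p.km <= radiusKm)
    .sort((a, b) => a.km - b.km);
}
```

In a real app, the `lat` and `lng` arguments would come from the device's location fix (in the browser, `navigator.geolocation.getCurrentPosition`), and the user never types anything.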
Generally, GPS is the most accurate, down to 10 meters, but it has a lot of problems working indoors and can take quite a bit of time to get a fix. Wi-Fi beacons are much quicker and don't drain battery life the same way; however, they are only accurate down to about 50 meters. Going further, we can rely on cell tower triangulation or single cell towers. As you can tell, accuracy drops the further we move down this list. On the desktop or laptop, all we have is the IP address, and with IP detection we can only be 99% confident that someone is within a particular country, which doesn't give us much to work with.
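The hierarchy above can be thought of as arbitration: each source reports a fix with an estimated error radius, and the device prefers the tightest one available. A hypothetical sketch (the object shape and function name are mine; real platforms do this arbitration internally and just expose the winning fix, e.g. as `coords.accuracy` in the browser Geolocation API):

```javascript
// Each candidate fix carries an estimated error radius in meters, roughly
// matching the hierarchy above: GPS ~10 m, Wi-Fi ~50 m, cell tower ~1000+ m.
// Returns the fix with the smallest error radius, or null if none responded.
function bestFix(fixes) {
  return fixes.reduce(
    (best, f) => (best === null || f.accuracyM < best.accuracyM ? f : best),
    null
  );
}
```

This is also why location feels instant outdoors and sluggish indoors: when GPS can't get a fix, the device silently falls back to the coarser Wi-Fi or cell readings.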
We can combine location detection with another device capability: orientation. We not only know where you are on the map, we actually know the direction you are facing. This allows us to do some very interesting things. In fact, you can look at the world like this: using your current position, we can bring in the most relevant digital information for you. Let me restate that: we know where you are in the world and the direction you are facing, and based on that, we can pull in the digital information that's most appropriate for you. That's pretty powerful stuff, and it's actually available today.
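Combining position and orientation amounts to two standard calculations: the compass bearing from the user to each point of interest, and a check of whether that bearing falls inside the arc the device is facing. A minimal sketch, assuming a compass heading in degrees clockwise from north (the function names are mine, not any vendor's API):

```javascript
// Bearing from the device to a point of interest, in degrees clockwise
// from north, using the standard great-circle bearing formula.
function bearingTo(lat1, lng1, lat2, lng2) {
  const toRad = (d) => (d * Math.PI) / 180;
  const dLng = toRad(lng2 - lng1);
  const y = Math.sin(dLng) * Math.cos(toRad(lat2));
  const x =
    Math.cos(toRad(lat1)) * Math.sin(toRad(lat2)) -
    Math.sin(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.cos(dLng);
  return ((Math.atan2(y, x) * 180) / Math.PI + 360) % 360;
}

// Is the point inside the camera's horizontal field of view, given the
// compass heading the device is facing? Handles the 359°/0° wrap-around.
function inView(headingDeg, bearingDeg, fovDeg) {
  let diff = Math.abs(headingDeg - bearingDeg) % 360;
  if (diff > 180) diff = 360 - diff;
  return diff <= fovDeg / 2;
}
```

An augmented-reality overlay like the one described next simply runs every nearby place through a check like `inView` as the heading changes, drawing labels only for the places you are pointing at.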
Looking at the location service Yelp that we saw earlier, we can see it giving us information about local restaurants and services right in our field of view. Here, just point your smartphone camera in any direction you please, and you'll get the nearest services, what they are, and how they're rated. Yelp originally launched this feature as an Easter egg; in fact, they didn't think it would get a lot of traction, but when people discovered it, Yelp saw a sustained 40-50% boost. According to Yelp CEO Jeremy Stoppelman, this was really beyond their wildest imaginations.
So there is clearly something here. While these interfaces aren't there yet, the idea of relevant information based on your current position and orientation in the world is very powerful. Again, no input is required; just pull out your device and point it in the direction you feel is most appropriate. We can also use images as input. Looking at an application like ShopSavvy, we point it at the barcode of, say, a book; it scans the barcode and finds the book we're looking at. It will tell us how much it costs on the web and how much it costs at local stores.
In fact, it will give us a full listing of all the places we can purchase it online, and all the reviews of the book online, which we can filter from best to worst. We can also look at where we can get this book nearest to us and the prices there. If we'd like, we can look at that on a map and see how long it will take us to actually get the product we're looking at. Or you don't even need to scan a barcode: using SnapTell, just point your camera at a product, take a photo, and it will identify it. Once again, we identify the same Designing Web Interactions book. Google Goggles takes this even further and allows you to use images for all types of input.
So you can point Google Goggles at a book, CD, or movie and it will identify it and allow you to purchase it online or even see a sample. You can point it at a wine label and it will identify that too and give you additional information. You can point it at a business card, and using OCR, it will scan the contents of the card and add it to your address book. Works of art are also fair game: just point Google Goggles at a painting you're looking at and it will identify it for you. You can also point your camera at a landmark; using your location and the image, Google Goggles will identify what you're looking at.
Pointing the same application at foreign text will actually translate it for you, so your days of ordering something mysterious off a menu in a foreign country are behind you. Once again, we see Google pushing the boundaries. With Google Maps, they allow people to interact with QR codes: a popular location from Google Places can place a QR code in its window, and you just point your camera phone app at it. It will scan the code and tell you the restaurant you're looking at, along with its phone number, average reviews, details, menu, and whatever other information it has online.
This idea of interacting with a physical place simply by capturing an image of it is pretty interesting. QR codes, though, aren't limited to physical locations; in fact, Facebook is experimenting with putting QR codes on profile pages. The bottom line is that there are a lot of modern device capabilities that allow us to capture input in new and very compelling ways. We've talked a little bit about multi-touch, location detection, device positioning, and orientation, but as you can see in this list, there's a large number I haven't even touched on yet. Who knows what ways of capturing input these capabilities will enable; only the future will tell.
Device capabilities give us new ways to rethink input by taking advantage of things like location detection, multi-touch sensors, integrated video cameras, and audio. The ways we can use these capabilities to capture input can move us well beyond web forms. Instead of filling in input fields, just start talking to your device. Instead of filling in a whole form, just point your camera at something. This really revolutionizes the ways we can get input and moves us well beyond static web forms, which I, for one, think is very exciting.