To run a usability test, you recruit representative participants and then ask them to perform a series of realistic tasks with your product while you watch quietly. When they're done, you have an opportunity to ask them questions about their experience. Then you typically give them a gratuity to thank them for their time. You can do this at any stage of development, from early paper sketches, through alpha and beta builds, to live products. You can also watch people working with competitors' products to gain a comparative understanding of their merits.
Your whole team can and should observe. Normally, you put them in a separate room with an audio feed and a view of the participant's screen. Watching test sessions is very educational and gives the team a shared focus. Teams often find it hard to justify having everyone watch the sessions, but the payoff is real: product managers get ideas for new user-centric features, testers gain insight into real-world bugs and additional use cases they'll need to cover, and developers build user empathy.
After your team has run a couple of rounds of user testing, you'll find that everybody starts considering it the best use of their time. There's very little interpretation involved in usability testing: what you see is how people really work. After the sessions, you group the team's combined observations and analyze them to find the biggest customer pain points. Having everyone in the room makes it easy to prioritize fixes based on user need and agree on workable solutions in that same meeting. So, over the course of two days, you can watch five participants working with your product, collect the observations, extract the biggest issues, and agree on how to change the product to remove them.
To learn how to run your own usability tests, check out Chris's in-depth course on the subject, Foundations of UX: Usability Testing.