From the course: Advanced SQL: Logical Query Processing, Part 1

Using the code files - SQL Tutorial

- [Instructor] Throughout this course, I will demonstrate and reinforce the discussion topics using live queries. You can just watch or follow along with me, but even better, pause the video occasionally and experiment for yourself by modifying these queries and coming up with your own, to make sure you fully understand what's happening under the covers. I've also included a few mini challenges where I will explicitly ask you to pause the video and answer a short puzzle. Chapters two and five each include a challenge video and my detailed suggested solution walkthrough. All the code files can be downloaded from the course's homepage and use SQL Server syntax unless otherwise noted. These files will be hosted on GitHub as well, and by now you know where you'll find the links.

So, that's about it for the introduction. It's time to dive into the rabbit hole of SQL query logical processing, from which there's no going back. But before we dive in, let's quickly see where we're heading by following the query processing order.

Every query begins with a FROM clause, which constructs the query's dataset; this is the only data that will be available for the rest of the query. After the FROM clause produces a single dataset, it is passed on to the WHERE clause as a virtual table, and the WHERE clause uses binary and ternary logical predicates to filter out individual rows. The filtered set is then passed on to the GROUP BY clause, where it is grouped, and from there to the HAVING clause, where entire groups can be filtered. The grouped and filtered set is then passed on to the SELECT clause, which evaluates every expression for each row. Then it is passed on to the ORDER BY clause to apply presentation ordering, and finally, the OFFSET-FETCH clause, also known as LIMIT OFFSET, determines which and how many rows will be returned. So, without further ado, let's head on to the first step: constructing the source dataset.
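The processing order described above can be sketched by annotating a sample query with step numbers. This is a minimal illustration in SQL Server syntax; the table and column names (Sales.Orders, custid, shipcountry) are hypothetical placeholders, not taken from the course files:

```sql
-- Clauses are numbered in LOGICAL processing order,
-- which differs from the order in which they are written.
SELECT   custid, COUNT(*) AS numorders     -- 5. evaluate expressions for each group
FROM     Sales.Orders                      -- 1. construct the source dataset
WHERE    shipcountry = N'USA'              -- 2. filter individual rows
GROUP BY custid                            -- 3. group the filtered rows
HAVING   COUNT(*) > 10                     -- 4. filter entire groups
ORDER BY numorders DESC                    -- 6. apply presentation ordering
OFFSET 0 ROWS FETCH NEXT 5 ROWS ONLY;      -- 7. decide which and how many rows return
```

Note that this is why, for example, a column alias defined in SELECT (step 5) can be referenced in ORDER BY (step 6) but not in WHERE (step 2).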
