Pipelines contain a wealth of information on the performance of our software delivery. In this video, learn how to extract and transform this data to enable better and faster release decisions.
- [Instructor] Delivery pipelines are treasure troves of data for release decision-making, but we don't always make use of them. Typical uses of pipeline information include activity execution time (how long is our build, how long are our acceptance tests) and artifact traceability (which code changes were deployed to production since version X?). Sometimes information is also collected for external auditors: who approved this change, and when? However, there's a lot more data available in our pipeline tooling. If we provide IT and business owners with the data they need, in a format that is familiar and easy to consume, we can drastically reduce the time to make a release decision. Engineering teams, on the other hand, might use historical trends on things like execution time (for example, our acceptance tests take 20% longer to run than six months ago) or overall delivery time (for example, an average user story takes 2 1/2 days from initial commit until production).
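The kind of trend mentioned above can be computed from raw pipeline run data with very little code. The sketch below is a minimal illustration, assuming run records (dates and acceptance-test durations) have already been exported from your pipeline tool; the field names and figures are hypothetical, not from any specific CI product.

```python
# Minimal sketch: compare recent acceptance-test durations against
# those from six months ago. The records below are illustrative;
# in practice they would come from your pipeline tool's API or export.
from datetime import date

runs = [
    {"date": date(2023, 1, 10), "acceptance_test_minutes": 29.5},
    {"date": date(2023, 1, 24), "acceptance_test_minutes": 30.5},
    {"date": date(2023, 7, 5),  "acceptance_test_minutes": 35.5},
    {"date": date(2023, 7, 19), "acceptance_test_minutes": 36.5},
]

cutoff = date(2023, 4, 1)  # splits "six months ago" from "recent" runs
old = [r["acceptance_test_minutes"] for r in runs if r["date"] < cutoff]
new = [r["acceptance_test_minutes"] for r in runs if r["date"] >= cutoff]

old_avg = sum(old) / len(old)
new_avg = sum(new) / len(new)
change_pct = (new_avg - old_avg) / old_avg * 100

print(f"Acceptance tests take {change_pct:.0f}% longer than six months ago")
```

The same pattern works for any pipeline metric: pick a window, average the measurements on each side of it, and report the relative change in a format stakeholders already understand.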
In this course, instructor Manuel Pais shows leaders how to rethink CD in their organization to boost the speed and safety of their delivery workflow. Manuel explains why true continuous delivery is an organizational capability, and why software releases should be treated as business decisions. Plus, learn how to define a single path to production that balances speed and reliability; extract and transform data to enable faster release decisions; improve key metrics for high performance sustainably; and more.