Suppose you have a very large dataset - far too large to hold in memory - with duplicate entries. You want to know how many duplicate entries there are, but your data isn't sorted, and it's big enough that sorting and counting is impractical. How do you estimate how many unique entries the dataset contains?
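One standard family of answers is a streaming "sketch" that hashes each entry and keeps only a small summary. As an illustration (this is an assumption about the approach, not necessarily the one this article develops), here is a minimal k-minimum-values (KMV) estimator: hash every item to a number in [0, 1), remember only the k smallest hashes, and estimate the distinct count from how tightly packed those minima are. Memory stays O(k) no matter how large the dataset is.

```python
import hashlib
import heapq

def estimate_distinct(items, k=256):
    """Estimate the number of distinct items in a stream with the
    k-minimum-values (KMV) sketch: hash each item into [0, 1), keep
    the k smallest hashes, and estimate cardinality as
    (k - 1) / (k-th smallest hash). Uses O(k) memory."""
    heap = []        # max-heap via negation: tracks the k smallest hashes
    in_heap = set()  # hash values currently kept, to skip duplicates
    for item in items:
        digest = hashlib.sha1(str(item).encode()).digest()
        x = int.from_bytes(digest[:8], "big") / 2**64  # normalize to [0, 1)
        if x in in_heap:
            continue  # duplicate entry: hashes identically, changes nothing
        if len(heap) < k:
            heapq.heappush(heap, -x)
            in_heap.add(x)
        elif x < -heap[0]:  # smaller than the current k-th smallest
            evicted = -heapq.heappushpop(heap, -x)
            in_heap.discard(evicted)
            in_heap.add(x)
    if len(heap) < k:
        return len(heap)  # fewer than k distinct hashes seen: exact count
    return int((k - 1) / -heap[0])
```

With k = 256 the relative error is roughly 1/sqrt(k), about 6%, and duplicates cost nothing extra because identical entries hash identically. Once the unique count is estimated, the duplicate count is simply the total size minus the estimate.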