I've been developing software for a few years without doing much estimation at all, and the estimation I have done has been really vague, e.g. "You know that feature is a lot of work, right?". I've recently been reading more and more from the #noestimates movement on The Twitter, so I thought I'd chime in with my rationale and experience as well.
Yes, but I don't think the lack of estimation contributes to that. Estimation has a bunch of downsides that make eschewing it pretty rational:
You've probably heard Parkinson's Law that "work expands to fill the time available". In my opinion it's a super-cynical way of thinking about people, but I've known many people who believe it. If it seems true in your organization, consider these factors that might be contributing:
If you have trust and the right people, the team will move really fast, especially over the long haul.
Just cut it all immediately. Decide the absolute essentials of the feature, and cut the rest with no mercy. Deliver those essentials first. Then incrementally try to fit in the next most valuable aspects.
This is the only sane way to ensure you're doing the highest value work first. You don't need scheduling -- you just need prioritization. The stuff that gets cut will be the low-priority stuff.
That doesn't mean you're going to get it. If you've been in software development for any length of time, you know that estimates are often wrong, and that you haven't figured out how to make them better. In complex systems, prediction rarely leads to predictability.
Instead of trying to predict, you should be aiming to try to mitigate risk as early and often as possible, by doing the riskiest, least clear, and highest value proposition efforts first in the leanest way possible.
This allows you to incrementally move to the more proven, clear, and next-highest-value efforts over time, and have an effort that observably converges on completion. It's a better shot at predictability, but without prediction.
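The ordering above can be sketched as a simple sort. Everything here is illustrative: the `Effort` fields and the `(risk, value)` sort key are assumptions I'm making to show the idea, not a scoring model the post prescribes.

```python
# Sketch: surface the riskiest, highest-value, least-clear work first,
# so the remaining backlog converges toward proven, well-understood items.
from dataclasses import dataclass

@dataclass
class Effort:
    name: str
    value: int  # relative business value; higher = more valuable (assumed scale)
    risk: int   # relative uncertainty; higher = less understood (assumed scale)

def prioritize(backlog):
    """Order efforts so risky, valuable items come first."""
    return sorted(backlog, key=lambda e: (e.risk, e.value), reverse=True)

backlog = [
    Effort("polish settings page", value=1, risk=1),
    Effort("new payment provider", value=5, risk=5),
    Effort("search rewrite", value=4, risk=3),
]

for effort in prioritize(backlog):
    print(effort.name)
```

Working top-down through a list like this, the stuff that gets cut when time runs out is naturally the low-priority, low-risk tail -- no schedule required.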
This is the worst of the arguments, in my opinion. I love employing best practices when I don't have a better way, but otherwise they're the enemy of continuous improvement. Estimates on their own deliver no user value. Any effort that isn't yielding comparable results should be axed. Any goal that can be better served by other methods should just be solved by those methods.
Let's flip the script.
As an engineer, I've yet to meet anyone in management who is willing to give me estimates on the expected value of a feature (preferably in dollars, but I'll take whatever the proxy/vanity metric of the day is too!). This would be super-valuable to ensure we're prioritizing the most impactful stuff first, right? And we could check the validity of these estimates after the feature is released, right?
I think this would be a hilarious way to turn the tables and see how management does with estimation of complex systems, but in the end I think it would be similarly fruitless for improving predictability. Their estimates would be just as wrong, just as often. They'd be just as nervous about being confronted on their accuracy too.
Estimation just doesn't provide us with much predictability considering its cost.
That doesn't mean that no one should ever do it -- it will definitely make sense in some cases -- but I personally think it's overused, largely ineffective, and often destructive in most of the cases where it's used.