One of the underlying assumptions of singularity arguments is that technology will not only improve enough to hit an inflection point, beyond which tools improve themselves to the point of something usefully called intelligence and reproduction, but that this is basically inevitable, as long as we don't destroy ourselves first. (Whether the singularity would destroy us is another question.) A final assumption is that sufficiently advanced post-singularity machines will be able to preserve, extend, or replicate themselves by recruiting "dumb" matter far better than current Earth biology can, as we do when we eat and breathe.
If we make the additional assumption that any tool-using intelligence in the universe will incrementally improve its tools, then the same trajectory should hold for any such species, not just ours.
Taking these assumptions as valid, the universe we observe should already be heavily shaped by singularity events. But it is NOT obviously behaving in any way that dumb matter doesn't. I observed in a previous post that singularity arguments, taken to their conclusion, track Bostrom and Fermi: if these are such powerful principles in the evolution of the universe, shouldn't we already be experiencing the consequences?
Even more generally (outside of singularity arguments), shouldn't we expect that, given enough time, most matter and energy will eventually be locked up in replicators, if living things and/or intelligence continue to expand? It's worth emphasizing that all of these arguments are some version of the self-indication assumption (SIA), although the where-are-all-the-singularities argument is a hypothetical SIA, which I am using to argue against the probability of a singularity.
The most likely answer, based on what we know so far, is that there have been no singularities, which in turn makes it less likely than we might otherwise have thought that we will have one. While some version of panspermia seems more and more plausible, the seeding of young worlds with nucleobases and amino acids isn't exactly what people have in mind in these discussions. Indeed, the absence of expanding "life clouds" argues not just against singularities as such but against the indefinite expansion and survival of life. But there are a number of possible counterarguments:
- Entropy wins: matter and energy get locked up in black holes faster than life and/or intelligence can recruit them for their own preservation.
- By the nature of physics, only a very small fraction of matter and energy can be pressed into service as a substrate for life and intelligence.
- Replicators are always unstable processes. This solves Fermi's paradox by making Drake's omega attrition factor much more influential in determining the outcome.
- Most of what we observe is indeed the result of such processes (galaxies, stars, our own solar system?), and we either lack the pattern-recognition skills to see it or are observing only a vanishingly small slice of the possible data. (This one makes for the best science fiction ideas, and is also more analogous to Bostrom than to Fermi.)
- Humans are the only species that uses tools and improves them.
- We're lucky and we're the first, or one of the first, and the expanding sphere of others' computronium hasn't hit us yet.
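The replicator-instability counterargument above can be made concrete with a toy Drake-style calculation. All parameter values here are hypothetical, chosen only for illustration, and I'm using the standard longevity term L as a stand-in for the attrition factor; this is a sketch of the reasoning, not an estimate.

```python
# Toy Drake-style calculation: the expected number of detectable
# civilizations is just the product of the rate and fraction terms,
# so a harsh attrition/longevity term dominates everything else.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of detectable civilizations (product of all terms)."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Identical (hypothetical) assumptions everywhere except longevity L, in years:
stable = drake(R_star=2, f_p=0.5, n_e=1, f_l=0.5, f_i=0.5, f_c=0.25, L=1e6)
unstable = drake(R_star=2, f_p=0.5, n_e=1, f_l=0.5, f_i=0.5, f_c=0.25, L=1e2)

print(stable)    # → 62500.0: a galaxy crowded with civilizations
print(unstable)  # → 6.25: close to nobody, with no other term changed
```

A four-orders-of-magnitude swing in L alone moves the answer from "crowded galaxy" to "effectively empty," which is why unstable replicators would resolve the Fermi question without any other assumption failing.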
If I had to bet, I would bet against the last two.