Bad Astronomy | Measurements of the Universe's expansion show a large discrepancy

There is a problem with the Universe.

Or, possibly, there is a problem with the way we observe it. Either way, something is fishy.

In short, the Universe is expanding. There are a lot of different ways to measure this expansion. The good news is that all these methods get approximately the same number for it. The bad news is that they don't get exactly the same number. One group of methods gets one number, and another group gets a different one.

This discrepancy has existed for a while and is not going away. In fact, it is getting worse (as astronomers like to say, there is a growing tension between the methods). The big difference between the two groups is that one set of methods looks at relatively nearby things in the Universe, and the other looks at very distant ones. Either we are doing something wrong, or the Universe is doing something different far away than it is near here.

A newly published paper uses a clever method to measure the expansion using nearby galaxies, and what it finds is in line with the other “nearby object” methods. Which may or may not help.

OK, backing up … we’ve known for a century or so that the Universe is expanding. We see galaxies moving away from us, and the farther away a galaxy is, the faster it appears to move. As far as we can tell, there is a tight relationship between a galaxy's distance and how fast it seems to move away. So, say, a galaxy 1 megaparsec* away (abbreviated Mpc) might move away from us at 70 kilometers per second, and one twice as far (2 Mpc) moves twice as fast (140 km/sec).

This ratio seems to hold over long distances, so we call it the Hubble constant, or H0 (pronounced “H naught”), after Edwin Hubble, who was one of the first to propose the idea. It is measured in the odd units of kilometers per second per megaparsec (that is, speed per distance: something moves faster the farther away it is).
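If you like to play with numbers, here is a minimal sketch of that distance–velocity relation in Python. It just multiplies distance by the illustrative 70 km/sec/Mpc value used above; the numbers are examples from the text, not measurements.

```python
# A minimal sketch of the distance-velocity relation described above.
# H0 = 70 km/s/Mpc is the illustrative value from the text, not a measurement.
H0 = 70.0  # Hubble constant in km/s per megaparsec

def recession_velocity(distance_mpc: float) -> float:
    """Apparent recession velocity (km/s) of a galaxy at the given distance (Mpc)."""
    return H0 * distance_mpc

for d in (1, 2, 10):
    print(f"{d} Mpc  ->  {recession_velocity(d):.0f} km/sec")
```

Run it and you get 70, 140, and 700 km/sec for galaxies 1, 2, and 10 Mpc away: twice as far, twice as fast.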

Methods that look at closer objects, such as stars in nearby galaxies, exploding stars, and the like, get H0 to be about 73 km/sec/Mpc. But methods that use more distant things, like the cosmic microwave background and baryon acoustic oscillations, get a smaller number, more like 68 km/sec/Mpc.

Those are close, but not the same. And given that the methods in each group all seem internally consistent, this is a problem. What's going on?

The new paper uses an interesting method called surface brightness fluctuations. It's a fancy name, but the idea behind it is actually pretty intuitive.

Imagine you are standing at the edge of a forest, right in front of a tree. Because you are so close, you see only that one tree in your field of view. Back up a little and you can see more trees. Back up more and you see even more.

Same with galaxies. Observe a nearby one with a telescope. In a given pixel of your camera, you might see ten stars, all blurred together in that single pixel. Just because of statistics, another pixel might catch 15 (making it 50% brighter than the first pixel), and another 5 (half as bright as the first).

Now look at a galaxy that is the same in every respect, but twice as far away. In one pixel you might see 20 stars, and in others you might see 27 and 13 (a difference of ~35%). At 10 times the distance you might see 120, 105, and 90 (a difference of about 10%). Keep in mind I'm wildly oversimplifying here and just making up numbers as an example. The idea is that the farther away a galaxy is, the smoother its brightness distribution looks (the differences between pixels are smaller compared to the total in each pixel). Not only that, it's smoother in a way you can measure and assign a number to.
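You can see that smoothing for yourself with a quick simulation. The sketch below is purely illustrative: it assumes the average number of stars blended into a fixed camera pixel grows with distance (roughly as distance squared, since the pixel covers more of the galaxy), and it draws the counts at random. The pixel-to-pixel fluctuation then shrinks as the square root of the star count per pixel, which is the statistical effect the method exploits.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: the "same" galaxy observed at 1x, 2x, and 10x the distance.
# A fixed camera pixel covers more of the galaxy the farther away it is,
# so the average number of stars blended into each pixel goes up
# (roughly as distance squared for a fixed pixel size).
for distance, mean_stars in [(1, 10), (2, 40), (10, 1000)]:
    pixels = rng.poisson(lam=mean_stars, size=100_000)   # star count in each pixel
    bumpiness = pixels.std() / pixels.mean()              # pixel-to-pixel fluctuation
    print(f"{distance:>2}x distance: ~{mean_stars:>4} stars/pixel, "
          f"fractional fluctuation ~ {bumpiness:.1%}")
```

The fractional fluctuation drops from roughly 30% to 16% to 3% as the galaxy gets farther away: the image gets smoother, and by how much tells you the distance.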

In reality it's more complicated than that. If a galaxy is busy making stars in one section, that throws the numbers off, so it's best to look at elliptical galaxies, which haven't made new stars for billions of years. The galaxy must also be close enough that you can get good statistics, which limits this to galaxies perhaps 300 million light-years away or closer. You also need to account for dust, background galaxies, and star clusters in your images, and for how galaxies have more stars toward their centers, and … and … and …

But all of this is understood and fairly straightforward to correct for.

When they did all this, the number they got for H0 was (drum roll …) 73.3 km/sec/Mpc (with an uncertainty of roughly ±2 km/sec/Mpc), right in line with the other nearby methods and quite different from the group using distant methods.

In a way that's expected, but it again lends credence to the idea that we are missing something important here.

All the methods have their issues, but their uncertainties are fairly small. Either we are seriously underestimating those uncertainties (always possible, but it doesn't look likely at the moment), or the Universe is behaving in a way we did not expect.

If I were to bet, I’d go with the latter.

Why? Because this has happened before. The Universe is complicated. We have known since the 1990s that the expansion has deviated from a constant rate. Astronomers saw that distant exploding stars were consistently farther away than a simple measure indicated, leading them to conclude that the Universe's expansion is speeding up, which in turn led to the discovery of dark energy, the mysterious entity that accelerates the universal expansion.

When we look at very distant objects, we see them as they were in the past, when the Universe was younger. If the rate of expansion of the Universe was different then (say, 12 to 13.8 billion years ago) than it is now (within the last billion years or so), we can get two different values for H0. Or maybe different parts of the Universe are expanding at different rates.

If the expansion rate has changed, that has profound implications. It means the Universe is not the age we think it is (we use the expansion rate to estimate its age), which means it is a different size, which means the time it takes for things to happen is different. It means that the physical processes that took place in the early Universe happened at different times, and it may mean there are other processes at work that affect the expansion rate.
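As a rough back-of-the-envelope (the real age calculation folds in matter and dark energy, so this is only a ballpark, not how cosmologists actually do it), you can see how a different H0 shifts the age scale by computing the so-called Hubble time, 1/H0:

```python
# Back-of-the-envelope only: the "Hubble time" 1/H0 sets the rough age scale
# of the Universe. The real age calculation folds in matter and dark energy,
# but this shows how shifting H0 shifts the answer.
KM_PER_MPC = 3.0857e19      # kilometers in one megaparsec
SEC_PER_GYR = 3.156e16      # seconds in one billion years

for H0 in (68.0, 73.3):     # km/s/Mpc: the two camps' values from the text
    hubble_time = KM_PER_MPC / H0 / SEC_PER_GYR
    print(f"H0 = {H0} km/sec/Mpc  ->  1/H0 ~ {hubble_time:.1f} billion years")
```

The difference between the two camps' numbers works out to something like a billion years on that scale, which is why cosmologists care so much about a few km/sec/Mpc.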

So yes, it's a mess. Either we don't understand well enough how the Universe behaves, or we're not measuring it correctly. Either way, it's a huge pain. And we just don't know which it is.

This new work makes it look even more like the discrepancy is real, and the Universe itself is to blame. But it's not conclusive. We need to keep at this, keep beating down the uncertainties, try new methods, and hope that at some point we'll have enough data to point at something and say, “AHA!”

It will be an interesting day. Our understanding of the cosmos will take a big leap when that happens, and then cosmologists will have to find something else to argue about. Which they will. It's a big place, this Universe, and there's plenty in it to wonder about.


* A parsec is a unit of length equal to 3.26 light-years (or 1/12 of a Kessel). I know, it's a strange unit, but it has a lot of historical significance and it's tied to many of the ways we measure distance. Astronomers looking at galaxies like to use the megaparsec as a distance unit, where 1 Mpc is 3.26 million light-years. That's slightly longer than the distance between us and the Andromeda galaxy.
