As much as I hate to play devil's advocate here, I don't think that paper backs up all the claims made by the Cosmos author. Here's why:

1. The result just shows that certain QMC problems take exponential time/resources to simulate on a classical machine.* That means that if we are living in a simulation, the universe simulating us would have to be exponentially larger than our universe, which does suggest that Musk's argument that it's highly likely we live in a simulation is flawed. But it does not show that it's impossible for us to live in a simulation.

2. These results only apply to classical algorithms. One of the big draws of quantum computing is, well, efficient simulation of quantum physics! So this paper doesn't eliminate the possibility that we live in a simulation running on a quantum computer.**

In short: simulation isn't impossible, and math/science has yet to establish that simulation must even be hard. Right now we don't know an efficient simulation technique, but we also don't know that one can't exist.

* But is the problem itself provably exponential, or is it just that the only known algorithms for it take exponential time? Some reading leads me to believe it's "merely" NP-hard, and if it's also in NP (i.e., NP-complete), then efficient (read: polynomial-time) simulation on a classical machine is possible if P = NP.

** Going off the assumption that this problem is NP-hard, efficient polynomial-time quantum simulation may not be a foregone conclusion either. On the other hand, if efficient simulation is only possible on a quantum machine, so that any universe simulating us is running on quantum hardware, what does that say about the "host" universe's physics?
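To put a rough number on "exponentially larger": a brute-force classical simulation has to track one complex amplitude per basis state, and the number of basis states doubles with every two-level degree of freedom you add. A quick sketch (brute-force state-vector bookkeeping, not the paper's QMC setup):

```python
# Back-of-the-envelope cost of brute-force (state-vector) classical simulation.
# Illustration only; this is not the QMC construction discussed in the paper.

def state_vector_bytes(n_spins: int, bytes_per_amplitude: int = 16) -> float:
    """Memory needed to store all 2^n complex amplitudes (16 bytes per complex128)."""
    return bytes_per_amplitude * 2.0 ** n_spins

for n in (10, 30, 50, 100, 300):
    gib = state_vector_bytes(n) / 2.0 ** 30
    print(f"{n:>3} spins -> {gib:.3g} GiB of amplitudes")

# ~50 spins already needs petabytes; ~300 spins needs more bytes than there
# are atoms in the observable universe (~10^80).
```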
The fact that it requires exponential resources to calculate a linear relationship suggests that size hits an asymptote, not that better efficiency would solve the problem. You can't clear the discontinuity out of a tangent curve and still have trigonometry. Division by zero isn't a hole you can fill through optimism. It is the abyss staring back at thee.
What I'm saying is that, if this problem is NP-hard, then we don't know that that exponential relationship has to be there. It's possible that there's an algorithm that simulates this linear relationship in linear time. Now, between you and me, I would bet money that no such algorithm exists, but I'm playing devil's advocate here!

Want to know something fun? If that exponential relationship does hold, then the size of the simulated universe that 'fits' inside the 'initial' universe is proportional to the log of the host universe's size, and the size of the simulated universe that fits inside that one is proportional to the log of its host's size...

...which for all practical purposes means you get at most about five levels of nested simulation. Probably a lot less. Thinking about functions that grow without bound but do it so slowly that they might as well not is fun!
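Here's a quick sketch of that nesting argument. The ~10^122-bit figure for the host universe is my own ballpark (roughly a holographic-bound guess), not anything from the paper; the point is how fast iterated logs collapse:

```python
import math

# How many nested simulations fit if each level can only be ~log2 the size of its host?
# The starting size (~1e122 bits) is an assumed ballpark, not a number from the paper.

def nesting_levels(host_bits: float, min_bits: float = 1.0) -> int:
    levels = 0
    size = host_bits
    while size > min_bits:
        size = math.log2(size)   # each simulated universe is ~log2 of its host
        levels += 1
    return levels

print(nesting_levels(1e122))  # -> 5: 1e122 -> ~405 -> ~8.7 -> ~3.1 -> ~1.6 -> ~0.7
```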
This is not my field and I've been plenty wrong before, but this:

| Establishing an obstruction to a classical simulation is a rather ill-defined task. A related, yet more concrete, goal is to find an obstruction to an efficient quantum Monte Carlo (QMC), which is one of the chief numerical workhorses in the field. In the QMC, one considers the Euclidean (or thermal) partition function of a quantum system in d dimensions as a statistical mechanical (SM) model in d + 1 dimensions with the extra dimension being imaginary time (5). If this can be carried out such that the resulting SM model has Boltzmann weights that are local and non-negative, then efficient QMC sampling is possible (6).

Indicates to me that the method is not "can we do this with our current models" but "if this model can be done, it indicates that the approach is physically possible." Do you read that differently?

Their conclusion:

| Although QMC offers essentially numerically exact solutions for a wide variety of many-body quantum problems, in the case of aforementioned systems, the Boltzmann weights seem to be unavoidably negative or complex. Notably, signs and phases in the Euclidean partition function can sometimes be a matter of representation. Having a "sign problem" thus means that no local transformation that removes the signs and phases is possible. Notably, the definition of a sign problem must involve the notion of locality [see also the study of Hastings (7)]. By performing a nonlocal transformation on the physical degrees of freedom, one can always diagonalize the Hamiltonian and obtain a classical partition function. However, such a transformation requires computational resources that scale exponentially with the system size. Therefore, for a meaningful definition of the sign problem, one has to add an additional requirement, that is, that the degrees of freedom σ that enter the Boltzmann weights must be expressed as local combinations of the physical ones or, more precisely, that σ are given by finite depth local quantum circuits acting on the physical degrees of freedom.

In other words, the argument isn't "the way we do the computation doesn't scale," the argument is "we can tell we're working on an nth order model of an n^2th order problem, which indicates that any interpretation of the problem leads to kaboom."

They do give you kinda-sorta an out:

| For the sake of clarity in this work, we do not address the possibility of nonlocal approaches to QMC, such as determinant QMC or cluster algorithms (8–10).

Which, by my read, means "we didn't also club it down by eliminating it through this other approach." However, if "determinant QMC" disproved their hypothesis, I expect they would have addressed it.

But then, I went all the way to Diff EQ. I'm well out of my depth.
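If it helps make "the Boltzmann weights seem to be unavoidably negative or complex" concrete, here's a toy sketch of why signed weights wreck Monte Carlo sampling (a generic reweighting illustration, not the paper's model): you sample with the absolute weights, carry the sign along as part of the observable, and divide by the average sign; once the average sign is tiny, the estimate drowns in statistical noise.

```python
import numpy as np

# Toy sign problem: estimate <O> = sum(w*O)/sum(w) by sampling with |w| and
# reweighting by the sign, <O> = <O*s>_{|w|} / <s>_{|w|}.
# In real QMC the average sign decays exponentially with system size and inverse
# temperature, so the statistical error (~ 1/(<s>*sqrt(N))) blows up exponentially.
# Generic illustration only -- not the model in the paper.

rng = np.random.default_rng(0)
n_samples = 100_000

for p_minus in (0.0, 0.25, 0.45, 0.499):       # fraction of negative-weight configs
    signs = rng.choice([1.0, -1.0], size=n_samples, p=[1 - p_minus, p_minus])
    obs = rng.normal(loc=1.0, scale=1.0, size=n_samples)   # some observable O
    avg_sign = signs.mean()
    estimate = (obs * signs).mean() / avg_sign
    print(f"<sign>={avg_sign:+.3f}  estimate of <O>={estimate:+.3f}")
```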
I am also kind of out of my depth here (Devac, care to chime in if what follows is way off track?), but to my knowledge the interesting bit is here:

| Having a "sign problem" thus means that no local transformation that removes the signs and phases is possible. Notably, the definition of a sign problem must involve the notion of locality [see also the study of Hastings (7)]. By performing a nonlocal transformation on the physical degrees of freedom, one can always diagonalize the Hamiltonian and obtain a classical partition function. However, such a transformation requires computational resources that scale exponentially with the system size.

The problem of performing a nonlocal transformation is in NP: if you knew the exact set of steps to take when performing the transformation, you could do it in polynomial time --- but nobody knows whether or not you can compute those steps in polynomial time. What we have now is an exponential-time algorithm for finding them --- in this case, diagonalizing the Hamiltonian.
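For a feel for why "diagonalizing the Hamiltonian" is the exponential-time route, here's a minimal exact-diagonalization sketch. The transverse-field Ising chain is my own choice of toy model (not one of the systems in the paper): the Hamiltonian of n spins is a 2^n x 2^n matrix, so just storing it costs ~4^n and dense diagonalization costs ~8^n operations.

```python
import numpy as np

# Exact diagonalization of a small transverse-field Ising chain:
#   H = -J * sum_i Z_i Z_{i+1} - h * sum_i X_i
# Toy model of my choosing; the point is that the matrix is 2^n x 2^n,
# which is the exponential wall the quoted passage refers to.

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def op_on_site(op: np.ndarray, site: int, n: int) -> np.ndarray:
    """Embed a single-site operator into the full 2^n-dimensional space."""
    out = np.array([[1.0]])
    for i in range(n):
        out = np.kron(out, op if i == site else I2)
    return out

def tfim_hamiltonian(n: int, J: float = 1.0, h: float = 1.0) -> np.ndarray:
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J * op_on_site(Z, i, n) @ op_on_site(Z, i + 1, n)
    for i in range(n):
        H -= h * op_on_site(X, i, n)
    return H

for n in range(2, 11, 2):
    energies = np.linalg.eigvalsh(tfim_hamiltonian(n))
    print(f"n={n:2d}: matrix is {2**n}x{2**n}, ground-state energy {energies[0]:.4f}")
```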