This is the first I'd heard of the confidence interval criticism, but it makes sense. I'm a fan of the squint test: if you have to squint to see a difference, it probably doesn't exist (assuming small n). I suspect this is just the pendulum swinging, and the journal will eventually return to some numerical metric, because scientists like parameterized results.
b_b and I lament it all the time. If you think about it, it's not a very scientific way of making up your mind about something. What is the difference between p = 0.05 and p = 0.07? What was the experimental design? How are your data distributed, and how confident are you in that? What size and type of effect are you talking about? Simply passing or failing a Student's t-test or ANOVA doesn't carry nearly as much meaning as the weight typically ascribed to it. IMO it's all well and good to provide p values, but not to talk about "statistical significance" unless you have clearly defined the parameters of your definition and made a strong case for why they apply to your experiment (perhaps the threshold was used as an exclusion criterion during the course of experimentation). It's not meant to be a final analysis.
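To make the p = 0.05 vs p = 0.07 point concrete, here's a quick simulation sketch (the effect size, sample size, and use of scipy are my own assumptions for illustration, not anything from the discussion): run the exact same experiment over and over and watch how much the p-value alone bounces around.

```python
# Minimal sketch: identical true effect, identical design, many repeats.
# Effect size and n are hypothetical choices, not from the thread.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.5   # assumed standardized mean difference between groups
n = 30              # assumed per-group sample size
p_values = []

for _ in range(1000):
    control = rng.normal(loc=0.0, scale=1.0, size=n)
    treated = rng.normal(loc=true_effect, scale=1.0, size=n)
    # Student's t-test on one simulated experiment
    t_stat, p = stats.ttest_ind(control, treated)
    p_values.append(p)

p_values = np.array(p_values)
print(f"median p: {np.median(p_values):.3f}")
print(f"fraction with p < 0.05: {np.mean(p_values < 0.05):.2f}")
print(f"fraction with 0.05 <= p < 0.10: {np.mean((p_values >= 0.05) & (p_values < 0.10)):.2f}")
```

The same underlying effect lands on both sides of the 0.05 line across repeats, which is the point: a p of 0.05 versus 0.07 by itself tells you very little without the design, the distribution, and the effect size behind it.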