I fear this debate is taking an odd twist into irrelevancy. While it may be true (indeed unsurprising) that human cognition is most appropriately modelled using concurrency, indeed concurrency of a promiscuous nature, I do not see how this indicates or counterindicates that a concurrent model is the best model for human cognition to use in specifying the execution of a system.
However self-aware we may be, awareness of the means by which we think does not mean we think best using those means. As others have noted, concurrency is hard. It is hard not only because languages do not facilitate concurrent programming (though many do), but because concurrency (or its abstraction, indeterminacy) is hard in itself. It is trivial to write a plausible concurrent program that is broken, and very, very hard to find and diagnose the basis and source of the bug.
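To make that concrete, here is a minimal sketch of my own (in Go, with an invented counter; nothing here comes from the original discussion). It looks plausible, compiles, and will often print 2000, yet it contains a data race that surfaces only intermittently and is miserable to diagnose:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        counter := 0
        var wg sync.WaitGroup
        for i := 0; i < 2; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                // Each goroutine increments 1000 times. The read-modify-write
                // of counter is not atomic, so increments from the two
                // goroutines can interleave and be lost.
                for j := 0; j < 1000; j++ {
                    counter++
                }
            }()
        }
        wg.Wait()
        fmt.Println(counter) // often 2000, sometimes less: a data race
    }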
This is not to say that promiscuous determinacy doesn't also add complexity -- it most certainly does. An example as trivial as swapping two values x and y, which is invariably written using a temporary variable
t := x; x := y; y := t
introduces complexity unrelated to the problem. Indeed, Edsger Dijkstra emphasized the problem of overdetermining code, and therefore favored the language construct of the concurrent assignment, so that the swap can be written thus:
x,y := y,x
leaving the more detailed sequencing to the "system" to sort out. The point here is that such nondeterminacy is well constrained and easily implemented, even in the worst case. The sequencing and protection of shared resources in concurrent programs is far more subtle and problematic.
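As an aside, and only as a sketch of mine rather than part of the argument: some modern languages provide Dijkstra's construct directly. Go, for example, has multiple assignment built in, so the swap carries no programmer-visible temporary and the compiler sorts out the detailed sequencing:

    package main

    import "fmt"

    func main() {
        x, y := 1, 2
        // The multiple assignment leaves the ordering of the two updates
        // to the compiler; the programmer writes no explicit temporary.
        x, y = y, x
        fmt.Println(x, y) // prints: 2 1
    }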
My point is that while it is apparent that overdetermining code sequentially often introduces levels of detail that preclude a more elegant expression of correct code, and sometimes distracts from the best way to articulate it, the expression of concurrency likewise introduces subtle, unstated bugs and requires stating degrees of sequencing that are often harder to express than simply specifying the sequencing in all its gory detail.
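To illustrate what stating those degrees of sequencing costs in practice, here is again a sketch of mine, repairing the racy counter above with one conventional technique (a mutex). The programmer must now say explicitly which operations exclude one another, a detail the plain sequential loop never had to mention:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        counter := 0
        var mu sync.Mutex
        var wg sync.WaitGroup
        for i := 0; i < 2; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < 1000; j++ {
                    // The lock serializes the read-modify-write of the
                    // shared counter; this protection has no counterpart
                    // in the straightforward sequential version.
                    mu.Lock()
                    counter++
                    mu.Unlock()
                }
            }()
        }
        wg.Wait()
        fmt.Println(counter) // now reliably 2000
    }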